Application of the multialternative approach to active neural network models

The article deals with the construction of intelligent systems based on artificial neuron networks. We discuss the basic discrepancies between artificial neuron networks and their biological prototypes. It is shown that the main reason for these discrepancies is the structural immutability of neuron network models during learning, that is, their passivity. Based on the modern understanding of the biological nervous system as a structured ensemble of nerve cells, it is proposed to abandon attempts to simulate its work at the level of elementary neuron processes and instead to reproduce the information structure of data storage and processing on the basis of sufficiently general evolutionary principles of multialternativity: multi-level structure, diversity and modularity. A method for implementing these principles is offered, using a faceted memory organization in a neuron network with a rearrangeable active structure. An example is given of the implementation of an active faceted-type neuron network in an intelligent decision-making system under conditions of critical-event development in an electrical distribution system.


Introduction
The development of circuitry for intelligent systems based on neuron networks rests on a direct analogy between the tasks solved by these systems and the mechanisms of higher nervous activity of living organisms.
Currently, the implementation of this analogy is limited primarily to attempts to reproduce the electrochemical processes of biological neuron systems by developing various neuron excitation and inhibition patterns and elementary schemes of their interconnection.
In its classic form a neuron network is the functional

  y_{j_r} = F(b, c, x_{i_r, j_r}),  r = 1, ..., N,  (1)

where the outputs of the neurons of layer r serve as the inputs x_{i_{r+1}, j_{r+1}} of layer r+1; b, c are the vectors of configurable parameters (weight coefficients); r is the number of a network layer; j_r is the number of a neuron in layer r; i_r is the number of an input of that neuron; N is the number of layers of the network; x, y are the vectors of the input and output variables of the network; x_{i_r, j_r} is the element of the vector x arriving at input i_r of neuron j_r in layer r; and F(b, c, x) is the neuron activation function, e.g. of sigmoidal form.
The formal justification for the use of neuron networks in decision-making problems is the completeness theorem, which states that every continuous function on a closed bounded set may be uniformly approximated by functions computed by a neuron network of type (1), provided the activation function F(b, c, x) of a neuron is twice continuously differentiable.
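A minimal sketch of the layered functional of type (1) in Python may clarify the structure being discussed; the dimensions, weights and the specific sigmoid form here are illustrative, not taken from the article:

```python
import math

def sigmoid(t):
    """A sigmoidal activation F: continuously differentiable, as the
    completeness theorem requires."""
    return 1.0 / (1.0 + math.exp(-t))

def layer(weights, biases, x):
    """One layer of the functional (1): neuron j computes
    F(sum_i w[j][i] * x[i] + c[j])."""
    return [sigmoid(sum(w * xi for w, xi in zip(wj, x)) + cj)
            for wj, cj in zip(weights, biases)]

def network(params, x):
    """Nested composition over N layers: the output of layer r
    becomes the input of layer r + 1."""
    for weights, biases in params:
        x = layer(weights, biases, x)
    return x

# A toy two-layer network: 2 inputs, 2 hidden neurons, 1 output.
params = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),  # layer 1: weights b, biases c
    ([[1.0, 1.0]], [-1.0]),                   # layer 2: weights b, biases c
]
y = network(params, [0.3, 0.7])  # a single output in (0, 1)
```

The nesting of `layer` calls is exactly the fixed-structure composition that the article later criticizes: only the numbers in `params` can change during training, never the shape of `params` itself.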
In practice, however, these formal grounds proved insufficient for reproducing the properties of biological networks, and artificial neuron networks of type (1) show a fundamental inconsistency with their biological prototype.
These inconsistencies stem primarily from the attempt to reproduce biological processes with a functional of a predetermined, constant structure, which can be altered only at the lowest, parametric level: the network is unable to change its structure in the process of training, i.e. it is passive.
The inability of a passive neuron network to alter its structure results in shortcomings of such networks that are not characteristic of their biological prototypes [1,2].
The most important of these are:
- the retraining problem, which consists in the growth of the network error when an a priori unknown, excessive number of training vectors is presented. In biological neuron networks the memory elements are highly selective and the nature of memory is cumulative, so that a practically unlimited amount of previously stored data is kept in the network without distortion;
- low generalizing ability: to establish "particular-general" relationships between recognizable situations, the network would have to develop a multi-level, hierarchical structure;
- the lack of functional autonomy of the elements of artificial neuron networks, resulting in a nonlinear growth of the number of adjustable parameters as the number of tasks increases, i.e. the manifestation of the «curse of dimensionality» during training. No such limitations on the amount of stored data have been observed in biological neuron networks.
To eliminate these shortcomings, this work develops a new approach to the construction of a neuron network model of an intelligent control system based on comparatively general evolutionary principles of the organization of its functioning, the principles of multialternativity [3][4][5]:
- the multilevel (hierarchy) principle, implemented in biological systems by at least two levels of control, one of which operates within the tolerance range of the system state, while the other responds only to critical deviations from this state;
- the principle of diversity of control algorithms and separation of functions (multi-mode and switched structures), which is closely associated with the multilevel principle and reflects the flexibility of control algorithms under changing operating conditions;
- the principle of modularity of the structure, which provides the variety of levels and modes of control through combinations of a limited set of elementary modules of the system.
As shown below, this set of principles makes it possible to construct neuron network intelligent systems not through a qualitative complication of their algorithms, but by gradually increasing the number of simple components embedded into the structure of the neuron network. The key required property is neuron network activity: the restructuring of relations between ensembles after each training event (the formation of a new stable ensemble and its embedding into the overall structure of the network). To implement these properties in an artificial neuron network, the authors propose to abandon attempts to simulate the biological nervous system at the level of elementary neuronal processes and to proceed to reproducing its structure of data storage and processing.

Active neuron network based on the multialternativity principles
This structure may implement the faceted principle (from the French facette, a face) of object classification, in which for every event an ensemble (set) {f, s} is formed from the feature-facets f whose set of values s defines the specific object a(f, s):

  a(f, s) = {f_i, s_i},  i = 1, ..., n.  (2)

The faceted principle of data storage allows different objects to be combined in the network structure separately for each feature f_i, so that the introduction of an additional feature or object does not require any adjustment of the previously existing structure of relations but merely complements it.
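The faceted storage principle can be sketched in Python as an index from facets to values to objects; the class name, facet labels and object names below are illustrative, not part of the article:

```python
from collections import defaultdict

class FacetedMemory:
    """Sketch of relation (2): each object a is stored as an ensemble of
    (facet f, value s) pairs. Adding a new object only extends the index
    for each of its facets; existing relations are never rebuilt."""

    def __init__(self):
        # facet -> value -> set of object names
        self.index = defaultdict(lambda: defaultdict(set))
        self.objects = {}

    def add(self, name, ensemble):
        """ensemble: dict {facet: value}. A pure addition to the structure."""
        self.objects[name] = dict(ensemble)
        for f, s in ensemble.items():
            self.index[f][s].add(name)

    def recall(self, observed):
        """Return the objects whose ensembles match all observed (f, s) pairs."""
        candidates = None
        for f, s in observed.items():
            hits = self.index[f][s]
            candidates = set(hits) if candidates is None else candidates & hits
        return candidates or set()

mem = FacetedMemory()
mem.add("a1", {"f1": 1, "f2": 0})
mem.add("a2", {"f1": 1, "f3": 1})  # a new object: no adjustment of "a1"'s relations
```

Here `mem.recall({"f1": 1})` yields both objects, while adding more specific facets narrows the match, which is the "private-general" behavior the text attributes to faceted storage.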
A graphic illustration of relations (2), describing the faceted memory organization in the neuron network, is shown in Fig. 1. In particular, the excitation of the receptors f4, f5, f6 activates the ensemble a1,2 and further, in the absence of inhibition by the ensemble a1,1, the ensemble a2,1 is excited; it is activated indirectly, i.e. not by the primary receptors f, but by the previous activation of a1,2.
Thus, a controlled sequence of events occurs in the system, corresponding to the ensembles a1,2 and a2,1. In turn, the excitation of the ensemble a2,1, in the absence of inhibition by other ensembles such as a2,2 and a2,3, leads to the activation of the event a3,1.
As a result, under the external influence {f4, f5, f6}, a sequence of events with the specified excitation order has been formed.
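The excitation chain just described can be sketched as a simple fixed-point propagation over the ensembles; the topology below mirrors the example from the text, but the function and variable names are illustrative:

```python
def propagate(receptors, ensembles, inhibition):
    """An ensemble fires when all of its inputs (receptors or other
    ensembles) are active and none of its inhibitors is active.
    Iterate until no further ensemble can fire."""
    active = set(receptors)
    changed = True
    while changed:
        changed = False
        for name, inputs in ensembles.items():
            if name in active:
                continue
            if inputs <= active and not (inhibition.get(name, set()) & active):
                active.add(name)
                changed = True
    return active

# Topology from the text: f4, f5, f6 excite a1_2; a1_2, absent inhibition
# by a1_1, excites a2_1; a2_1, absent inhibition by a2_2 or a2_3, excites a3_1.
ensembles = {
    "a1_2": {"f4", "f5", "f6"},
    "a2_1": {"a1_2"},
    "a3_1": {"a2_1"},
}
inhibition = {"a2_1": {"a1_1"}, "a3_1": {"a2_2", "a2_3"}}
fired = propagate({"f4", "f5", "f6"}, ensembles, inhibition)
# fired now contains the controlled sequence a1_2 -> a2_1 -> a3_1
```

If an inhibitor such as `a1_1` were present among the active elements, `a2_1` and everything downstream of it would stay silent, which is how the inhibition relation gates the sequence.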

Example of the active faceted type neuron network
As an example, consider the electric power distribution network shown in Fig. 3 [6], consisting of an input high-voltage substation and thirty low-voltage transformer substations.
We assume that the power transmission cables indicated by dashed lines are reserve lines to be connected to the system in a critical event: an overload or failure of the main power cables. The problem is to develop a decision-making neuron network model of the enterprise that, in case of a critical event, issues the command to connect the corresponding spare line. For this task a prototype passive two-layer network was chosen, with thirty inputs, ten neurons in the first layer and eleven outputs.
The numbering of the reserve lines is shown in Table 2.
A list of the decisions taken (the matrix of decision rules) is shown in Table 3.
The neuron network was trained by error back propagation on thirty critical-situation vectors x and the corresponding solutions y required to restore network operability. The numbers of these events, with the numbers of the substations to be connected to the failed line, are shown in Table 1. Furthermore, in experiments 2-13 (Table 4), in order to evaluate the generalizing properties of the neuron network, training was conducted on incomplete samples, and during verification situations that had not participated in the training were presented for recognition. With the same purpose, simultaneous failures of two main lines were simulated for a network previously trained on the complete set of single failures (Table 5).
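A miniature sketch of this training procedure (error back propagation for a two-layer sigmoid network) is given below; the 6-input/3-output data are stand-ins for the real 30-input/11-output situation vectors of Tables 1-3, and all names are illustrative:

```python
import math, random

random.seed(1)

def sig(t):
    return 1.0 / (1.0 + math.exp(-t))

def forward(w1, w2, x):
    """Two-layer sigmoid network, the prototype architecture in miniature."""
    h = [sig(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    y = [sig(sum(w * hi for w, hi in zip(row, h))) for row in w2]
    return h, y

def train_step(w1, w2, x, t, lr=0.5):
    """One back-propagation update (squared-error loss, sigmoid units)."""
    h, y = forward(w1, w2, x)
    dy = [(yi - ti) * yi * (1 - yi) for yi, ti in zip(y, t)]
    dh = [hi * (1 - hi) * sum(dy[k] * w2[k][j] for k in range(len(w2)))
          for j, hi in enumerate(h)]
    for k in range(len(w2)):
        for j in range(len(h)):
            w2[k][j] -= lr * dy[k] * h[j]
    for j in range(len(w1)):
        for i in range(len(x)):
            w1[j][i] -= lr * dh[j] * x[i]

def error(data, w1, w2):
    """Total squared error over the training set."""
    return sum(sum((yi - ti) ** 2 for yi, ti in zip(forward(w1, w2, x)[1], t))
               for x, t in data)

# One-hot "situations" mapped to one-hot "decisions".
n_in, n_hid, n_out = 6, 4, 3
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
w2 = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_out)]
data = [([1.0 if i == k else 0.0 for i in range(n_in)],
         [1.0 if j == k % n_out else 0.0 for j in range(n_out)])
        for k in range(n_in)]

e0 = error(data, w1, w2)
for _ in range(2000):
    for x, t in data:
        train_step(w1, w2, x, t)
e1 = error(data, w1, w2)  # error shrinks as the weights are adjusted
```

Note that training here adjusts only the numbers in `w1` and `w2`: the structure is fixed in advance, which is exactly the passivity the test results below expose.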
The test results are summarized in Tables 4 and 5 and testify to the following:
- of the thirteen trained network variants, only one (test 1, Table 4) fully recognized all the emergencies included in its training set;
- the majority of the networks did not recognize events not included in the training set (Table 4); that is, passive neuron networks demonstrate an absence of extrapolating properties and a complete mismatch between their structure and the structure of critical events. In particular, in test 2 of Table 4 situation 9 was not recognized, although it is a special case of the six (sic!) situations numbered 1, 2, 3, 5, 7 and 8 (see Fig. 1, Tables 1 and 2), each of which is normally recognized by the network. Similar cases occur for other situations in Table 1. These facts evidence the absence of generalizing ability in the obtained networks;
- when two events occurred simultaneously, each of which was recognized individually by the network, no fully correct decision was obtained in any experiment (Table 5), indicating poor interpolating and selectivity properties of the network.
In general, the low functional properties revealed in the constructed passive neuron network do not permit its use for responsible control tasks in critical situations. Attempts to change the network configuration, the number of layers, the number of neurons or the training method do not lead to a substantial improvement because of a general fundamental limitation: the network is not able to change its structure in the process of training, i.e. it is passive. This limitation can be eliminated by a transition to a network design concept based on the evolutionary principles of multialternativity: modularity, multi-level structure and division of functions, which leads to the creation of active neuron networks.
Let us proceed to the construction of a neuron network of the active faceted type, the general principles of which were described above.
To obtain the perfect solution, a list of facets is formed. Thus, the use of the model leads to the activation of two redundant lines, Nos. 10 and 5. Analysis of the electrical circuit in Fig. 3 shows that in case of a failure of the primary line 11-12 both solutions are equivalent. It follows that the model has generalizing properties to the extent allowed by the structure of the object. Indeed, if in the process of training the network is presented with situation 20 (F20 in Fig. 4), the existing structure of relations is merely complemented. It should be noted that the generalizing properties of the faceted neuron network are explained by the hierarchical relationships inherent in it, i.e. by its multi-level structure.
Repeating the experiments reported in Tables 3 and 5 for the constructed faceted neuron network fully confirmed its advantages over the passive network: in all cases of single and double failures the active neuron network correctly recognized the situation and made an appropriate decision.
The learning procedure in this neuron network includes: adding a new object in the form of an ensemble of feature values a(f, s)_{z+1}; and including each feature-facet of the object in the network structure of that feature.
Here the previously formed relations in the network are not destroyed, which eliminates the possibility of the (partial or total) retraining phenomenon. Since block-wise training is reduced to the simple addition of data on new situations and does not require multi-parameter optimization, the network preserves its highly selective properties: for each emergency situation a single solution is generated, with a superposition of these solutions in case of multiple failures. For example, upon the occurrence of events 10 and 11 the model works out decisions 7, 8 and 9, fully restoring the power supply (see Fig. 3).
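This learning procedure, addition of a new ensemble without re-optimization, plus superposition of decisions for simultaneous events, can be sketched as follows; the event labels and the split of decisions {7, 8, 9} between the two events are illustrative assumptions, not data from the article's tables:

```python
class ActiveFacetedNet:
    """Sketch of the active faceted learning procedure: training stores a
    new ensemble a(f, s) as a decision rule; recognition of simultaneous
    events returns the superposition (union) of their individual decisions."""

    def __init__(self):
        # frozenset of active facet values -> set of decisions
        self.rules = {}

    def learn(self, situation, decisions):
        """Pure addition: no previously formed rule is altered or retrained."""
        self.rules[frozenset(situation)] = set(decisions)

    def decide(self, observed):
        observed = set(observed)
        out = set()
        for situation, decisions in self.rules.items():
            if situation <= observed:   # the stored ensemble is excited
                out |= decisions        # superposition of the solutions
        return out

net = ActiveFacetedNet()
net.learn({"fail_10"}, {7})        # hypothetical mapping of event 10
net.learn({"fail_11"}, {8, 9})     # hypothetical mapping of event 11
# A double failure of lines 10 and 11 yields decisions 7, 8 and 9.
decision = net.decide({"fail_10", "fail_11"})
```

Because `learn` only inserts a new rule, adding the (z+1)-th situation leaves all z earlier rules byte-for-byte intact, which is the structural reason the retraining effect cannot occur here.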

Conclusion
The use of passive neuron network models in decision-making and control systems faces significant implementation difficulties due to the propensity of these models to retraining and their low extrapolating ability.
Designing neuron systems on the basis of the evolutionary principles of multialternativity provides the creation of active neuron networks with a reconfigurable structure, whose properties resemble their biological prototypes to a greater extent. The comparative analysis of the implementations of passive and active neuron networks in this work demonstrates the following:
- the multi-level hierarchical scheme of internal connections in the active network provides a high generalizing ability of the system when making decisions in situations not encountered during training;
- the modular structure allows new ensembles to be built into the initial structure of the active neuron system without encountering the restrictions of the «curse of dimensionality» and the retraining effect;
- the faceted memory organization, following the «one event - one group» rule, makes it possible to accumulate an unlimited number of selected events in the active system and practically implements the principle of informational diversity.