A novel recurrent self-evolving fuzzy neural network for consensus decision-making of unmanned aerial vehicles

Unmanned aerial vehicles (UAVs) have been widely applied across a broad range of domains for years. Enhanced with computer vision and artificial intelligence, a UAV can automatically recognize environmental objects and detect events that occur in the real scene. Collaborative UAVs offer diverse interpretations, supporting a multi-perspective view of the scene. Because the interpretations of individual UAVs usually diverge, the UAVs require a consensus interpretation of the scenario. To this end, this study presents an original consensus-based method that pilots multi-UAV systems toward agreement on their observations and constructs a group, situation-based depiction of the scenario. Further, a fuzzy-neural-network generalized predictive control system, a recurrent self-evolving fuzzy neural network (RSEFNN), is used to ensure stability through a gradient-descent online learning rule, while allowing the design to follow evolutionary, biologically inspired principles. UAVs are modeled as system experts solving a group problem that requires defining the conditions that best describe the scene. First, the method allows each UAV to build high-level conditions from the detected events through fuzzy-based aggregation. The aggregated events are modeled by a fuzzy ontology, which allows each UAV to report its preferences over the conditions. The individual interpretations are then fused to achieve a collective interpretation of the situation. Finally, consensus and proximity measures assess the reliability of the final group decision. The consensus rating indicates how well the collective interpretation of the scene matches each UAV's point of view.


Introduction
For decades, there has been deep interest in applying unmanned aerial vehicles (UAVs) to complete complicated assignments. UAVs allow people to avoid direct involvement in dangerous tasks or in hard-to-reach places. They have been utilized in various realms, including military missions (e.g. enemy attack, crowd monitoring, and detection of military intelligence deployments) and civilian applications. 2,3,4 Sensor-equipped UAVs augmented with computer vision 1 and artificial intelligence 5 can process tracking data to detect people, targets, and environments and to identify events occurring in the real scene (e.g. vehicles driving on a highway). Since most problems involve factors such as weather conditions, sensor capabilities, reliability of the applied methods, and environmental patterns, the performance of a single UAV is unsatisfactory in most cases. Moreover, a single-view interpretation yields a considerably limited description of the scenario, particularly when only one vehicle is used. Teams of diverse UAVs, augmented with complementary technologies and sensors, can instead offer a precise multi-view observation of the real scene, that is, a complete comprehension of the scenario, 5 provided consensus is reached on the individual perspective of every UAV.
Figure 1 displays a multi-UAV system applied to monitor an urban district. Every UAV observes the scene from its own viewpoint and forms its own interpretation of the occurring situation, which can differ from those of the other UAVs. Thus, a consensus on scene comprehension is required: the most truthful interpretation of the scenario. This case can be resolved as a group decision-making (GDM) problem that requires collective consensus among the different UAV views of the dynamic real scene.
For GDM problems, the application of a consensus measure can help reach agreement among experts 6,7 while producing a final solution that satisfies each expert's interpretation. 8,9 Through consensus modeling, this study proposes a decision-making approach that supports multi-UAV systems in reaching consensus on the conditions encountered in the observed environment. The method allows each UAV to communicate its preference over the conditions through fuzzy aggregation of detection events. 12,13,14 The collective interpretation of the situation is thus achieved by reaching consensus on the individual UAV preferences. 19,20 However, a main problem with artificial neural networks (ANNs) is that the number of hidden neurons has a direct and strong effect on performance: operation time must be sacrificed to achieve computational efficiency and accuracy, which makes NN tools hard to use online or in real time. Likewise, in Bayesian methods for estimating unknown parameters, it is often difficult to assume probabilistic prior information, and the observed data may be vague (fuzzy) rather than crisp. This article describes a probabilistic approach to such situations by introducing the concept of a likelihood function for fuzzy data in probabilistic models; probability distributions are employed to model the prior information. Furthermore, traditional NN methods, such as multilayer perceptrons, have limited applicability to time-varying signals or systems owing to their static structure. To address this issue, fuzzy neural networks (FNNs) are a flexible and plausible alternative, as they combine biologically inspired learning with mechanisms of human reasoning. By tuning the mechanism with fuzzy and recursive self-growth schemes, stability and performance are improved, as demonstrated in this article. Combined with nonlinear activation functions, recurrent neural networks can handle complex spatiotemporal patterns. Therefore, this article focuses on a recurrent self-evolving FNN (RSEFNN) with local feedback for classifying cognitive system states in various UAV applications.
This method generates beneficial results for multi-UAV condition awareness, for example, the reliability evaluation of consensus-based group decisions. When a UAV team participates in a rescue mission and its detection results reach a decision with a high degree of agreement, the rescuers can regard the team's interpretation of the scene as dependable; otherwise, they cannot trust the team's results. Thus, evaluating how well the final result satisfies the scenario interpretation of each individual UAV is clearly significant.
This study is arranged as follows. The second section presents preliminary knowledge of GDM procedures, such as consensus modeling and fuzzy ontologies, and refers to the cognition of multi-UAV systems. The third section describes the study's method, emphasizing how UAV preferences over conditions are generated and how the consensus-based decision-making model is constructed. The fourth section illustrates the operation of the method in a classic case scenario. The fifth section discusses the merits and shortcomings of the proposed approach and compares it with the other approaches mentioned in the article. The final section presents the conclusions.

System description of RSEFNN
FNNs are mainly used to represent fuzzy if-then rules in network structures; at the same time, the fuzzy if-then rules can be trained using known learning algorithms for ANNs. The key components are the fuzzy rules, the inference process, and the fuzzy knowledge base. The fuzzy rules, determined by antecedent events and consequences, model the relationship between control inputs and outputs. The inference process defines the aggregation operators, such as fuzzy conjunction, and the fuzzy inference method. The proposed algorithm adjusts the parameters of the neuro-fuzzy network; the proposed evolutionary algorithm can take the influence of partial solutions into account and provide an appropriate search space that increases the probability of reaching the global solution. The ith rule of the fuzzy dynamic model has the form

IF x_1(t) is M_i1 and … and x_p(t) is M_ip THEN ẋ(t) = A_i(t) x(t) + B_i(t) u(t), i = 1, 2, …, r

where M_ip are fuzzy sets; x(t) ∈ R^n is the state vector; u(t) ∈ R^m is the input vector; A_i(t) ∈ R^{n×n} and B_i(t) ∈ R^{n×m}; and r is the number of IF-THEN rules. The ith fuzzy rule describes the fuzzy model's dynamics within the fuzzy region specified by the rule's if-part. Given a pair (x(t), u(t)), the final output of the fuzzy system is inferred as the firing-strength-weighted average of the rule consequents. The concept of parallel distributed compensation is adopted to develop a fuzzy controller that stabilizes the above Takagi-Sugeno (TS) continuous fuzzy model. 21 The idea is to design a compensator for each local model, so linear control design methods can be used rule by rule; the resulting global nonlinear fuzzy controller is a "fuzzy blend" of the individual linear controllers and uses the same fuzzy sets as the fuzzy system. According to the above fuzzy model, the FNN can incorporate the following modeling schemes.
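As a concrete illustration of the TS inference described above, the sketch below blends two local linear models by their Gaussian firing strengths. The rule parameters and the membership shapes are illustrative assumptions, not values taken from the article.

```python
import math

def gaussian(x, mean, sigma):
    """Membership degree of x in a Gaussian fuzzy set."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

def ts_inference(x, rules):
    """Each rule is (mean, sigma, a, b): IF x is M_i THEN y = a*x + b.
    The crisp output is the firing-strength-weighted mean of the
    local linear consequents."""
    strengths = [gaussian(x, m, s) for (m, s, _, _) in rules]
    outputs = [a * x + b for (_, _, a, b) in rules]
    return sum(w * y for w, y in zip(strengths, outputs)) / sum(strengths)

# Two hypothetical rules: a local model around x = -1 and one around x = +1.
rules = [(-1.0, 1.0, 0.5, 0.0),
         ( 1.0, 1.0, 2.0, 1.0)]
print(ts_inference(0.0, rules))  # both rules fire equally -> 0.5
```

At x = 0 both rules fire with equal strength, so the global output is the midpoint of the two local consequents, which is exactly the "fuzzy blend" behavior of parallel distributed compensation.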
Consider a multiple-NN system N consisting of L interconnected subsystems, where the lth isolated subsystem has S_l (l = 1, 2, …, L) layers with R_e (e = 1, 2, …, S_l) neurons in each layer, state variables x_l(k), …, x_l(k − p + 1), and inputs u_l(k), …, u_l(k − p + 1).
Let T(v) denote the transfer function and let the net input be described by a sigmoid function, where t > 0 and l > 0 are the parameters of the sigmoid. Some additional notation is required. Indexes are used to distinguish the layers; in particular, the layer number is appended to the name of each factor. In this way, the weight matrix of the eth layer is written W_e, and the transfer vector function of the eth layer is

Φ_e(v) = [T_1(v) T_2(v) ⋯ T_{R_e}(v)]^T, e = 1, 2, …, S_l

where T_κ(v) (κ = 1, 2, …, R_e) are the transfer functions associated with Φ_e(v). Subsequently, the min-max matrix G(v_e, e) is obtained as a vector of the same form. Moreover, based on the interpolation method, κ = 1 when h_vl → 0. Suppose there exist bounding matrices ΔH_ilj such that ‖ΔF_j(t)‖ is bounded along the trajectory x_j(t), where the bounding matrix ΔH_ilj is described componentwise with ‖d_ilj‖ ≤ 1 for i, l = 1, 2, …, r_j and j = 1, 2, …, J. Then

ΔF_j^T(t) ΔF_j(t) ≤ [H_j x_j(t)]^T [H_j x_j(t)]

namely, ΔF_j(t) is bounded by the specified structured bounding matrix H_j.
The recurrent structure of the RSEFNN is obtained by feeding the firing strengths of the fuzzy rules back into the network itself, thus avoiding additional external registers to store past states. Figure 2 shows the structure of the RSEFNN model. The functions of the layers of the RSEFNN are detailed below, 22 where u^(l) denotes the output of a node in the lth layer.

(1) Layer 1 (input layer): The inputs are denoted X = (x_1, …, x_n). Layer 1 performs no computation; each node corresponds to one input variable and simply passes the input value to the next layer.

(2) Layer 2 (fuzzification layer): Also called the membership-function layer. Each node uses a Gaussian membership function corresponding to a linguistic label of a layer-1 input variable. The membership values computed in layer 2 are

u^(2)_ij = exp(−(x_i − m_ij)² / σ²_ij)

where m_ij and σ²_ij are the mean and variance, respectively, of the Gaussian membership function of the jth term of the ith input variable x_i.

(3) Layer 3 (spatial firing layer): 23 Each node corresponds to one fuzzy rule and functions as a spatial rule node. A layer-3 node receives the one-dimensional membership degrees of the associated rule from the layer-2 nodes, and the fuzzy AND operator, implemented as an algebraic product, is applied to obtain the spatial firing strength F_j:

F_j = ∏_i u^(2)_ij

which expresses the spatial firing strength of the corresponding rule node.

(4) Layer 4 (temporal firing layer): Each node is a recurrent fuzzy rule node 24 forming an internal feedback loop, with the time step denoted by t. The output of this recurrent node is a temporal firing strength f_j(t) that combines the spatial firing strength F_j(t) and the preceding temporal firing strength f_j(t − 1) as a linear combination:

f_j(t) = λ_j f_j(t − 1) + (1 − λ_j) F_j(t)

where 0 ≤ λ_j ≤ 1 is a recurrent parameter deciding the ratio between the contributions of the past and current states. The recurrent parameter is initialized with a random value in the interval [0, 1] and then updated by the learning algorithm.
(5) Layer 5 (consequent layer): The nodes in layer 5 are called hidden consequent nodes. Each recurrent rule node in layer 4 has a corresponding consequent node in layer 5, which produces a linear combination of the input variables. The output of layer 5 is

u^(5)_j = f_j(t) (a_j0 + ∑_i a_ji x_i)

(6) Layer 6 (output layer): The output node performs defuzzification, synthesizing the operations of layer 5 and the recurrent nodes of layer 4. This layer adopts a weighted-average defuzzification approach:

y = ∑_{j=1}^{R} u^(5)_j / ∑_{j=1}^{R} f_j(t)

where y is the output of the RSEFNN model and R is the total number of fuzzy rules.
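The six layers above can be sketched as a single forward pass. All dimensions, rule parameters, and names (`rsefnn_forward`, `lam`, and so on) are our own illustrative assumptions; in the actual RSEFNN these parameters are learned online.

```python
import math

def rsefnn_forward(x, rules, prev_temporal):
    """One forward pass. x: input vector; rules: per-rule parameter dicts;
    prev_temporal: temporal firing strengths f_j(t-1). Returns (y, f_t)."""
    temporal, consequents = [], []
    for rule, ft_prev in zip(rules, prev_temporal):
        # Layers 1-2: fuzzify each input with a Gaussian membership function.
        mus = [math.exp(-((xi - m) ** 2) / (2 * s ** 2))
               for xi, m, s in zip(x, rule["mean"], rule["sigma"])]
        # Layer 3: spatial firing strength F_j as an algebraic product (fuzzy AND).
        F = math.prod(mus)
        # Layer 4: temporal firing strength mixes past and current states.
        ft = rule["lam"] * ft_prev + (1 - rule["lam"]) * F
        temporal.append(ft)
        # Layer 5: TSK-style linear consequent of the inputs.
        consequents.append(rule["a0"] + sum(a * xi for a, xi in zip(rule["a"], x)))
    # Layer 6: weighted-average defuzzification.
    y = sum(f * c for f, c in zip(temporal, consequents)) / sum(temporal)
    return y, temporal

# Two hypothetical rules over a two-dimensional input.
rules = [
    {"mean": [0.0, 0.0], "sigma": [1.0, 1.0], "lam": 0.5, "a0": 0.0, "a": [1.0, 0.0]},
    {"mean": [1.0, 1.0], "sigma": [1.0, 1.0], "lam": 0.5, "a0": 1.0, "a": [0.0, 1.0]},
]
y, ft = rsefnn_forward([0.5, 0.5], rules, prev_temporal=[0.0, 0.0])
```

Feeding `ft` back in as `prev_temporal` on the next call is what gives the network its memory of past firing states without external registers.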
For simplicity, we consider the single-output case; the goal is then to minimize the error function E = ½ (y_d(t) − y(t))², where y_d(t) is the desired output. The consequent parameter vector is updated by w(t + 1) = w(t) − η ∂E/∂w, where η ∈ (0, 1) is a learning constant. Each recurrent parameter λ_j is updated in the same manner, the mean of each Gaussian membership function is updated in equation (10), and the proof of the stability criterion for the fuzzy neural LMIs is given in the Appendix.
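A minimal sketch of one online gradient step for the consequent biases, assuming the squared-error cost E = ½(y_d − y)² and the weighted-average defuzzification of layer 6. The function name and the learning rate are illustrative, not taken from the article.

```python
def update_consequent_bias(biases, firing, y, y_d, eta=0.1):
    """One gradient step on E = 0.5 * (y_d - y)**2.
    With weighted-average defuzzification, dy/da_j0 = f_j / sum(f),
    so dE/da_j0 = -(y_d - y) * f_j / sum(f)."""
    total = sum(firing)
    err = y_d - y
    return [a0 + eta * err * f / total for a0, f in zip(biases, firing)]

# Two rules firing equally; the output undershoots the target by 0.5,
# so each bias is nudged upward by eta * 0.5 * 0.5 = 0.025.
new_biases = update_consequent_bias([0.0, 1.0], [0.5, 0.5], y=0.5, y_d=1.0)
print(new_biases)  # [0.025, 1.025]
```

The same chain-rule pattern extends to the recurrent parameters λ_j and the Gaussian means, each with its own partial derivative of y.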

A group interpretation scenario
Figure 3 displays the logical viewpoint of the study's framework. Various types of UAVs patrol a district and detect events in the observed scene. Every UAV is equipped with the technical background to complete event detection (the UAV event-detection team). UAVs can detect moving targets in the scene via video-tracking algorithms and use a scene ontology with contextual knowledge to fuse this information. 25,26 For modeling a UAV's detected events and their frequency values, we extended the Track Stick ontology to a fuzzy ontology. All event types detected by the UAVs, together with their frequency values, are added to the system ontology as axioms. Each axiom is expressed as a triple <e, u, f>, where e is the event type, u is the UAV entity, and f is the frequency value of the event type. The axiom indicates that UAV u detected event type e with frequency value f.
Fuzzy linguistic descriptors are used to model the event frequencies. 28-33 Figure 4 displays the definition of three descriptors for a specific event e: Low_E, Medium_E, and High_E. That is, the event e is modeled as a linguistic variable (with fuzzy linguistic terms) over the three fuzzy concepts, depicted by the fuzzy membership functions in the figure. The three concepts depict different densities of vehicles (or people) related to event type e in the observed scene. Depending on the frequency value, the event descriptors describe the participation in the event type in the form of a fuzzy membership value. For example, if the frequency value of e is low, Low_E describes people's participation in this event type better than Medium_E and High_E.
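The three descriptors can be sketched with triangular membership functions over a normalized frequency in [0, 1]. The breakpoints below are illustrative assumptions; the article's Figure 4 defines the actual shapes.

```python
def triangular(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def descriptors(freq):
    """Fuzzy membership of a normalized event frequency in the three
    hypothetical descriptors Low_E, Medium_E, High_E."""
    return {"Low_E":    triangular(freq, -1e-9, 0.0, 0.5),
            "Medium_E": triangular(freq, 0.0, 0.5, 1.0),
            "High_E":   triangular(freq, 0.5, 1.0, 1.0 + 1e-9)}

d = descriptors(0.2)  # a low frequency: Low_E dominates
```

For `freq = 0.2` the memberships come out as Low_E = 0.6, Medium_E = 0.4, and High_E = 0.0, matching the intuition in the text that a low frequency value is best described by Low_E.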
Once every UAV has conveyed its preferences over the conditions, module M2 first permits the UAVs to establish a group decision and then evaluates the team agreement, probing which UAVs dominate the decision, through consensus-reaching processes. Condition understanding based on a multi-UAV system is cast as a GDM problem 34-40: the conditions are regarded as the alternatives, and every UAV in the team is treated as an expert. Each UAV therefore expresses its preferences over the detected conditions (see the RSEFNN section). Formally, given n UAVs and m conditions, every UAV conveys preferences over the m conditions. The preferences expressed by the ith UAV form the vector P_i = (x_i1, x_i2, …, x_im), where P_i ∈ R^m for all i = 1, 2, …, n. The preference x_ij of the vector P_i expresses how much the ith UAV prefers the jth condition over the others. The dimensionality of the preference vectors equals the number of levels used in the rating system, and the overall preference vector is represented as the average of the term preference vectors.
Such UAV systems can include various kinds of UAVs (e.g. aerial, ground, or sensor-based), each possessing distinct functions and abilities. Furthermore, the weather, for example luminosity and humidity, or other environmental characteristics (e.g. dense forests, radioactive regions), may decrease the capabilities of some UAVs. Therefore, each UAV has a reliability level; specifically, w_i denotes the reliability weight associated with the ith UAV. For example, consider a UAV team comprising three UAVs (UAV#1, UAV#2, and UAV#3), in which UAV#1 and UAV#3 are equipped with action cameras and UAV#2 with an infrared camera.
This model summarizes the UAV preferences and defines the collective preference vector over the conditions. The collective preference vector cp = (cp_1, cp_2, …, cp_m) comprises m components, in which the jth component cp_j represents the team's preference for the jth condition. Letting w_i and P_i = (x_i1, x_i2, …, x_im) be, respectively, the reliability weight and the preference vector associated with the ith UAV, cp_j is given by the weighted arithmetic mean of the UAV preferences for the jth condition

cp_j = ∑_{i=1}^{n} w_i x_ij / ∑_{i=1}^{n} w_i    (11)

where cp_j represents the globally aggregated preference value for the jth condition, j = 1, 2, …, m. Then t similarity vectors over the n UAVs are evaluated, where t = n(n − 1)/2. Let P_i and P_j be the preference vectors of the ith and jth UAVs, respectively; the similarity vector SV_k for a UAV pair is computed from the distance between the UAVs' preference vectors 41

SV_k = |P_i − P_j|    (12)

where i ≠ j, i, j = 1, 2, …, n, and k = 1, 2, …, t. By aggregating the similarity vectors between the UAV pairs, the degree of consensus among all UAVs on each condition (cs) is acquired. Given the similarity vectors SV_k = (SV_k1, SV_k2, …, SV_km) between the preference vectors P_i and P_j (i ≠ j, i, j = 1, 2, …, n), the consensus degree cs_j among all UAVs on the jth condition is computed as the power mean, with exponent p, of the jth element of all similarity vectors, 42-48 for j = 1, 2, …, m. The consensus degree cs determines on which conditions the UAVs diverge and hence whether the team's decision is reliable for each condition. The consensus degree over all the conditions (cr) is computed as the power mean of the consensus degrees cs_j; the consensus on the relation cr offers a single cumulative gauge for assessing the consistency among the UAVs in the team over all the conditions. The closer cr is to zero, the higher the consistency of the UAVs over all the conditions and the higher the reliability of the final group decision. The collective cumulative preference (ccp) is computed as the arithmetic mean of the components of the collective preference vector. The proximity and consensus degrees are applied to analyze both conditions and UAVs; thereby, the system can identify, on request, the conditions and UAVs that most drive the team's decision. Given the preference vector P_i = (x_i1, x_i2, …, x_im) of the ith UAV, the collective preference of the ith UAV over all the conditions is the arithmetic mean of its components, i = 1, 2, …, n.
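The aggregation pipeline above can be sketched as follows, taking the similarity vector to be the absolute preference difference and the power mean with p = 1 (a plain arithmetic mean); both choices are illustrative simplifications of the measures in the text.

```python
from itertools import combinations

def collective_preference(prefs, weights):
    """cp_j: reliability-weighted mean of the UAV preferences for condition j."""
    total = sum(weights)
    return [sum(w * p[j] for w, p in zip(weights, prefs)) / total
            for j in range(len(prefs[0]))]

def consensus(prefs):
    """cs_j: mean over all UAV pairs of |x_i_j - x_k_j| (smaller = more
    agreement); cr: mean of the cs_j over all conditions."""
    pairs = list(combinations(prefs, 2))
    m = len(prefs[0])
    cs = [sum(abs(a[j] - b[j]) for a, b in pairs) / len(pairs)
          for j in range(m)]
    return cs, sum(cs) / m

# Three hypothetical UAVs and two conditions: two UAVs agree, one dissents.
prefs = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
cp = collective_preference(prefs, [1.0, 1.0, 1.0])
cs, cr = consensus(prefs)
```

With equal weights the collective preference favors the first condition (cp = [2/3, 1/3]), while cr = 2/3 reflects the dissenting UAV: a cr closer to zero would indicate stronger team consistency.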

Numerical case
This section presents a case study to demonstrate how our model acts in a practical scene. Consider the experimental scene displayed in Figure 5, which shows some people crossing the road and others walking nearby. A group of six UAVs arrives at the site to monitor the area; every UAV simultaneously detects five people in the scene via video tracking, while other moving objects are filtered out (such as obj_6 in the figure). Each UAV applies the constructed ontology to represent the detected events as system ontology axioms (i.e. subject-predicate-object triples), in which the event type (predicate) involves the relevant person (subject) and the position where the event takes place (object). 50-55 Following the "A group interpretation scenario" section, module M1 implements an initial step, identified by (0), for generating the UAV preferences, and module M2 guides the UAVs to the final group interpretation via the remaining steps.
(0) Situation and preference generation: The frequencies related to each event type detected by the UAVs are calculated. At this point, based on query-based maximum concept satisfiability, it is possible to calculate the preference of UAV#1 for the marching-people situation. In general, the preference of a UAV for a certain condition is produced by querying the maximum concept satisfiability of the UAV instance together with its event-type frequencies. Table 1 shows the preferences for the marching-people condition produced by the six UAVs. Considering the concept of marching people defined in Listing 3, the query is applied to the UAVs to retrieve, in the second to fifth columns, the frequency values of the four event types included in the marching-people concept. The final column reports the query results, which represent each UAV's preference value for this condition. The higher the preference value, the more suitable the UAV considers the condition for describing the observed scene. In this case, the marching-people situation is regarded as very suitable for the scene description by UAV#4.
(1) Collective preferences: Five situations can be recognized in the scene displayed in Figure 5: simple crossing, people marching, traffic, shopping, and men working on the road. The UAVs produce preference values for these situations, reported in Table 2. Following equation (1), we calculated the team's collective preference (cp) vector and report its values in Table 3. Based on the results, simple crossing (CRS) is the most suitable situation to depict the observed scene, while traffic (TRF) and men working on the road (WRK) are the least suitable for describing what occurred. This result denotes the final team decision. Since the scene does not show any condition that might affect a UAV's performance, for simplicity each UAV is assumed to have the same reliability by setting its weight to one.
(2) Consensus: Once the collective preferences are produced, the consensus gauges depicted in the previous section permit assessing the consistency level among the UAVs. As mentioned previously, our consensus system model consists of three aggregation levels, as depicted in Section 3-B. At level 1, the similarity vectors among pairs of UAVs evaluate the resemblance between UAV pairs. The similarity vectors are the rows of Table 4, which expresses the consistency of the UAV pairs across the different situations. For instance, for the simple-crossing situation, the most consistent UAV pairs are (UAV#1, UAV#4), (UAV#2, UAV#3), (UAV#2, UAV#5), and (UAV#3, UAV#5). Combinations of more than two UAVs are also feasible; this article uses pairs only for brevity in the case study. The aggregation of the similarity vectors over the UAVs using the cs measure (3) permits assessment of the consensus degree among the UAVs for each situation. The results are expressed as the vector cs listed in Table 5. Note that the team mainly agrees on the traffic (trf) situation and strongly disagrees on the shopping (sho) situation. Starting from the vector cs, the consensus on the relation cr is given by (4). Its value of 0.46 illustrates that the average consistency of the UAVs across all situations is 54%; that is, they partially agree on all the situations.

Comparisons
To probe which UAVs drive the team's decision-making, the group proximity ps (5) of each single UAV was evaluated. The resulting ps vectors are reported in Table 6. The values in the ith row express the differences between the preferences of the ith UAV and the team's preferences across the situations. This indicator identifies the most inconsistent situations between a single UAV and the team, and which UAVs dominate the team's decision in each case. For instance, UAV#2 and UAV#5 differ the most from the team's preference for the simple-crossing (crs) situation, while UAV#4 and UAV#1 drive the decision process for that situation. The decision leaders (UAV#1, UAV#5, and UAV#6) are most numerous for the people-marching (mar) situation. To probe which UAVs drive the decision-making over all the situations, the cumulative proximity gauge (8) is adopted; Table 7 reports the cps vectors. UAV#5 drives the team decision across all situations, while UAV#2 and UAV#4 express the decisions most distinct from the final team decision. Figure 6 shows that the modeling error of the overall fuzzy-neural approximation remains bounded: the dashed line (the real modeling error) stays entirely within the solid line (the allowed state error of the system), which guarantees the stability and stabilization of the controlled system. Table 8 compares the controllable range with time lag and shows that the proposed methodology is much more flexible for the controlled system in applications. The time-delay allowance of the controllable range in the studies of Zhen et al. 4 and Coyle et al. 18 is smaller than that of the method proposed in this article, and the modeling error of the traditional techniques cannot be guaranteed to be bounded. Therefore, from Figure 6 and Table 8, the proposed method demonstrates better performance than the existing published work.
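The proximity measures referenced above can be sketched as follows: ps gives each UAV's per-situation distance from the collective preference, and cps its cumulative (mean) distance. The absolute-difference distance is our assumption; the article's exact norm is not reproduced here.

```python
def proximity(prefs, cp):
    """ps[i][j] = |x_i_j - cp_j|: distance of UAV i from the team on
    condition j; cps[i]: mean distance over all conditions."""
    ps = [[abs(x - c) for x, c in zip(p, cp)] for p in prefs]
    cps = [sum(row) / len(row) for row in ps]
    return ps, cps

# Two hypothetical UAVs on opposite sides of the collective preference.
ps, cps = proximity([[1.0, 0.0], [0.0, 1.0]], [0.5, 0.5])
```

The UAV with the smallest cps is the one driving the team decision, while large cps values flag the UAVs whose view differs most from the final group decision.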

Discussion, conclusion, and future study
In this article, we use a modified Lyapunov evolving NN, a biomimetic algorithm with a high rate of convergence, easy parameter tuning, and a memory function. A gray evolutionary neural network trains the RSEFNN, generates random seeds in the decision space to simulate consensus decision-making, and uses algorithmic formulas to approximate optimal solutions; 57,58 a clear search direction and greedy strategies avoid excessive unnecessary searches. The new RSEFNN combines neural-network linear differentiation schemes with Lyapunov stabilization methods for nonlinear systems and UAV applications. This study proposed a new method to support a multi-UAV system in making decisions about what happens in the scene. A GDM model with consensus modeling is employed in multi-UAV control. The model permits multi-UAV surveillance controls to make various decisions and to assess instant results of situation probing through environmental observation, improving the ability of multi-UAV systems to process scene interpretation. The collective preferences permit the multi-UAV controls to describe the global team's judgment of what is happening in the scene. The model also offers an assessment of the reliability of the control decisions, which could assist an autonomous ground station or human operators in taking action. Depending on the consensus achieved by the UAVs, the ground station can decide whether to replan a task to obtain more knowledge and enhance the scene interpretation. To improve the performance of real-time applications, knowledge of the recurrent structure is useful because it enables neural networks to remember past events. To test the generality of the method, we used an interdisciplinary approach to evaluate the effectiveness of the proposed recurrent-architecture-based prediction system. The performance of the RSEFNN-based system was evaluated using a generalized between-subjects method, and the results showed that the RSEFNN model is feasible, stable, and validated. The highlights of the contributions are listed below:
1. A novel online gradient-descent learning rule for the evolved biological algorithm can be realized.
2. The approach allows UAVs to build high-level situations from the detected events through fuzzy-based aggregation.
3. The consensus and proximity measures support the evaluation of the reliability of the final group decision.
Future research will emphasize the multi-agent paradigm for UAV system design: on the basis of the proposed consensus-based GDM model, we will work on defining cooperative task-assignment activities aimed at UAV consensus in scene interpretation. Furthermore, the experiments and simulated verifications will be carried out in future research.

Appendix 1
Let the energy function for the neural network (NN) be defined as a quadratic form in the state, where P_l = P_l^T > 0. Before the calculation, the following quantities are given: λ_M denotes the maximum eigenvalue, λ denotes the eigenvalues of each subsystem, and h represents the normalized weights from defuzzification, with h_i(t) ≥ 0 and ∑_{i=1}^{r} h_i(t) = 1.
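A standard quadratic Lyapunov candidate consistent with the notation above would take the following form (a reconstruction under the usual assumption, not a transcription from the original):

```latex
V_l(k) = X_l^{\mathsf{T}}(k)\, P_l\, X_l(k), \qquad P_l = P_l^{\mathsf{T}} > 0
```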
Definition 1. If the function V(x) is positive definite and has continuous partial derivatives, and if its time derivative along any state trajectory is negative semi-definite, then V(x) is said to be a Lyapunov function. 21 We then evaluate the backward difference of V(k) along the trajectories, collecting the cross terms h_il(k) h_jl(k) H_ijl X_l(k) and h_ul(k) h_vl(k) H_uvl X_l(k) against −X_l^T(k) P_l X_l(k). Each quadratic term is bounded as

h_il(k) h_jl(k) X_l^T(k) P_l H_ijl X_l(k) ≤ h_il(k) h_jl(k) h_l ‖X_l(k)‖²    (1H)

with h_l = ∑_{n≠l} (L − 1) λ_M(P_n) ‖C_ln‖². Substituting equations (1C) to (1H) into equation (1B) yields a bound of the form

ΔV(k) ≤ { ∑ h_ul(k) h_vl(k) h_il(k) h_jl(k) a_ijl } ‖X_l(k)‖²

with a_ijl < 0. We therefore have ΔV(k) < 0, and the proof is thereby completed.

Figure 1 .
Figure 1. The different spot interpretations from the UAVs require collective agreement on the most probable description of the scene. UAV: unmanned aerial vehicle.

Figure 2 .
Figure 2. The structure of the RSEFNN model. RSEFNN: recurrent self-evolving fuzzy neural network.

Figure 3 .
Figure 3. The flow from the UAV team's event detection to the final team interpretation of the scene. UAV: unmanned aerial vehicle.

Figure 4 .
Figure 4. The descriptors of the event e.

Figure 5 .
Figure 5. Experimental study illustrating the six UAVs' observations and a practical interpretation of the scenario. UAV: unmanned aerial vehicle.

Table 1 .
The UAV-generated preferences for the marching-people situation from the monitored position events.

Table 2 .
The six UAVs' preferences on the five situations.

Table 3 .
The collective preferences.a WRK: men working on the road; CRS: simple crossing; TRF: traffic; MAR: people marching; SHO: shopping. a Values represent collective decisions in each situation.

Table 4 .
The similarity vectors assessed among the UAV pairs.

Table 6 .
The individual UAV proximity on the five situations.

Table 7 .
UAV cumulative proximity.a UAV: unmanned aerial vehicle. a Each row illustrates how a UAV's decision differs from the group decision across all the situations.
∑ h_il(k) h_jl(k) h_vl(k) X_l^T(k) (H_ijl^T P_l H_ijl − P_l) X_l(k) ≤ ∑ h_il(k) h_jl(k) h_vl(k) λ_M(Q_ijvl) X_l^T(k) X_l(k)

∑ h_il(k) h_ul(k) h_jl(k) h_vl(k) X_l^T(k) (H_ijl^T P_l H_uvl − P_l) X_l(k) ≤ ∑ h_il(k) h_ul(k) h_jl(k) h_vl(k) λ_M(Q_ijuvl) X_l^T(k) X_l(k)