The generalized negation of probability distribution and its application in target recognition based on sensor fusion

Target recognition in uncertain environments is an active research topic. Fusion rules are used to combine sensor reports from different sources, and obtaining more information to support a correct decision is essential in this setting. The probability distribution is one of the most widely used representations of uncertain information, and the negation of a probability distribution offers a new view on that information. In this article, the existing negation of probability distribution is extended with Tsallis entropy, the main reason being that different systems have different values of the parameter q. Some numerical examples demonstrate the efficiency of the proposed method. The article also discusses the application of negation in target recognition based on sensor fusion, further demonstrating the importance of negation.


Introduction
In recent years, information fusion has received great attention in military applications. 1,2 Many methods based on information fusion have been proposed for object classification, 3 target recognition 4,5 and decision making. 6,7 In most cases, the final result obtained by fusing information from multiple sensors is reasonable. However, the recognition result may be counterintuitive when the evidence is highly conflicting. 8 Up to now, the choice of fusion rule is still an open question. Besides, the information gathered in sensor fusion systems [9][10][11] is uncertain, as it is incomplete, inconsistent and possibly imprecise. [13][14] Hence, due to the existing uncertainty, [15][16][17] it is essential to obtain more information from the known knowledge.
[19][20] In most cases, we commonly use 'must', 'may' and 'likely' to estimate whether an event will happen or not. [22][23][24] Probability distributions are used to describe quantitatively the possibility of each outcome in real applications. 25 More importantly, in some circumstances it is much easier to describe the negation of an event than to describe the event directly. For example, a mathematical formula that is difficult to prove rigorously can easily be shown wrong by a single counterexample. Similarly, probability has a significant property, namely mutual exclusion.
Mutual exclusion means that when one event occurs, its opposite cannot happen. In other words, the probability of an event constrains the probability of its opposite. Therefore, it is meaningful to study the negation of probability distribution (NPD). 26 Recently, an NPD based on Gini entropy was proposed by Yager to present knowledge from a new view. 27 The motivation of this study is to find a more general and reasonable model for evaluating the uncertainty of the NPD. If the correlations between the N elements of a system are strong enough, the extensivity of some entropies is lost, which is incompatible with classical thermodynamics. 28 Tsallis entropy 29 was proposed, with its non-extensive property, to overcome this difficulty. In Tsallis entropy, q is not a 'tunable' parameter but is strictly determined by the system itself. 30 As a result, always using Gini entropy in Yager's NPD, 27 which is the special case of Tsallis entropy with q = 2, is not reasonable for many real systems. To solve this problem, this article uses Tsallis entropy, which can be more reasonable, to measure the uncertainty of the NPD.
More importantly, in nature and society everything has its negation. Regret and expectation give us two views from which to consider a problem, and the best alternative is the one closest to the ideal solution and farthest from the negative solution. The NPD likewise provides additional information derived from the known information; that is, we can analyse a target from two sides, which can improve the correctness of decision-making. The reason is that the NPD captures imprecision and the unknown, which is beneficial for decision-making. More importantly, even if the original information is highly conflicting, its negation may not be. Based on this discussion, it is of great significance to study negation. Hence, this article also uses negation to recognize targets based on sensor fusion.
The rest of this article is structured as follows. In the section 'Preliminaries', preliminaries of some entropies and Yager's negation method are introduced. The proposed method is introduced in the section 'The proposed method'. In the section 'Examples and discussion', a numerical example is used to illustrate the negation method and the use of Tsallis entropy to evaluate the uncertainty. The application of negation in target recognition based on sensor fusion is introduced in the section 'Application of negation based on sensor fusion'. Finally, some conclusions are given in the section 'Conclusion'.

Preliminaries
In this section, the preliminaries of some entropies and NPD will be briefly introduced.

Gini entropy
[32][33] In Yager's NPD, the Gini entropy is adopted. 27

Definition 1. Gini entropy is defined as follows 34

G(P) = 1 − Σ_{i=1}^{n} p_i^2

where p_i is the probability of the ith element and Σ_{i=1}^{n} p_i = 1. Gini entropy is expansive, that is, G(p_1, p_2, ..., p_n) = G(p_1, p_2, ..., p_n, 0). This means that adding an element with zero probability does not affect the Gini entropy.
Moreover, if the aim is only to compare the uncertainty associated with two distributions, the Gini entropy may be preferred to the Shannon entropy for its simpler calculation. 35 Knowledge representation is of great significance to modern science. [44][45] Interestingly, probability is similar to a coin: it also has an original side and a negative side. 46 In summary, negation provides a new view from which to investigate the properties of probability.

Definition 2. Assuming a probability distribution P = {P(A_1), P(A_2), ..., P(A_n)}, the NPD was defined as 27

P̄(A_i) = (1 − P(A_i)) / (n − 1)

Because the probabilities are completely mutually exclusive, the negation is normalized, so the NPD satisfies Σ_{i=1}^{n} P̄(A_i) = 1. It should be pointed out that, since the basic probability assignment is more flexible for representing uncertainty, the negation of the basic probability assignment has also received attention recently. 46

Tsallis entropy

Definition 3. Given a probability distribution P = {p_1, p_2, ..., p_n}, the Tsallis entropy can be defined as 29

S_q(P) = k (1 − Σ_{i=1}^{n} p_i^q) / (q − 1)

When q → 1, this corresponds to the Boltzmann-Gibbs entropy, namely

S_1(P) = −k Σ_{i=1}^{n} p_i ln p_i

Here q is a real parameter, sometimes called the entropic index, which generalizes the usual exponential and plays an important role in the description of thermodynamics; k is the Boltzmann constant (or some other convenient value in areas outside physics, such as information theory and cybernetics). 28,29 Besides, when q = 2 (and k = 1), the Tsallis entropy degenerates to the Gini entropy.
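The three quantities above can be computed directly. The following sketch (function names are my own, with k = 1) implements the Gini entropy, Yager's negation and the Tsallis entropy, and checks that q = 2 recovers the Gini case:

```python
import math

def gini(p):
    """Gini entropy: G(P) = 1 - sum_i p_i^2."""
    return 1.0 - sum(x * x for x in p)

def negation(p):
    """Yager's negation: each p_i becomes (1 - p_i) / (n - 1)."""
    n = len(p)
    return [(1.0 - x) / (n - 1) for x in p]

def tsallis(p, q, k=1.0):
    """Tsallis entropy S_q = k (1 - sum_i p_i^q) / (q - 1); Shannon form as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return -k * sum(x * math.log(x) for x in p if x > 0)
    return k * (1.0 - sum(x ** q for x in p if x > 0)) / (q - 1.0)

p = [0.2, 0.5, 0.3]
print(negation(p))      # close to [0.4, 0.25, 0.35]
print(gini(p))          # close to 0.62
print(tsallis(p, 2.0))  # coincides with the Gini entropy at q = 2
```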

The proposed method
From the above, it can be seen that Yager's method is a special case of the proposed one. In order to expand the application of negation, this article proposes to measure the uncertainty of the NPD with Tsallis entropy. In the following, the properties of the negation under the proposed measure are discussed.
Property 1. Tsallis entropy does not decrease after negation. A sketch of the argument: the negation P̄(A_i) = (1 − P(A_i))/(n − 1) moves every probability towards the uniform value 1/n, so the negated distribution is majorized by the original one; since Tsallis entropy is Schur-concave for q > 0, it cannot decrease under negation, both for q ≠ 1 and in the limit q → 1.
Property 2. After repeated negations, the probability distribution tends to the uniform distribution, at which Tsallis entropy is maximal.
The general formula of the negation method is

x_i^{(j+1)} = (1 − x_i^{(j)}) / (n − 1)

where x_i^{(j+1)} denotes the result of the (j+1)th negation. Solving this linear recursion gives

x_i^{(j)} = 1/n + (−1/(n − 1))^j (x_i^{(0)} − 1/n)

so for n ≥ 3 the deviation from 1/n shrinks by a factor of n − 1 at every step and the probability distribution converges to the uniform distribution (1/n, ..., 1/n). From this, it can be seen that Tsallis entropy reaches its maximum after multiple negations.
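A short numerical check of Properties 1 and 2 (my own sketch, shown at q = 2 for simplicity): the negation recursion x^(j+1) = (1 − x^(j))/(n − 1) admits the closed form x^(j) = 1/n + (−1/(n − 1))^j (x^(0) − 1/n), so for n ≥ 3 it converges to the uniform distribution while the Tsallis entropy grows monotonically.

```python
def negation(p):
    n = len(p)
    return [(1.0 - x) / (n - 1) for x in p]

def tsallis(p, q=2.0):
    return (1.0 - sum(x ** q for x in p)) / (q - 1.0)

p0 = [0.2, 0.5, 0.3]
n = len(p0)
p, entropies = list(p0), []
for j in range(12):
    # iterated negation must agree with the closed-form solution at step j
    closed = [1.0 / n + (-1.0 / (n - 1)) ** j * (x - 1.0 / n) for x in p0]
    assert all(abs(a - b) < 1e-9 for a, b in zip(p, closed))
    entropies.append(tsallis(p))
    p = negation(p)

# entropy never decreases, and the distribution is essentially uniform by now
assert all(b >= a - 1e-12 for a, b in zip(entropies, entropies[1:]))
assert all(abs(x - 1.0 / n) < 1e-3 for x in p)
```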
Besides, in nature any operation causes energy consumption. A change of entropy means a consumption of energy, and consumed energy cannot be reused. Similarly, a variation of information entropy is accompanied by a consumption of information. Among the various entropies, Tsallis entropy is non-extensive and has been applied to artificial and social complex systems. Hence, using Tsallis entropy to measure the uncertainty of the NPD can expand the applications of negation.

Example
Example 1. Given the event space X = {x_1, x_2, x_3} with P(x_1) = 0.2, P(x_2) = 0.5, P(x_3) = 0.3, the NPD is

P̄(x_1) = 0.4, P̄(x_2) = 0.25, P̄(x_3) = 0.35

Comparing P(x_i) with its negation P̄(x_i), it can be seen that an event x_i with a lower probability acquires a higher probability after negation. What happens if the negation is applied to P̄(x_i) in turn? Taking the negation a second time gives

P̿(x_1) = 0.3, P̿(x_2) = 0.375, P̿(x_3) = 0.325

It is apparent that the original probability distribution is not recovered: in general, the negation process is irreversible when n ≥ 3, that is, P̿(x_i) ≠ P(x_i). It is necessary to explore what causes this irreversibility. From Example 1, it can be seen that the probability is redistributed after negation. Hence, it should be considered whether the uncertainty changes after taking the NPD. In the next section, we discuss why the negation process is generally irreversible and how to measure the uncertainty of the probability distribution under the negation method.
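Example 1 can be recomputed in a few lines (a quick check of the numbers above):

```python
def negation(p):
    """Yager's negation over n mutually exclusive events."""
    n = len(p)
    return [(1.0 - x) / (n - 1) for x in p]

p = [0.2, 0.5, 0.3]
p_neg = negation(p)       # first negation, close to [0.4, 0.25, 0.35]
p_neg2 = negation(p_neg)  # second negation, close to [0.3, 0.375, 0.325]

# irreversibility for n >= 3: the double negation does not recover p
assert all(abs(a - b) < 1e-9 for a, b in zip(p_neg, [0.4, 0.25, 0.35]))
assert all(abs(a - b) < 1e-9 for a, b in zip(p_neg2, [0.3, 0.375, 0.325]))
assert any(abs(a - b) > 1e-3 for a, b in zip(p_neg2, p))
```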

Further discussion
Considering Example 1 again, Table 1 shows the change of probability after each iteration of the negation process. Clearly, the probability is reallocated after each negation, and the probability distribution gets closer and closer to the uniform distribution. What is the cause of this phenomenon?
The concept of entropy is derived from physics. 47,48 With the increasing application of entropy, 49,50 information entropy has become an indispensable part of modern science. 51,52 Shannon 53 first proposed the concept of information entropy to describe uncertainty, and it has been applied in many fields. 54,55 However, as in typical physical problems, there are examples where the Boltzmann-Shannon entropy is not suitable. 56,57 In 1988, Tsallis proposed a non-extensive entropy now called Tsallis entropy, and non-extensive statistical mechanics, a generalization of Boltzmann-Gibbs statistics, subsequently emerged from it. More importantly, Boltzmann-Gibbs statistics is recovered in the limit q → 1. 58 The value of q is hidden in the microscopic dynamics of the system and can be obtained from experiments. A relevant improvement in inference accuracy by adopting non-extensive entropies was reported by Tsallis, 57 where q = 2.5 obtains a better result; likewise, in the literature 59 q = 2.5 is used to obtain mutual information from the DREAM4 data. As a consequence, using Tsallis entropy to measure the uncertainty extends the method of Yager. 27 Therefore, it is of great significance to measure the uncertainty of the negation with Tsallis entropy.
Let us calculate the uncertainty using Tsallis entropy with different values of q and observe how it changes after negation, as shown in Figure 1, which plots the Tsallis entropy for five values of q from 1/2 to 3. It can easily be seen that the Tsallis entropy increases gradually during the negation process and is almost invariant after the fifth negation, regardless of the value of q. This shows that the uncertainty gradually increases in any system with each iteration of the negation process. After multiple iterations, the probability distribution becomes the uniform distribution and the Tsallis entropy reaches its maximum, which is consistent with Property 1 and Property 2. Hence, using Tsallis entropy to measure the uncertainty of the NPD is reasonable.
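The trend in Figure 1 can be reproduced numerically. The sketch below (my own code; the q values match those discussed in the text) tracks the Tsallis entropy of the Example 1 distribution over repeated negations and confirms that it rises towards its maximum for every q:

```python
import math

def negation(p):
    n = len(p)
    return [(1.0 - x) / (n - 1) for x in p]

def tsallis(p, q):
    if abs(q - 1.0) < 1e-12:  # q -> 1: Boltzmann-Gibbs/Shannon form
        return -sum(x * math.log(x) for x in p if x > 0)
    return (1.0 - sum(x ** q for x in p if x > 0)) / (q - 1.0)

for q in (0.5, 1.0, 1.5, 2.0, 3.0):
    p, curve = [0.2, 0.5, 0.3], []
    for _ in range(8):
        curve.append(tsallis(p, q))
        p = negation(p)
    # the entropy is non-decreasing for every q, as in Figure 1
    assert all(b >= a - 1e-12 for a, b in zip(curve, curve[1:]))
    print(q, [round(c, 4) for c in curve[:3]])
```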

Application of negation based on sensor fusion
Target recognition receives great attention in military applications. 60 [62][63] Fusion rules can help us make better decisions. 64,65 However, making a correct decision from the available information is still difficult. Suppose there are three sensors to recognize a target which may be A, B or C; the results from the sensors are as follows. The Dempster rule is widely used in sensor data fusion, 64 and the combination results are as follows. Using the negation method, new information is obtained as follows, and the fusion rule is similarly applied as follows. It is well known that the conflicting coefficient K plays an essential role in information fusion. 8 As a result, it is necessary to analyse the difference in K between the original probabilities and their corresponding negations. It is easily found that the conflict becomes larger after negation. The reason is that p̄(A) = 1 − p(A) represents the probability that the target is not A; that is, p̄(A) is the probability that the target is B or C. The negation therefore contains the uncertain class (A or B), and the conflict becomes larger because of this uncertain class.
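The fusion step can be sketched as follows. The paper's actual sensor reports appear in equations not reproduced here, so the mass values below are hypothetical stand-ins, and `dempster` is my own minimal implementation of Dempster's rule restricted to singleton hypotheses {A, B, C}:

```python
def dempster(m1, m2):
    """Combine two mass functions over the same singleton frame."""
    # conflict coefficient K: total mass assigned to disagreeing pairs
    k = sum(m1[a] * m2[b] for a in m1 for b in m2 if a != b)
    if k >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {a: m1[a] * m2[a] / (1.0 - k) for a in m1}, k

s1 = {"A": 0.6, "B": 0.3, "C": 0.1}  # hypothetical sensor 1
s2 = {"A": 0.7, "B": 0.2, "C": 0.1}  # hypothetical sensor 2
s3 = {"A": 0.8, "B": 0.1, "C": 0.1}  # hypothetical sensor 3

m12, k12 = dempster(s1, s2)
m123, k123 = dempster(m12, s3)
print(m123, k123)  # fusion sharpens the support for target A
```

With concordant readings like these, each combination step concentrates mass on the hypothesis the sensors agree on, which is the behaviour the text describes for target A.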
More importantly, by comparing the original results with the negation results, the decision gains more support for target A. p̄(A) = 0.055 reflects the probability that the target is not A; that is to say, it increases the support for target A from the other side. Hence, two results are obtained from the two sides of the data, which helps to make a more reasonable decision in target recognition.
Next, consider the changes of entropy between the original and the negation, taking q = 3 since there are three sensors. From the calculation, it can be seen that the Tsallis entropy after fusion becomes smaller, showing that fusion can decrease the uncertainty of the information.
However, fusion rules do not work for extreme probability distributions, and negation provides another view from which to analyse this phenomenon. A special example better explains the application of negation. Suppose there are two sensors to recognize a target which may be A, B or C; the results from the sensors are as follows. Using the fusion rule, one obtains the results as follows. From the result, the target must be C, since p(C) = 1. Obviously, this result is not very reliable: when the evidence is highly conflicting, the Dempster rule does not work. However, negation provides additional information, which is useful for making a decision. Next, consider the changes of entropy between the original and the negation, taking q = 2 since there are two sensors:

T_1 = 0.4800, T_2 = 0.4800, T = 0
T̄_1 = 0.8100, T̄_2 = 0.8100, T̄ = 0.7480

Obviously, negation gives us new information. From the above result, the probability that the target is not C is 0.62, and the result of the negation is more reasonable than the original result. More importantly, negation provides another view from which to analyse the problem and can better handle conflicting evidence. Hence, negation is useful for making decisions.
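The pathology described above can be reproduced on hypothetical Zadeh-style readings (not the paper's data): two sensors that strongly disagree, agreeing only on a hypothesis both consider very unlikely. The `dempster` helper is my own minimal singleton-frame implementation:

```python
def dempster(m1, m2):
    """Dempster's rule on a singleton frame; returns (fused masses, conflict K)."""
    k = sum(m1[a] * m2[b] for a in m1 for b in m2 if a != b)
    return {a: m1[a] * m2[a] / (1.0 - k) for a in m1}, k

s1 = {"A": 0.99, "B": 0.0, "C": 0.01}  # hypothetical: sensor 1 strongly favours A
s2 = {"A": 0.0, "B": 0.99, "C": 0.01}  # hypothetical: sensor 2 strongly favours B

m, k = dempster(s1, s2)
print(m, k)  # m(C) = 1 although both sensors rated C at only 0.01; K = 0.9999
```

Here nearly all the joint mass is conflicting (K = 0.9999), and normalizing the tiny agreeing remainder yields the counterintuitive certainty m(C) = 1, which is exactly the failure mode the negation view is meant to mitigate.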
In addition, analysing the Tsallis entropy of the original distribution and of its negation, it can be found that the Tsallis entropy becomes larger. According to the second law of thermodynamics, the entropy of an isolated system never decreases, and the increase of entropy is irreversible; negation likewise increases the Tsallis entropy. From this view, negation makes a system evolve spontaneously towards thermodynamic equilibrium, the state of maximum entropy. That is to say, negation tends to make the distribution uniform, and in fact it does. Hence, negation not only provides a new way to understand problems but also improves the performance of decision support systems. Because Tsallis entropy is widely applied, measuring the uncertainty with it can enlarge the application of negation.

Conclusion
Probability distributions are efficient for representing knowledge. However, everything in nature and society has its negation, which shows that negation is essential; similarly, a probability distribution has its negation. This article extends the existing negation by using Tsallis entropy. Numerical examples are used to calculate the uncertainty after negation for different values of q. It is found that the Tsallis entropy reaches its maximum after many iterations of negation no matter what the value of q is, while the probability distribution becomes the uniform distribution. Hence, it is reasonable to use Tsallis entropy in the NPD once q is determined. From the above, the negation consumes some information and increases the Tsallis entropy. Finally, the article discusses the application of negation in target recognition based on sensor fusion, showing that negation can yield a more reasonable decision when the conflict is high. In sum, negation not only provides a new view for obtaining information from the other side of the known information, but can also include an imprecise class to make better decisions. Combining the known information with the information from its negation can increase the accuracy of decision-making.
This article is a preliminary study of the NPD based on uncertainty measurements. The work mainly aims to design a more efficient negation process and uncertainty measurement, and to expand the application of negation. It remains essential to determine the uncertainty related to the negation and to study its properties further.

Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The work is partially supported by National Natural Science Foundation of China (Grant Nos 61573290, 61503237).

Figure 1. The trend of Tsallis entropy after each iteration of the negation process.

Table 1. The change of probability after each iteration of the negation process.