A novel pigeon-inspired optimization with QUasi-Affine TRansformation evolutionary algorithm for DV-Hop in wireless sensor networks

In modern times, swarm intelligence has played an increasingly important role in finding an optimal solution within a search range. This study proposes a novel algorithm named the QUasi-Affine TRansformation-Pigeon-Inspired Optimization (QT-PIO) algorithm, which applies the evolution matrix of the QUasi-Affine TRansformation Evolutionary algorithm to the Pigeon-Inspired Optimization algorithm, itself designed after the homing behavior of pigeons. We abstract the pigeons into massless particles and improve the learning strategy of the particles. With different update strategies, the particles achieve more systematic movement and space exploration on account of adopting the evolution matrix of the QUasi-Affine TRansformation Evolutionary algorithm. This increases the versatility of the Pigeon-Inspired Optimization algorithm and enriches its otherwise simple structure. The new algorithm effectively remedies the shortcoming of being liable to fall into a local optimum. Under a number of benchmark functions, our algorithm exhibits good optimization performance. In wireless sensor networks, some problems still need to be optimized; for example, the error of node positioning can be further reduced. Hence, we apply the proposed optimization algorithm to positioning by integrating the QT-PIO algorithm into the Distance Vector-Hop algorithm, and the algorithm verifies its optimization ability through node localization. The experimental results demonstrate that it outperforms the Pigeon-Inspired Optimization algorithm, the QUasi-Affine TRansformation Evolutionary algorithm, and the particle swarm optimization algorithm; furthermore, it shows smaller errors and delivers a much more accurate localization.


Introduction
An optimization problem seeks the values of a set of parameters under certain constraint conditions so as to obtain a certain performance; it usually amounts to solving a mathematical problem. Swarm intelligence algorithms mostly stem from biological behaviors in nature, and most of them have strong optimization ability, making them a powerful tool for solving problems in life. 1 In the last few years, a good deal of naturally inspired computation methods have been proposed, which have their own extensibility and have been widely used in engineering and daily life. Although the Pigeon-Inspired Optimization (PIO) algorithm has obvious advantages such as faster convergence speed and simpler calculation compared with other algorithms, it still needs further study regarding the basic theory of convergence and multi-objective optimization.
In the early years, Duan and Qiu 1 proposed pigeon-inspired optimization, which comes out of simulating the homing behavior of pigeons through the magnetic field and landmarks. 2 Pigeons can seek out their homing tools with no difficulty: magnetic fields, the sun, and landmarks. The homing ability of pigeons has long been known, and pigeons were used as communication tools a long time ago. When the pigeons are far away from their destination, they use the geomagnetic field to identify the direction, and when they are closer to the destination they use local landmarks to navigate. The first operator model in the PIO algorithm is based on the geomagnetic field and the sun, while the landmark operator model is based on landmarks.
The PIO algorithm has good performance, but it has certain limitations. Based on the simple structure of the PIO algorithm and its peculiarity of combining easily with other intelligent algorithms, we bring forward an improved algorithm combined with the QUasi-Affine TRansformation Evolutionary (QUATRE) algorithm. QUATRE is a new metaheuristic algorithm proposed by Meng in 2016, which uses an evolutionary formula like the affine transformation in geometry. 3 PIO makes use of a group of pigeons to represent candidate solutions in the solution space to be studied, and the optimization proceeds as the pigeons move toward the best solution for a given measure of performance through iteration.
In this article, our proposed algorithm combines the PIO algorithm with the QUATRE algorithm. This novel QUasi-Affine TRansformation-Pigeon-Inspired Optimization (QT-PIO) evolutionary optimization algorithm still comprises two operators. 4 In the primary stage, the map and compass operator is executed to explore and exploit the solution space for multiple iterations. The particle learning strategy combines the evolutionary guidance matrix B and the co-evolution matrix M of the QUATRE algorithm into matrix form. In the second phase, the population is divided into two groups: in the first group, the population size reduces from ps to ps/2, while the particles of the second group start new explorations.
With the rapid development of information science and technology, the requirement of node location is becoming more and more exacting in wireless sensor networks (WSNs). 5,6 Nowadays, WSNs are widely applied in various fields, for example, intelligent transportation, defense, environmental monitoring, health care, space exploration, and many others. The Distance Vector-Hop (DV-Hop) positioning algorithm is one of a series of distributed localization methods based on distance-vector routing and Global Positioning System (GPS) localization.
The DV-Hop node localization algorithm is an important range-free positioning algorithm. 7,8 DV-Hop is a node localization algorithm that is independent of signal attenuation, and it has high practicability in terms of network cost, layout, and signal attenuation. 9 In a randomly distributed network, the node positioning error is large. More and more intelligence algorithms are used to optimize DV-Hop; this paper mainly takes advantage of QT-PIO for optimization, which decreases the positioning error, makes the location more accurate, and thus ensures a certain degree of accuracy.
This article mainly proposes a new approach for finding an optimal solution. The model performs well in optimization ability and effectively improves the convergence speed. We apply it to the DV-Hop algorithm, and the experimental results indicate that our approach greatly reduces positioning errors and improves positioning accuracy. In a word, it is justifiable to select QT-PIO to find the optimum solution. This article is organized as follows: the second section describes the basis of the new algorithm; the third section presents our novel algorithm; the fourth section describes the results of evolutionary optimization; the fifth section discusses the optimized DV-Hop algorithm; and the last section concludes.

Related work
This section mainly exhibits the basis of the proposed algorithm, including the original PIO algorithm and the QUATRE algorithm.

Canonical PIO algorithm
On the strength of the specific navigational behavior of pigeons searching for home, a bionic swarm intelligent optimization algorithm was proposed. The PIO algorithm puts to use two disparate operator models, which are put forward by simulating the mechanism by which pigeons use various guide instruments at different stages to find the loft. The two tools are the map and compass operator and the landmark operator. The pigeons perceive the magnetic field of the earth with magnetic material as their map and take the height of the sun as a compass.
In the loft model, the objects of study are the particles abstracted from the pigeons in the navigation process. 10,11 The parameters are initialized first: the population of pigeons is ps, the pigeons search in D-dimensional space, the number of runs is set to 100, the current iteration is iter, the total number of iterations is Iter_max = 600, the first stage runs maxIteration_1 = 300 iterations, the second stage runs maxIteration_2 = 300 iterations, and the position and velocity of every pigeon are recorded as X_j and V_j (j = 1, 2, 3, ..., ps), where j means the j-th individual in the population, X_j represents the coordinates of the j-th individual, and V_j signifies the velocity of the j-th individual. When iter is greater than 1 and less than maxIteration_1, the map and compass operator acts on the homing of the pigeons. The pigeon X_j updates its velocity at the (iter + 1)-th iteration by

V_j^(iter+1) = V_j^iter · e^(−R·(iter+1)) + b · (X_gbest^iter − X_j^iter)

where R is a positive number that serves as the map and compass factor, b is a numerical value randomly generated between 0 and 1, and X_gbest^iter is the global optimal position obtained by comparing the positions of all the pigeons after iter iterative loops.
Affected by the map and compass operator, each pigeon adjusts its position with its new velocity at the (iter + 1)-th iteration as

X_j^(iter+1) = X_j^iter + V_j^(iter+1)

When iter exceeds the maximum set for the first phase, exploration and exploitation stop immediately, and the optimization proceeds into the following phase, which takes advantage of the landmark operator. In this process, half of the pigeons are deserted with the passage of each iteration while the rest navigate toward the loft, for iteration counts iter from maxIteration_1 to maxIteration_1 + maxIteration_2. Pigeons that are far from the destination are considered to have no ability to distinguish the home path, so they are discarded. While the current number of iterations is less than maxIteration_1 + maxIteration_2, the position X_j of each remaining pigeon is updated as

X_c^iter = Σ_j X_j^iter · F(X_j^iter) / (ps^iter · Σ_j F(X_j^iter))
X_j^(iter+1) = X_j^iter + b · (X_c^iter − X_j^iter)

where X_c^iter is the weighted center of the remaining pigeons and serves as a landmark in the second phase, that is, the reference direction in which the remaining pigeons fly; b is a random value between 0 and 1, and ps^iter represents the population size at the iter-th generation. Depending on the question posed, F(X_j^iter) is defined as

F(X_j^iter) = 1 / (fobj(X_j^iter) + a) for the minimization problem, and F(X_j^iter) = fobj(X_j^iter) for the maximization problem,

where fobj(X_j^iter) is the function value of the pigeon X_j at the iter-th generation and a is any nonzero constant. 12 Similarly, the landmark operator stops working once iter exceeds the maximum iteration limit set previously.
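The two operators described above can be sketched compactly. The following Python sketch implements the canonical PIO for a minimization problem; the clipping to the search bounds, the per-pigeon scalar random factor b, and the small constant a = 1e-12 are illustrative assumptions, not details fixed by the original paper.

```python
import numpy as np

def pio(fobj, dim, bounds, ps=30, max_it1=300, max_it2=300, R=0.2, seed=0):
    """Minimal sketch of canonical Pigeon-Inspired Optimization (minimization).
    Parameter names (R, ps, maxIteration_1/2) follow the text."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (ps, dim))          # positions
    V = np.zeros((ps, dim))                     # velocities
    fit = np.apply_along_axis(fobj, 1, X)
    gbest = X[fit.argmin()].copy()

    # Phase 1: map and compass operator
    for it in range(1, max_it1 + 1):
        b = rng.random((ps, 1))
        V = V * np.exp(-R * it) + b * (gbest - X)
        X = np.clip(X + V, lo, hi)
        fit = np.apply_along_axis(fobj, 1, X)
        if fit.min() < fobj(gbest):
            gbest = X[fit.argmin()].copy()

    # Phase 2: landmark operator -- halve the flock each iteration
    for it in range(max_it2):
        order = np.argsort(fit)                 # best pigeons first
        keep = max(2, X.shape[0] // 2)
        X, fit = X[order[:keep]], fit[order[:keep]]
        a = 1e-12
        w = 1.0 / (fit + a)                     # minimization weight 1/(fobj + a)
        Xc = (w[:, None] * X).sum(0) / w.sum()  # weighted centre = landmark
        b = rng.random((X.shape[0], 1))
        X = np.clip(X + b * (Xc - X), lo, hi)
        fit = np.apply_along_axis(fobj, 1, X)
        if fit.min() < fobj(gbest):
            gbest = X[fit.argmin()].copy()
    return gbest, fobj(gbest)
```

A typical call on the sphere function, `pio(lambda x: float((x**2).sum()), 2, (-5.0, 5.0))`, contracts the flock toward the global best in phase 1 and toward the weighted landmark centre in phase 2.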

QUATRE algorithm
The evolutionary scheme used in the QUATRE algorithm is similar to the affine transformation in geometry. 13,14,15 The QUATRE algorithm adopts the evolutionary architecture X ← M ⊗ X + M̄ ⊗ B, where ⊗ denotes element-wise multiplication, M̄ is the binary complement of the co-evolution matrix M, and X stands for the population matrix, X = (X_1, X_2, X_3, ..., X_j, ..., X_ps)^T, j ∈ [1, ps]. There are two main operations for the conversion: the first step randomly permutes the elements within every row vector of the initial matrix M_initial, and the second step randomly permutes the row vectors of the matrix obtained in the first step. After these two consecutive steps, the matrix M is formed.
In the QUATRE algorithm, the coordinate dimension D is usually smaller than the population size ps, so the initial matrix M_initial ought to be expanded to the size of the particle population. Equation (12) shows how to extend it to ps rows (e.g. the population size ps = 3D + 2). On condition that ps % D = k, where % is the remainder operator, stacked D × D lower triangular matrices constitute the first ps − k rows of M_initial, and the first k rows of one further lower triangular matrix complete it. After performing the elementary transformation on M_initial, the matrix M is immediately evolved from M_initial. Each individual X_j (j ∈ [1, ps]) in the matrix X has a corresponding evolutionary guidance vector B, which has several different generation methods given in Table 1. 16 X_{rl,g} (l ∈ [1, ps]) denotes a random matrix produced by permuting the row order of the matrix X of the g-th generation, and X_gbest denotes the global best position at the g-th generation.
The matrix X_{gbest,g} is made up of ps identical row vectors, as shown in equation (13)

X_{gbest,g} = (X_gbest, X_gbest, X_gbest, ..., X_gbest)^T

F is the control factor of the difference matrix with a range of (0, 1], where the outcome of (X_{r,i} − X_{r,j}) (i, j ∈ [1, 2, ..., 5]) is deemed the difference matrix. Commonly, F = 0.7 is a good choice.
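One generation of the QUATRE update can be sketched as follows. The construction of M (lower-triangular blocks tiled to ps rows, then the two random permutations) and the "QUATRE/rand/1" donor follow the text; the greedy parent-versus-trial selection is an illustrative assumption in the style of differential-evolution schemes.

```python
import numpy as np

def quatre_step(X, fobj, F=0.7, rng=None):
    """One generation of X <- M*X + (1-M)*B (elementwise) for minimization,
    with the 'QUATRE/rand/1' donor B = X_r1 + F*(X_r2 - X_r3)."""
    rng = rng or np.random.default_rng()
    ps, D = X.shape
    # Build M_initial: D x D lower-triangular 1s, tiled row-wise up to ps rows
    tri = np.tril(np.ones((D, D)))
    M = np.vstack([tri] * (ps // D + 1))[:ps]
    # Step 1: permute the elements inside every row; Step 2: permute the rows
    for row in M:
        rng.shuffle(row)
    rng.shuffle(M)
    # Donor matrix B from row-permuted copies of X
    Xr1 = X[rng.permutation(ps)]
    Xr2 = X[rng.permutation(ps)]
    Xr3 = X[rng.permutation(ps)]
    B = Xr1 + F * (Xr2 - Xr3)
    # Quasi-affine update, then greedy one-to-one selection against the parent
    trial = M * X + (1 - M) * B
    fx = np.apply_along_axis(fobj, 1, X)
    ft = np.apply_along_axis(fobj, 1, trial)
    better = ft < fx
    return np.where(better[:, None], trial, X)
```

Because M is binary, each coordinate of a trial vector is taken either from the parent (where M is 1) or from the donor B (where M is 0), which is what makes the scheme resemble an affine transformation applied row-wise.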

QT-PIO algorithm
The newly proposed approach is an optimization algorithm combining the pigeon-inspired optimization algorithm with the quasi-affine transformation evolutionary algorithm. The basic PIO algorithm makes use of the first operator for exploration and the second operator for exploitation. The structure of the pigeon flock optimization algorithm is relatively simple; combined with the QUATRE algorithm, the two promote and supplement each other, which enhances the capability of PIO to solve complex problems. The QT-PIO algorithm changes the iterative update strategy of the particles, and the particles still work in two consecutive phases to find the optimal solution. First, the coordinates and velocities of all particles are initialized in matrix form using equations (14) and (15)

X = (X_1, X_2, ..., X_j, ..., X_ps)^T
V = (V_1, V_2, ..., V_j, ..., V_ps)^T

where V_j = [v_{j,1}, v_{j,2}, ..., v_{j,D}], j ∈ {1, 2, ..., ps}. In the first phase, the matrix corresponding to the mutation scheme "QUATRE/rand/1" is taken as the donor matrix B, and the positions X of all particles at the (iter + 1)-th iteration are updated by equation (16)

X^(iter+1) = M ⊗ X^iter + M̄ ⊗ B

In each iteration, under the mechanism of the first phase, all particles update their velocities and positions together in matrix form, rather than one particle at a time; the velocity matrix and the position matrix are obtained after every iteration. After the positions produced by the evolutionary equation (16) are compared pairwise with the individuals of the previous generation, the better individual is selected as the member of the next-generation population.
In the second optimization phase of QT-PIO, instead of abandoning the particles with relatively worse fitness values in the swarm (reducing ps to ps/2), this article considers that each particle has its own value. The particles are instead divided into two groups that adopt disparate communication strategies, so that they play different roles. In the first group, the population size is cut down linearly from ps to ps/2 as iter runs, and the changed population size of the first group is recorded as ps1; the spare particles form the second group. The first group of particles still uses the center position X_c^iter of the remaining particles as the landmark, which acts as their exploitation direction toward the best final solution, with the group size evolving as described in equation (17)

ps1 = ps − (ps/2) · iter / maxIteration_2

where maxIteration_2 is the number of iterations in the second phase, and F(X_j^iter) is the weight of the j-th particle, computed by equation (18) according to whether the posed problem is a minimization or a maximization

F(X_j^iter) = 1 / (fobj(X_j^iter) + a) for minimization, and F(X_j^iter) = fobj(X_j^iter) for maximization.

The other part of the particles explores the best final solution using the learning strategy of QUATRE shown in equation (8) and no longer regards the landmark as a direction to explore.
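The linear group schedule and the fitness-weighted landmark centre can be made concrete with a short sketch. The rounding of ps1 to an integer is an illustrative assumption; the text only specifies that the first group shrinks linearly from ps to ps/2 while no particle is discarded.

```python
import numpy as np

def phase2_group_sizes(ps, max_iteration2):
    """Linear schedule for QT-PIO's second phase: the exploiting group
    shrinks from ps down to ps/2 over maxIteration_2 iterations, and the
    remainder forms the exploring group (nobody is discarded)."""
    sizes = []
    for it in range(max_iteration2 + 1):
        ps1 = int(round(ps - (ps / 2) * it / max_iteration2))
        sizes.append((ps1, ps - ps1))  # (exploiting group, exploring group)
    return sizes

def landmark_center(X, fit, a=1e-12):
    """Fitness-weighted centre of the exploiting group for minimization,
    using the weight 1/(fobj + a) of equation (18)."""
    w = 1.0 / (np.asarray(fit) + a)
    X = np.asarray(X)
    return (w[:, None] * X).sum(0) / w.sum()
```

For ps = 40 and maxIteration_2 = 10, the schedule walks from (40, 0) down to (20, 20), so by the last iteration half of the swarm is exploiting the landmark direction and half is still exploring.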
In the meantime, the evolutionary guidance matrix B is calculated using equation (19)

B = X_{gbest,g} + F · (X_{r2,g} − X_{r3,g}).

In our algorithm, we set the parameters as follows: the dimension D, the search space [R_min^D, R_max^D], the population size of particles ps, the control factor F = 0.7, the map and compass factor R = 0.2, any nonzero constant a, and a random value b drawn from 0 to 1; the total number of iterations is Iter_max (comprising maxIteration_1 iterations in the first phase and maxIteration_2 iterations in the second phase); the velocity matrix V and the position matrix X of ps particles are generated randomly, and the fitness function f(X_j) is set. The generation process is outlined below: Step 1: Assign parameters and initialize the X and V of the particles in matrix form.
Step 2: Set the previous position X pbest = X, optimal solution location X iter gbest , the optimal value f (X iter gbest ).
Step 3: Produce matrix M and matrix B.
Step 4: Enter the first-phase iteration.
Step 5: Sort the particles from smallest to largest fitness, and select the center position from the ps1 historical best points.
Step 6: Enter the second-phase iteration.
That is all shown in Algorithm 1.

Algorithm 1. QT-PIO.
1: Input: Initialize D, [R_min^D, R_max^D], ps, F, R, a, Iter_max, maxIteration_1, maxIteration_2, V, X, f(X_j);
2: // Compute the fitness value of every particle
3: for j = 1 : ps do
4:     evaluate the fitness value f(X_j^iter);
5: end for
6: Set the previous position X_pbest = X, the optimal solution location X_gbest^iter, and the optimal value f(X_gbest^iter);
7: Generate the co-evolution matrix M according to equation (11);
8: for iter = 1 : maxIteration_1 do
9:     B = X_{r1,g} + F · (X_{r2,g} − X_{r3,g});
10:    // Update position and velocity

The novel algorithm combines PIO and QUATRE, introducing matrices and adopting new update strategies. In the second stage, the number of particles is reduced linearly, but no particle is discarded, which enhances the vitality of the particles and augments the diversity of the species, thereby improving the competitiveness of this algorithm.
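As a compact illustration, the two-phase procedure of Algorithm 1 can be sketched in Python for a minimization problem. The greedy parent-versus-trial selection, the redrawing of M every generation, the bound clipping, and the rounding of ps1 are illustrative assumptions rather than details fixed by the text.

```python
import numpy as np

def qt_pio(fobj, D, lo, hi, ps=40, it1=300, it2=300, F=0.7, seed=0):
    """Sketch of QT-PIO: phase 1 evolves the whole swarm with the QUATRE
    matrix update and a QUATRE/rand/1 donor; phase 2 linearly shrinks the
    exploiting (landmark) group from ps to ps/2 while the rest explore
    with a QUATRE/best/1 donor, as in equation (19)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (ps, D))
    fit = np.apply_along_axis(fobj, 1, X)
    g_i = fit.argmin()
    g_x, g_val = X[g_i].copy(), fit[g_i]

    def coevolution_matrix(n):
        # lower-triangular blocks tiled to n rows, then the two permutations
        M = np.vstack([np.tril(np.ones((D, D)))] * (n // D + 1))[:n]
        for row in M:
            rng.shuffle(row)
        rng.shuffle(M)
        return M

    def quatre_update(Xs, fs, donor):
        # X <- M*X + (1-M)*B with greedy parent-vs-trial selection
        M = coevolution_matrix(Xs.shape[0])
        trial = np.clip(M * Xs + (1 - M) * donor, lo, hi)
        ft = np.apply_along_axis(fobj, 1, trial)
        better = ft < fs
        return np.where(better[:, None], trial, Xs), np.where(better, ft, fs)

    # Phase 1: whole swarm, QUATRE/rand/1 donor
    for _ in range(it1):
        r1, r2, r3 = (X[rng.permutation(ps)] for _ in range(3))
        X, fit = quatre_update(X, fit, r1 + F * (r2 - r3))
        if fit.min() < g_val:
            g_i = fit.argmin(); g_x, g_val = X[g_i].copy(), fit[g_i]

    # Phase 2: exploiting group shrinks linearly from ps to ps/2
    for it in range(1, it2 + 1):
        order = np.argsort(fit)
        X, fit = X[order], fit[order]
        ps1 = int(round(ps - (ps / 2) * it / it2))
        w = 1.0 / (fit[:ps1] + 1e-12)                  # weights of equation (18)
        Xc = (w[:, None] * X[:ps1]).sum(0) / w.sum()   # landmark centre
        b = rng.random((ps1, 1))
        X[:ps1] = np.clip(X[:ps1] + b * (Xc - X[:ps1]), lo, hi)
        fit[:ps1] = np.apply_along_axis(fobj, 1, X[:ps1])
        if ps1 < ps:                                   # explorers: QUATRE/best/1
            donor = g_x + F * (X[rng.permutation(ps)][:ps - ps1]
                               - X[rng.permutation(ps)][:ps - ps1])
            X[ps1:], fit[ps1:] = quatre_update(X[ps1:], fit[ps1:], donor)
        if fit.min() < g_val:
            g_i = fit.argmin(); g_x, g_val = X[g_i].copy(), fit[g_i]
    return g_x, g_val
```

Tracking the best-so-far pair (g_x, g_val) makes the returned result monotone even though the landmark move in phase 2 is applied without selection.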

Experimental results and analysis
To evaluate the newly proposed QT-PIO algorithm, twenty-four functions are selected to test its performance. 17,18 We regard convergence speed and the minimum solution found as the measurement criteria in this article. The dimension settings and search scopes of the twenty-four test functions are given in the following tables: Table 2 shows the unimodal functions, Table 3 contains the multimodal functions, and Table 4 displays the fixed-dimension multimodal functions.
Eberhart and Kennedy first presented particle swarm optimization (PSO) in 1995; its basic concept originated from research on the foraging behavior of birds. 19,20,21 PSO was originally inspired by the flocking activity of birds, and a useful model is created by swarm intelligence. 22,23 To validate the quality of the QT-PIO algorithm on the optimization problems, it is compared with three other algorithms: the PIO algorithm, the PSO algorithm, and the QUATRE algorithm. The parameters of the four algorithms used in the comparison are given in Table 5. The mean and standard deviation over 600 iterations under the test functions are obtained, and the comparison is presented in Table 6, where the best value in each row is highlighted. As observed from Table 6, the new QT-PIO algorithm provides superior results on most of the selected test functions over 100 runs; QT-PIO wins on twenty of the twenty-four functions in the mean-value comparison. To evaluate the quality of the QT-PIO algorithm more intuitively, Figure 1 exhibits the convergence curves of PIO, QUATRE, PSO, and QT-PIO toward the best final fitness value for the unimodal functions, the curves for the multimodal functions are shown in Figure 2, and Figure 3 presents the curves for the selected fixed-dimension multimodal functions. In addition, Figures 1, 2, and 3 display the function search space used by the particles for optimization. As seen from Figure 1, the curve of the QT-PIO algorithm not only converges fastest but also reaches the best optimal solution, which means that it performs best under all seven selected unimodal functions. As a consequence, we can say that the QT-PIO algorithm optimizes well in the case of unimodal functions.
We select six multimodal functions for testing; Figure 2 indicates that the QT-PIO algorithm reaches the best optimal solution fastest on five of the six multimodal functions. Figure 3 shows that QT-PIO also achieves a fast convergence speed on the fixed-dimension multimodal functions.
According to the comparison of Figures 1, 2, and 3 and Table 6, there is no doubt that the convergence speed of QT-PIO is distinctly faster and that it achieves better optimal values. Under the selected test functions, the proposed algorithm performs remarkably well.
On the whole, QT-PIO is excellent and effective compared with its predecessors. The tendency of the original PIO algorithm to stagnate in local optima is handled well in the proposed algorithm, and the diversity of the QT-PIO algorithm is effectively improved.

Experiment for DV-Hop in WSNs problem
The research of positioning technology has far-reaching significance. Compared with traditional sensors that have limitations, WSNs have numerous advantages, such as smaller size, less energy consumption, simple organization, no special personnel required, and high fault tolerance. Not only can they reduce network deployment time and cut deployment costs, but they can also be used in a number of areas where traditional sensors cannot.
A WSN is a self-organized, multi-hop distributed network composed of randomly deployed sensor nodes that communicate wirelessly in the monitored area. 24 According to whether the distance between actual nodes needs to be measured during the locating process, localization algorithms can be classified into range-free location algorithms and range-based positioning algorithms. DV-Hop is a non-ranging localization algorithm based on distance-vector routing that avoids direct measurement of the distance between nodes; distance-vector routing determines the best path based on the distance to the destination.

Incipient DV-Hop
The pristine positioning algorithm first obtains the minimum hop count between each node of unknown location and the beacon nodes by distance-vector routing. Next, dividing the sum of the distances between beacon nodes by the total number of hops gives the average distance per hop. The average per-hop distance hop_avedi of anchor node i is calculated by equation (20)

hop_avedi = Σ_{j≠i} √((x_i − x_j)² + (y_i − y_j)²) / Σ_{j≠i} hop_ij

where hop_ij is the hop count from anchor node i to anchor node j, and (x_i, y_i) and (x_j, y_j) are the coordinates of anchor nodes i and j.
Then the average distance per hop is multiplied by the minimum hop count between the unknown node and the beacon node, and the resulting product is taken as the estimated distance between them. In the end, the location estimate of the unknown node is obtained by means of the maximum likelihood estimation method. The error of the state estimation is then evaluated by the objective function of equation (21)

f(x, y) = min Σ_{i=1}^{n} (√((x − w_i)² + (y − z_i)²) − d_i)²

where n is the number of anchor nodes, d_i signifies the estimated distance between the unknown node and anchor node i, (x, y) denotes the estimated position coordinate of the unknown node in the intelligence algorithm combined with DV-Hop, and (w_i, z_i) is the position coordinate of the i-th anchor node.
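Equations (20) and (21) translate directly into code. The sketch below assumes anchors are given as a dict of id → (x, y) coordinates and hop counts as a nested dict; these container choices are illustrative, not part of the original algorithm.

```python
import math

def average_hop_distance(anchors, hops):
    """Equation (20): for anchor i, hop_avedi = sum_j dist(i, j) / sum_j hop_ij
    over all other anchors j. `anchors` maps id -> (x, y); `hops[i][j]` is
    the minimum hop count between anchors i and j."""
    avg = {}
    for i, (xi, yi) in anchors.items():
        num = sum(math.hypot(xi - xj, yi - yj)
                  for j, (xj, yj) in anchors.items() if j != i)
        den = sum(hops[i][j] for j in anchors if j != i)
        avg[i] = num / den
    return avg

def dvhop_objective(p, anchors, d):
    """Equation (21): f(x, y) = sum_i (dist((x, y), anchor_i) - d_i)^2,
    to be minimized over the unknown node position p = (x, y)."""
    x, y = p
    return sum((math.hypot(x - wx, y - wy) - d[i]) ** 2
               for i, (wx, wy) in anchors.items())
```

The objective `dvhop_objective` is exactly the fitness function that a swarm optimizer can minimize over the two-dimensional candidate position.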

Our algorithm for optimizing DV-Hop
Although DV-Hop has the characteristic of a simple method, the requirements for accuracy keep rising with the needs of life and production. 25 Therefore, intelligent computation is often combined with it for localization. In this article, we make use of QT-PIO to optimize DV-Hop, which improves the precision of positioning and reduces the error. The optimized DV-Hop first calculates the minimum hop count among nodes, then figures out the average hop distance of each beacon node, and then uses the hop count to estimate the distance. At last, our novel QT-PIO algorithm is applied to estimate the position coordinates of the unknown nodes. The main positioning problem in WSNs is to reduce the positioning error, and here the novel algorithm is used for that optimization. 26,27,28 The experiment takes the two-dimensional best final position obtained by the QT-PIO algorithm as the approximated location of the unknown node. The detailed procedure is as below: 1. Set the parameters of QT-PIO and initialize the distance and hop-count matrices among nodes; 2. Generate the minimum hop count among all the nodes; 3. Calculate the average hop distance of the beacon nodes using equation (20); 4. Figure out d_i; 5. Employ QT-PIO to obtain the two-dimensional variable.

Optimal experiment
In the course of combining evolutionary algorithms with sensor localization, the two-dimensional optimal solution (x_i, y_i) signifies the i-th unknown node. The errors of the estimated positions of the unknown nodes are calculated by equation (22), which measures the Euclidean deviation between the estimated and true coordinates normalized by the communication distance S of the nodes. Table 7 shows the comparison of positioning errors and accuracy obtained from equations (22) and (23). Figure 4 depicts the comparison of the errors of the applied QT-PIO, PSO, PIO, and QUATRE after 20 runs. Examining the errors and accuracy of the four algorithms applied to DV-Hop, the QT-PIO algorithm reduces the estimation errors effectively and has a great effect on the positioning precision.
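A commonly used form of the normalized positioning error for DV-Hop experiments is sketched below; the exact averaging is an illustrative assumption, since the text only states that the deviation is normalized by the communication distance S.

```python
import math

def localization_error(estimated, actual, S):
    """Normalized positioning error: mean Euclidean deviation of the
    estimated unknown-node positions, divided by the communication
    distance S (in the spirit of the text's equation (22))."""
    n = len(estimated)
    total = sum(math.hypot(ex - ax, ey - ay)
                for (ex, ey), (ax, ay) in zip(estimated, actual))
    return total / (n * S)
```

An error of 0.25, for instance, means the average estimate misses the true position by a quarter of the node communication radius.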
It can be distinctly observed from the above that the results of QT-PIO under the given testing functions for the DV-Hop positioning algorithm in WSNs are preferable to those of the other three algorithms in the comparison.

Conclusion
To improve the capability and performance of PIO, this study proposed a new optimization algorithm, which mainly applies the co-evolution matrix M and the guidance matrix B of the QUATRE algorithm to the PIO algorithm and enables individuals to update their positions in matrix form during optimization. This learning strategy not only raises the diversity of PIO but also effectively avoids the problem of pigeons in the PIO algorithm plunging into a local optimum. The convergence curves demonstrate the convergence speed of this update strategy model; the rate of convergence is faster than that of the traditional model, and the model has good practicability. We then used the new optimized algorithm for DV-Hop positioning, which further reduces the positioning error and improves the accuracy. Our algorithm supplies an effective optimization mechanism for WSNs and their applications. The proposed QT-PIO algorithm may be further improved by adopting some efficient methods. 29,30