Sensor Mobility Control for Multitarget Tracking in Mobile Sensor Networks

In emerging tracking systems based on mobile wireless sensor networks, sensor mobility management is essential for balancing tracking performance and cost under limited network resources and sensor movements. This paper considers the sensor mobility control problem for multitarget tracking (MTT), in which multiple mobile sensors are dynamically grouped and moved to track multiple targets and collaborate within each sensor group via track data fusion. A novel sensor mobility control framework for mobile sensor network-based MTT is proposed. It is formulated as a constrained optimization problem that aims to maximize the overall tracking performance for all targets while conserving network energy and providing a tracking coverage guarantee. The optimization problem is relaxed into a convex programming problem for computational tractability, and its solution is implemented in a distributed manner. The newly proposed sensor mobility control scheme, implemented on the basis of iterative subgradient search, is shown via computer simulation to outperform static sensor network-based MTT.


Introduction
Target tracking is one of the key enabling techniques of wireless sensor networks in a variety of applications, including security and surveillance, traffic management, wildlife tracking, and environmental monitoring [1][2][3][4][5][6][7][8]. In some applications, such as warehouse management and security, mobile sensors can be more efficient and flexible than fixed ones [9][10][11]. A few mobile sensors may suffice to monitor an area of interest, removing the need to deploy a large number of static sensor nodes. In this paper, we consider tracking multiple targets using a network of mobile wireless sensors.
The use of a mobile sensor network introduces new challenges in obtaining satisfactory performance for multitarget tracking (MTT). Among them, we focus on the problem of sensor mobility control, which arises mainly from the mobility and autonomous nature of the sensors. For example, a sensor node may choose to move towards a certain target so as to attain target signal measurements with a higher signal-to-noise ratio (SNR) and improve the tracking accuracy. However, this decision could enlarge the distances between the sensor and other targets and, as a result, degrade the overall MTT performance. On the other hand, several sensors may track the same target, and, to enhance performance, it would be desirable to fuse their target signal measurements. The usefulness of this technique is nevertheless limited by the fact that, in some applications, the number of targets that can be simultaneously tracked by a mobile sensor is bounded due to, for example, insufficient on-board resources for computation, sensing, and communications. Therefore, an efficient sensor mobility control scheme that can optimally schedule the movement of the mobile sensors and allocate them to different targets is crucial for MTT with mobile sensor networks.
We develop in this paper a novel sensor mobility control framework for a mobile sensor network-based MTT system, where each sensor node is able to measure the time of arrival (TOA) and the direction of arrival (DOA) of the target signals. Obtaining both TOA and DOA measurements ensures that each mobile sensor can determine the target location individually, in contrast to systems where only range or angle measurements are available. The proposed sensor mobility control scheme jointly specifies the moving trajectories of the mobile sensors and forms groups of sensors for tracking different targets in an optimum manner. The sensor movement scheduling affects the MTT performance by changing the relative locations of the targets with respect to the sensors and thus the noise level of the target TOA and DOA measurements. The sensor allocation enables the application of data fusion techniques to obtain improved target trajectory estimates. The combined effect of these two functions of the developed sensor mobility control scheme is to maximize the MTT performance.
The development of the sensor mobility control scheme is nontrivial. The underlying reason is that the sensor allocation is in fact a multidimensional integer programming problem, and finding its solution remains an open problem. In the existing literature, greedy algorithms, heuristic searching, divide-and-conquer methods, and grid-based exhaustive searching have been most widely attempted. However, these algorithms have very high computational complexity, which hinders their practical application. Moreover, some techniques, such as greedy and heuristic searching, may not always yield the globally optimal solution (i.e., they suffer from local convergence). In this paper, the primal-dual approach and the subgradient searching algorithm are applied to solve for the optimal sensor movement and sensor allocation in the sensor mobility control for MTT with mobile sensors. The proposed solution can be realized in a centralized or distributed manner, and its complexity increases linearly with the number of mobile nodes and targets, which makes it attractive for practical deployment.
The rest of the paper is organized as follows. Section 2 surveys the literature related to our work and highlights the contributions of this paper. Section 3 presents in detail the considered mobile sensor network-based MTT system and Section 4 formulates the sensor mobility control as an optimization problem. Iterative solution for the sensor mobility control problem on the basis of the subgradient searching technique is developed in Section 5 and its distributed implementation is given in Section 6. Section 7 illustrates the performance of the proposed sensor mobility control scheme using computer simulations. We conclude our work in Section 8.

Related Work
Target tracking with wireless sensor networks is a classic problem that has been extensively studied in the literature. In particular, for single target tracking, significant effort has been devoted to grouping and scheduling stationary sensors for achieving optimum tracking performance [12][13][14][15][16]. For the more general case where more than one target is of interest, the problem of sensor allocation has also been investigated under various optimization criteria, and several centralized sensor allocation schemes have been proposed [17][18][19][20][21][22].
Relatively less attention has been paid to the problem of sensor mobility control, which consists of sensor movement scheduling and sensor allocation for target tracking with mobile sensor networks. Within the single target tracking framework, sensor movement scheduling was considered in [23, 24]. More specifically, in [23], the authors studied range-only tracking and proposed a centralized sensor movement scheduling method for minimizing the trace of the covariance matrix of the target position estimate or, equivalently, the localization mean square error (MSE). The minimization problem was solved using modified Gauss-Seidel and linear programming relaxations. This work, nevertheless, did not take into account the sensor allocation aspect. Reference [24] determined the sensor movement by partitioning the field of interest into grids of equal area and performing a grid-based exhaustive search; the associated computational complexity can be prohibitive when the region to be monitored is large. References [25, 26] considered the adaptive flocking of static sensors with limited sensing range (LSR) and the grouping of mobile sensors for tracking a single maneuvering target via the Kalman-consensus filter (KCF), with the sensor grouping based on the Fisher information metric. For the MTT scenario, [1] proposed a greedy algorithm for sensor allocation and sensor movement scheduling to minimize the capture time in target tracking. Reference [27] investigated collaborative prediction among mobile sensors with limited communication connectivity for improving the prediction of spatial-temporal physical phenomena using truncated measurements; the sensor navigation scheme was established by solving an optimization problem that minimizes the prediction variance.
This paper is a significant extension of the authors' previous work [28], where accurate target identification and the use of static sensor networks were assumed. Here, we consider the MTT problem with mobile sensor networks, and we continue to assume perfect data association as in [28]. Our work differs from the prior literature in the following aspects. Firstly, we propose a generic framework for sensor mobility control in an MTT system where a sensor node is allowed to track multiple targets simultaneously and each target can be tracked by several sensor nodes, thus enabling data fusion for improved performance. Secondly, the sensor movement scheduling and the sensor allocation within the sensor mobility control are determined jointly for obtaining the optimum tracking performance under practical system constraints. Finally, we apply the linear programming relaxation and the subgradient searching technique to solve for the desired sensor mobility control scheme.

System Model
We consider tracking multiple moving targets with a network of wireless mobile sensors. It is assumed that the sensor nodes have sufficient sensing and wireless communication capabilities so that they are able to keep monitoring the whole region of interest and maintain their intersensor communication connections. The sensor node positions are known from the use of network self-calibration techniques such as the one in [29] or built-in GPS units. We further assume that the mobile sensor nodes can move freely within the monitored area.
Each sensor node is allowed to track either a single target or multiple targets, and it can collaborate with other sensors through track fusion to estimate the position of the same target with better accuracy. The mobility of the sensor nodes, together with their freedom to select the targets to track, gives rise to the problem of sensor mobility control investigated in this work. In particular, we consider the problem of joint sensor allocation and sensor movement scheduling for attaining optimal MTT performance. A distributed implementation of the proposed solution to this problem will also be presented.
Within the mobile sensor network, each sensor node is equipped with a multilayer target tracking system, originally developed for a wireless local positioning system for remote mobile monitoring [28]. It consists of five functional modules intertwined in a hierarchical manner, with each module focusing on one designated task [28, 30]. They are termed the sensing, sensor mobility control, target measurement collection, MTT, and information relay modules. The MTT task during each sampling interval is performed as follows. The sensing module at a sensor detects the targets by measuring the received SNRs. The sensor mobility control module dynamically decides which targets to track and the location the sensor node should move to. Upon arrival at the new position, the target measurement collection module is triggered to obtain target position-related measurements, namely, the times of arrival (TOAs) and directions of arrival (DOAs) of the target signals. Afterward, the tracking and fusion module steps in, where a tracking algorithm, a Kalman filter (KF) in this paper, is executed to fuse the measurements of a target, possibly collected by a number of sensor nodes, and update the estimates of the target motion parameters such as its position and velocity, as in [1]. The function of the information relay module is to pass on necessary information to other sensor nodes if the current node quits the tracking of specific targets. The above multilayer tracking system is further elaborated in Table 1.
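The per-sampling-interval interaction of the five modules described above can be sketched as a simple control loop. The following is an illustrative sketch only; the module interfaces (`detect`, `mobility_control`, `collect`, `track_fuse`, `relay`) and the `SensorNode` structure are our own names, not defined in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class SensorNode:
    position: tuple
    tracked_targets: list = field(default_factory=list)

def tracking_cycle(sensor, detect, mobility_control, collect, track_fuse, relay):
    """One sampling interval of the five-module pipeline (illustrative)."""
    detections = detect(sensor)                    # sensing: SNR-based detection
    targets, new_pos = mobility_control(sensor, detections)
    sensor.position = new_pos                      # move before measuring
    measurements = collect(sensor, targets)        # TOA/DOA collection
    estimates = track_fuse(sensor, measurements)   # KF-based track fusion
    dropped = set(sensor.tracked_targets) - set(targets)
    relay(sensor, dropped)                         # hand over dropped tracks
    sensor.tracked_targets = targets
    return estimates
```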
In the rest of this section, we present the Kalman filtering algorithm adopted in the tracking and fusion module. The purpose is to introduce the relevant symbols and notation in order to facilitate the development of the distributed sensor mobility control scheme in the next section.
where A_j(k) is the transition matrix and v_j(k) is the zero-mean Gaussian system noise with covariance Q_j(k). Let N_j(k) be the set of sensor nodes whose sensor mobility control modules decide that they will collaboratively track target j at time index k. Suppose that sensor i belongs to N_j(k). Its sensing module then extracts the TOA and DOA from the signals of the jth target, denoted by τ_ij(k) and θ_ij(k). The target position at time index k, z_ij(k) = [x̃_ij(k), ỹ_ij(k)]^T, can be estimated at sensor i using (2a) and (2b), where s_i(k) = [x_i(k), y_i(k)]^T is the position of sensor i at time index k, as determined by the sensor mobility control module. The above transformation has been shown to be unbiased and consistent under mild conditions that are satisfied in many practical scenarios [30, 31]. The measurement vector z_ij(k) is related to the target state vector u_j(k) via (3), where H_j(k) = [I_{2×2}, O_{2×2}] and w_ij(k) is the measurement noise, independent of the system noise v_j(k). The linearity of the target motion model (1) and the measurement model (3) enables the direct application of the classic KF in track fusion. It is worth pointing out that, when the target position-related measurements are nonlinearly related to the target state vector, as in bearing-only tracking (BOT), the proposed sensor mobility control scheme remains applicable, provided we convert the nonlinear measurement equation into a linear one using a first-order Taylor series expansion. This is equivalent to using the extended Kalman filter (EKF), but caution must be taken because the EKF can introduce unmodeled errors [30].
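The exact transformation (2a) and (2b) is given in [30, 31]; the sketch below shows the standard conversion one would expect from a TOA/DOA pair to a 2-D position, assuming one-way propagation (range = c · TOA) and a DOA measured counterclockwise from the x-axis. These conventions are our assumptions, not the paper's.

```python
import math

def toa_doa_to_position(sensor_pos, toa, doa, c=3e8):
    """Convert a TOA/DOA pair at a sensor into a 2-D target position estimate.

    Assumes range = c * toa (one-way propagation) and doa measured in radians
    counterclockwise from the x-axis; a sketch, not the paper's (2a)-(2b).
    """
    r = c * toa  # estimated sensor-to-target range
    x = sensor_pos[0] + r * math.cos(doa)
    y = sensor_pos[1] + r * math.sin(doa)
    return (x, y)
```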
In the target motion model (1), the process noise v_j(k) has a covariance matrix Q_j(k) that is typically assumed to be time invariant and known a priori. On the other hand, the measurement noise w_ij(k), despite being zero mean and Gaussian, has a covariance matrix R_ij(k) that is time varying. This is mainly because it depends on the received SNR as well as the relative positions of the target and the mobile sensor node. Owing to the unbiasedness and efficiency of the transformations (2a) and (2b), we choose the Cramér-Rao lower bound (CRLB) of the measurement noise as the functional form for R_ij(k) in the development of the sensor mobility control scheme. The CRLB of R_ij(k), denoted by M_ij(k), has been derived in [32].
After obtaining the current measurement vector z_ij(k), Kalman filtering is performed to update the estimate of the target state vector. We summarize the necessary processing for the completeness of this paper. Let û_j(k-1 | k-1) be the a priori state estimate for target j at time index k-1 and û_j(k | k) the a posteriori state estimate at time index k. Correspondingly, their error covariance matrices are P_j(k-1 | k-1) and P_j(k | k), respectively. We adopt an information form of the KF for track fusion in order to facilitate the development of the sensor mobility control scheme. The state vector estimate û_j(k | k) and its error covariance matrix P_j(k | k) are transformed into the information state vector ŷ_j(k | k) = P_j^{-1}(k | k) û_j(k | k) and the information matrix Y_j(k | k) = P_j^{-1}(k | k).

Table 1: Distributed implementation of the proposed sensor mobility control scheme.

In the prediction steps (4a) and (4b) and the updating steps (4c) and (4d), M_ij(k) is the CRLB of the measurement vector z_ij(k) at sensor i. The information form of the KF has the same estimation accuracy as its conventional counterpart, but it provides computational advantages for multisensor data fusion [33]. This is evident from (4c) and (4d), where the sensor measurements contribute in an additive manner; as such, the distributed implementation of the information form of the KF follows naturally. The task of the sensor mobility control scheme is to determine the sensor sets N_j(k), j = 1, ..., N, as well as the sensor positions s_i(k), i = 1, ..., M, so as to obtain the optimal tracking accuracy for the targets (the sensor positions affect M_ij(k), as discussed in [32]).
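The additive structure of the information-form update can be sketched in a few lines. The following is a generic information-filter step consistent with the prediction/update pattern of (4a)-(4d); the variable names and the list-of-measurements interface are our own, and the conversion through the covariance domain in the prediction step is one common way to implement it, not necessarily the paper's exact recursion.

```python
import numpy as np

def info_filter_step(y_prev, Y_prev, A, Q, H, meas_list):
    """One prediction + update of an information-form Kalman filter.

    meas_list holds (z_i, M_i) pairs from the sensors tracking this target.
    Their contributions enter additively in the update, which is what makes
    distributed multisensor fusion straightforward. Illustrative sketch.
    """
    # Prediction: recover state-space form, propagate, convert back.
    P_prev = np.linalg.inv(Y_prev)
    u_pred = A @ (P_prev @ y_prev)
    P_pred = A @ P_prev @ A.T + Q
    Y_pred = np.linalg.inv(P_pred)
    y_pred = Y_pred @ u_pred
    # Update: each sensor adds its information contribution.
    Y_post = Y_pred + sum(H.T @ np.linalg.inv(M) @ H for _, M in meas_list)
    y_post = y_pred + sum(H.T @ np.linalg.inv(M) @ z for z, M in meas_list)
    return y_post, Y_post
```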

Problem Formulation for Sensor Mobility Control
This section formulates the optimization problem for sensor mobility control, which allocates the sensors to targets and moves the sensors to designated positions. The goal is to estimate the target state vectors with optimal accuracy using the information form of the KF in (4a), (4b), (4c), and (4d).
We select the mutual information of the target state vector u_j(k), denoted by I[u_j(k); z_j(k)], as the basis for generating the objective function. I[u_j(k); z_j(k)] evaluates the reduction of uncertainty in u_j(k) due to the utilization of the sensor measurements from target j up to time index k that are collected in z_j(k). Mathematically, we have (5), where H[u_j(k)] is the prior uncertainty, depending on the target motion only, and H[u_j(k) | z_j(k)] is the remaining uncertainty after the data z_j(k) have been obtained. It has been shown in [28] that the remaining uncertainty can be written in closed form, where p(u_j(k) | z_j(k)) is the probability density function (PDF) of u_j(k) given the sensor measurements up to time index k, c is a constant, and |·| denotes the matrix determinant. From (4d), Y_j(k | k) is the information matrix of the KF output ŷ_j(k | k). The determinant of the inverse, |Y_j^{-1}(k | k)|, is a measure related to the volume of the confidence ellipsoid for the target state estimate ŷ_j(k | k) [30].
Note that the targets move independently; we therefore sum (5) over j = 1, 2, ..., N to produce the desired objective function to be maximized for sensor mobility control. In other words, the sensor mobility control achieves the optimal tracking accuracy for all targets by maximally reducing the uncertainty in the target state vectors at time index k. Again from (4d), the value of the objective function depends on N_j(k) as well as M_ij(k), which correspond to the allocation of sensors for tracking target j and the positions of the sensors at time index k. They have to be optimized jointly. Substituting (4d) into (5) and noting that the prior uncertainty H[u_j(k)] relates solely to the target motion, we can express the obtained objective function as in (7); the derivation of (7) can be found in [28]. It is important to point out that the objective function in (7) actually needs to be maximized before time index k in order to obtain the optimal performance at time index k. This imposes difficulty in evaluating the CRLB of the sensor measurement, M_ij(k), because the true target state vector at time index k remains unavailable. To address this issue, we replace M_ij(k) with its predicted version M_ij(k | k-1), which has the same functional form as M_ij(k) except that the target state vector at time index k is substituted by its prediction û_j(k | k-1) = Y_j^{-1}(k | k-1) ŷ_j(k | k-1), obtainable from (4a) and (4b) at time index k-1.
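The resulting objective, a sum of log-determinants of the posterior information matrices, can be evaluated numerically as follows. This is a sketch under our own naming conventions: `a[i][j]` is a (possibly fractional) allocation of sensor i to target j, and `M_pred[i][j]` plays the role of the predicted measurement CRLB; neither name is from the paper.

```python
import numpy as np

def mtt_objective(Y_pred_list, H, M_pred, a):
    """Sum over targets of log|Y_j|, with Y_j the predicted information
    matrix plus allocation-weighted sensor information contributions.

    A larger value means a smaller total uncertainty volume. Illustrative.
    """
    total = 0.0
    for j, Y_pred in enumerate(Y_pred_list):
        Y = Y_pred.copy()
        for i in range(len(a)):
            # Each allocated sensor adds its (scaled) information.
            Y += a[i][j] * H.T @ np.linalg.inv(M_pred[i][j]) @ H
        _, logdet = np.linalg.slogdet(Y)
        total += logdet
    return total
```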
We proceed to incorporate practical constraints on the variables to be optimized in (8), namely, the sensor allocation variables a_ij(k) and the sensor positions s_i(k), to complete the formulation of the optimization problem for sensor mobility control. Firstly, to avoid losing track of any target, each target must be tracked by at least one mobile sensor node; that is, ∑_{i=1}^{M} a_ij(k) ≥ 1 for j = 1, ..., N. Secondly, although each mobile sensor node is allowed to track more than one target at the same time, the total number of targets being simultaneously tracked should not exceed L_i, due to limitations on processing capability, memory size, and/or energy supply. Mathematically, ∑_{j=1}^{N} a_ij(k) ≤ L_i for i = 1, ..., M. The combination of the above two constraints yields four scenarios of practical interest. In Case 1, the simplest case, each target is allowed to be tracked by one sensor node only, while each sensor node can track at most one target at a time. Case 2 differs from Case 1 in that each sensor node is able to track more than one target simultaneously. In Case 3, each target is tracked by more than one sensor node and each sensor tracks more than one target. In contrast to Case 3, in Case 4, a sensor node can track at most one target.
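The two generic constraints above can be checked directly on a binary allocation matrix. The matrix representation and names below are our own illustration, not the paper's notation.

```python
def check_allocation(a, max_per_sensor):
    """Check the two generic MTT allocation constraints on a binary matrix a,
    where a[i][j] = 1 if sensor i tracks target j (illustrative sketch).

    Constraint 1: every target is tracked by at least one sensor (column sums).
    Constraint 2: no sensor exceeds its tracking capacity (row sums).
    """
    every_target_covered = all(
        sum(a[i][j] for i in range(len(a))) >= 1 for j in range(len(a[0]))
    )
    sensor_load_ok = all(sum(row) <= max_per_sensor for row in a)
    return every_target_covered and sensor_load_ok
```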
Combining any of the four constraint sets with the objective function in (8) yields the corresponding optimization problem for sensor mobility control in a particular network setting of interest. Essentially, no matter which scenario is selected, we obtain a nonlinear combinatorial optimization problem. A number of solutions have been proposed in the literature, such as [34, 35]. However, these algorithms have two major disadvantages that hinder their application to the sensor mobility control problem considered in this paper: (1) high computational complexity and (2) a centralized processing nature. These observations motivate us to develop, in the next two sections, distributed solutions with relatively lower complexity to the sensor mobility control problem.

Iterative Solutions for Sensor Mobility Control
We present the solutions for the sensor mobility control problem formulated in the previous section, which maximally reduces the summed uncertainty of the target state vectors u_j(k) by jointly optimizing the sensor allocation variables a_ij(k) and the sensor positions s_i(k) at time index k (see (8)). In particular, the solutions to the optimization problems with the constraints on a_ij(k) from Cases 1 and 3 are presented in detail. The sensor mobility control solutions for Cases 2 and 4 can be established in a similar manner and are described briefly.

Sensor Mobility Control for Case 1.
The above problem is NP-hard, mainly due to the binary nature of a_ij(k) [34]. Enumerating all the feasible solutions is computationally unacceptable, as their number grows factorially with the problem dimension.
To solve the maximization problem (9), it is relaxed into the problem (10). We will develop an iterative solution to (10) by combining the primal-dual formulation with the projected subgradient searching technique. Subgradient searching offers greater simplicity and better robustness than commonly used algorithms such as the interior-point technique or Newton's method [36, 37]. In addition, the implementation of the subgradient method requires much less memory, which makes it attractive for mobile sensor network applications.
The algorithm development begins with deriving the Karush-Kuhn-Tucker (KKT) conditions for the optimal solutions to (10). Following the same approach as in [28], it can be shown that the optimization problem (10) is convex with differentiable objective and constraint functions, which guarantees the global convergence and optimality of the subgradient searching method when it is used to solve (10); as a result, Slater's condition is satisfied. The associated Lagrangian involves the Lagrange multipliers λ_j and μ_i, whose dependency on the time index is suppressed for simplicity of presentation. The Lagrange dual problem is then formed, with λ_j, μ_i ≥ 0, i = 1, ..., M and j = 1, ..., N. The optimal solutions to the primal and dual problems are a*_ij(k), s*_i(k) = arg min L(a(k), s(k), λ*, μ*), where the determinant term |P_j^{-1}(k | k-1) + H_j^T(k) M_ij^{-1}(k | k-1) H_j(k)| is evaluated at s*_i(k) and a*_ij(k) is a_ij(k) evaluated at the optimal sensor positions s*_i(k). It is very hard to find the optimal values a*_ij(k) and s*_i(k) in closed form from the KKT conditions. Therefore, an iterative searching algorithm, the subgradient searching technique presented below, is resorted to in order to produce the desired solution for sensor mobility control.
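The projected subgradient iteration at the heart of this solution can be sketched generically. The (sub)gradient of the relaxed objective or Lagrangian is abstracted as a callable, and the box projection covers the LP-relaxed allocation variables 0 ≤ a_ij(k) ≤ 1; all names are our own.

```python
import numpy as np

def projected_subgradient(grad, x0, step, n_iter, lo=0.0, hi=1.0):
    """Generic projected (sub)gradient ascent with box projection.

    grad(x) returns a (sub)gradient of the objective at x; after each ascent
    step the iterate is projected back onto [lo, hi]^n via clipping, which is
    the Euclidean projection onto a box. Illustrative building block.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x + step * np.asarray(grad(x))  # ascent step
        x = np.clip(x, lo, hi)              # projection onto the box
    return x
```

For a diminishing step size the method converges to the optimum of a concave objective; a small constant step, as used here, converges to a neighborhood of it.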

Sensor Mobility Control for Case 3.
We consider in this subsection the sensor mobility control under the constraints of Case 3. In this scenario, each sensor can track multiple targets and each target is tracked by more than one sensor. Track fusion using the information form of the KF in (4a), (4b), (4c), and (4d) is applied to jointly exploit the measurements from the sensors tracking the same target. The optimization problem for sensor mobility control in this case follows from (8), the constraints of Case 3, and the LP relaxation that transforms a_ij(k) ∈ {0, 1} into 0 ≤ a_ij(k) ≤ 1. This is essentially a nonlinear convex optimization problem. By applying the primal-dual formulation and the subgradient searching approach similar to those described in Section 5.1 to the associated Lagrangian, we obtain the iterative solution for sensor mobility control for Case 3. For Cases 2 and 4, the LP relaxation on the sensor allocation variables a_ij(k) is applied in the same way; from the associated Lagrangian of the resulting optimization problem, the corresponding iterative sensor mobility control solution follows through the primal-dual formulation and subgradient searching with an appropriate step size. The definitions of the quantities involved, including Γ and Λ, can be found in (23).

Distributed Sensor Mobility Control
This section addresses several important aspects of implementing the iterative sensor mobility control scheme developed in the previous section in a distributed manner. The first aspect is the stopping criterion for terminating the iterations within the proposed scheme. We note that the objective functions are all differentiable. Hence, the subgradient searching algorithm is theoretically guaranteed to converge to the optimal values, that is, lim_{n→∞} a_ij^{(n)}(k) = a*_ij(k), i = 1, ..., M, j = 1, ..., N, given that the step size is sufficiently small [36, 37]. However, we may have limited processing time in real-time applications, and not all a_ij(k) may be able to reach their optimal values synchronously. Therefore, in practical implementations, we terminate the iterations as follows. The iteration is stopped at the nth iteration for sensor i when the difference ∑_j |a_ij^{(n)}(k) − a_ij^{(n-1)}(k)| falls below a small threshold ε_th or when a maximum number of iterations n_max has been reached. The final value of a_ij^{(n)}(k), if fractional, is rounded to 0 or 1 and is taken as the sensor allocation decision at time k.
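The stopping rule and rounding step described above can be sketched as follows; the threshold, iteration cap, and rounding cutoff are placeholder values of our choosing.

```python
def should_stop(a_curr, a_prev, iteration, eps_th=1e-4, n_max=100):
    """Stop when the total change in a sensor's allocation variables falls
    below a small threshold, or when the iteration cap is reached."""
    delta = sum(abs(c - p) for c, p in zip(a_curr, a_prev))
    return delta < eps_th or iteration >= n_max

def round_allocation(a, thresh=0.5):
    """Round fractional LP-relaxed allocations to a binary decision."""
    return [1 if v >= thresh else 0 for v in a]
```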

Simulation Results
Computer simulations are conducted to demonstrate the impact of the proposed iterative sensor mobility control scheme on the MTT accuracy. In the simulations, the mobile sensors assume that the motion of any target follows the same first-order Markov process given in (1). The system noise v_j(k) is a zero-mean Gaussian random vector with covariance matrix σ_v^2 I_{2×2}. We set σ_v^2 = 0.05 m^2/s^2 and the sampling interval Δt = 1 s in the simulations.
Each mobile sensor is also assumed to have the same target TOA and DOA measurement model. Specifically, the received target signals are corrupted at the mobile sensors by additive white Gaussian noise (AWGN). Under the above settings, the CRLBs of the TOA and DOA from target j jointly measured at sensor i, denoted by C_τ,ij(k) and C_θ,ij(k), can be expressed as C_τ,ij(k) = k_1/γ_ij(k) and C_θ,ij(k) = k_2/(γ_ij(k) sin^2 θ_ij(k)), where γ_ij(k) is the received SNR and k_1 and k_2 are constants dependent on the effective bandwidth and wavelength of the target signal, the number of antennas, and the antenna separation [32]. The TOA and DOA measurements are converted into a target position estimate, where the estimation error w_ij(k) is also zero mean and has a covariance matrix M_ij(k) related to C_τ,ij(k) and C_θ,ij(k); the explicit form of M_ij(k) is given in [32]. Thanks to the mobility of the sensor nodes, it is not necessary to deploy many sensors to achieve effective monitoring of an area of interest. Hence, in the simulations, we consider only 4 sensors deployed in a square with an edge length of 140. We investigate the performance of the sensor mobility control schemes developed for MTT Cases 1 and 3 (see Section 4). In particular, in Case 1, the number of moving targets is set to 4, each target can be tracked by only one mobile sensor, and each sensor is allowed to track one target only. For Case 3, we also set the number of moving targets to 4, but each target is tracked by more than one sensor and each sensor is able to track more than one, yet no more than two, targets at the same time (i.e., L_i = 2, i = 1, 2, ..., 4).
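The SNR-dependent CRLB model above can be sketched numerically. The constants and the DOA convention below are placeholders, not the values derived in [32]; the sketch only illustrates that both bounds scale inversely with the received SNR.

```python
import math

def toa_doa_crlbs(snr, k1=1.0, k2=1.0, doa=math.pi / 2):
    """CRLBs of TOA and DOA in the simulated model (illustrative constants):
    C_toa = k1 / snr and C_doa = k2 / (snr * sin^2(doa)) [cf. [32]]."""
    c_toa = k1 / snr
    c_doa = k2 / (snr * math.sin(doa) ** 2)
    return c_toa, c_doa
```

Doubling the SNR halves both bounds, which is why moving a sensor closer to a target (raising its received SNR) tightens the achievable measurement accuracy.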
Performance results from the simulation of the sensor mobility control scheme for MTT Case 1 are shown in Figures 1-3. Specifically, Figure 1 plots the true and estimated trajectories of the four targets under consideration for a particular ensemble run. The target trajectory estimates, obtained via the KF in (4a), (4b), (4c), and (4d) augmented with the iterative sensor mobility control scheme for Case 1, match the true ones closely. Figure 2 shows the corresponding movement of the mobile sensors during the whole tracking process. Figure 3 illustrates the index of the target tracked by each sensor as a function of time. As we can observe from Figures 1-3, because the sensor initial positions in this ensemble run are close to those of the targets, each sensor follows the motion of the target initially closest to it most of the time. This is somewhat expected, as the sensor mobility control scheme aims at optimizing the overall MTT performance, and this is achieved in this case by having each target tracked by the sensor nearest to it, which generally results in improved target position estimates. Figures 4-6 give the simulation results for the sensor mobility control scheme developed for MTT Case 3, where Figure 4 shows the true and estimated target trajectories in a certain ensemble run, while Figures 5 and 6 illustrate the sensor movements and the targets allocated to each sensor for tracking. Unlike in the previous simulation, sensors 1 and 4 are not close to any targets at the beginning of the tracking process. Besides, the mobile sensors are allowed to track at most two targets simultaneously in this case. These two factors greatly complicate the dynamics of the target allocation, as indicated in Figure 6.
Examining Figures 4-6 together carefully reveals that sensor 3 follows the motion of target 1, while sensor 2 tends to move closer to targets 3 and 4 and keeps tracking them, since they are deployed near one another in the initial stage. This observation is similar to the one obtained from Figures 1-3. In contrast to sensors 2 and 3, the trajectory of sensor 4 does not appear to follow the motion of any particular target, except that, at the end of the tracking process, it starts moving towards target 3. Despite the diverse movements of the mobile sensors, all four sensors track the maximum allowable number of targets most of the time during the whole tracking process. This indicates that the proposed sensor mobility control scheme attempts to improve the target tracking accuracy by increasing the number of measurements of the position of each target, as expected.
To quantify the tracking performance, Monte Carlo simulations of 500 ensemble runs are performed, and the cumulative distribution function (CDF) of the localization error for each target is plotted for investigation. This metric illustrates how likely a location estimate is to have an error less than a prespecified value. We compare the localization error CDFs of two techniques, namely, the mobile WSN-based MTT and the static WSN-based MTT. The simulation setup is the same as that used to generate Figures 1-3, except that we consider here 4-10 targets randomly moving in the area of interest and the measurement noise is generated independently for each ensemble run. The overall CDF curves are shown in Figure 7. The minimum mean square error (MMSE) of locating all targets is shown in Figure 8. It can be observed that the use of mobile sensor networks offers significant tracking accuracy improvement over the case where static sensor networks are deployed.
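The empirical CDF metric used above is straightforward to compute from the Monte Carlo error samples; a minimal sketch (names are ours):

```python
import numpy as np

def localization_error_cdf(errors, grid):
    """Empirical CDF of localization errors over Monte Carlo runs:
    for each threshold in grid, the fraction of runs with error <= it."""
    errors = np.sort(np.asarray(errors))
    return [float(np.searchsorted(errors, g, side="right")) / len(errors)
            for g in grid]
```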

Conclusions
In this paper, the problem of sensor mobility control for a multisensor multitarget tracking system was investigated. We designed a new sensor mobility control scheme that jointly optimizes the sensor movement and allocation to maximize the tracking performance for all targets under practical system constraints. A generic optimization framework for sensor mobility control was developed and solved using projected subgradient searching. The newly proposed sensor mobility control algorithm can be implemented in a distributed fashion, with complexity linear in the number of sensor nodes and targets.
In future work, we plan to improve the proposed sensor mobility control scheme by taking into account more practical aspects, including but not limited to the potential loss of network connectivity and sensing coverage due to sensor movements. Energy efficiency, in particular reducing the energy consumed by sensor movements, is also an interesting direction.