Game-Theoretic Based Distributed Scheduling Algorithms for Minimum Coverage Breach in Directional Sensor Networks

A directional sensor network, in which many sensors are densely and randomly deployed, is able to enhance coverage performance, since working directions can be partitioned into K covers that are activated in a round-robin fashion. In this paper, we consider the problem of direction set K-Cover for minimum coverage breach in directional sensor networks. First, we formulate the problem as a game called the direction scheduling game (DSG), which we prove to be a potential game. Thus, the existence of pure Nash equilibria is guaranteed, and the optimal coverage is a pure Nash equilibrium, since the potential function of a DSG is consistent with the coverage objective function of the underlying network. Second, we propose synchronous and asynchronous game-theoretic based distributed scheduling algorithms, which we prove to converge to pure Nash equilibria. Third, through theoretical analysis, we present explicit bounds on the coverage performance of the proposed algorithms. Finally, we present experimental results showing that the Nash equilibria can provide a near-optimal and well-balanced solution.


Introduction
In recent years, wireless sensor networks (WSNs) have attracted much attention as a promising platform for many applications, such as environmental monitoring and battlefield surveillance [1]. As a fundamental problem for WSNs, coverage optimization has been explored thoroughly in networks based on an omnidirectional sensing model [2]. Recently, with the introduction of video sensors, ultrasonic sensors, and infrared sensors, coverage control algorithms for directional wireless sensor networks (dWSNs) have become an active subject, and the state of the art is well surveyed in [3].
Power conservation is still a critical issue in dWSNs since the directional sensors in the network are usually battery-powered and nonrechargeable devices. Therefore, designing coverage optimization algorithms with energy efficiency is quite challenging for successful applications of dWSNs. One approach to meeting these challenges is to partition the working directions of sensors into K covers. By activating a different cover in each time slot and cyclically shifting through these covers, the network's lifetime can be extended effectively by a factor of K [4].
Some efforts have recently been devoted to coverage optimization with energy efficiency for directional sensor networks. For example, Ai and Abouzeid [5] proposed a directional sensing model, where a sensor is allowed to work in several directions, and the objective is to find a minimal set of directions that can cover all targets. Cai et al. [6] defined the multiple directional cover set problem of organizing the directions of sensors into a group of nondisjoint cover sets in order to maximize the network lifetime of a directional sensor network. The network lifetime is defined as the time duration during which each target is covered by the working direction of at least one active sensor. Following the work in [6], Wen et al. [7] gave a method for prolonging the lifetime of networks based on the combination of an equitable direction optimization algorithm and a neighbor sensing scheduling protocol. Generally, the aforementioned work mainly aims to maximize the network lifetime of directional sensor networks while covering all targets, which is quite strict for general coverage problems. Sometimes, due to energy constraints, coverage breach [8] (i.e., targets that are not covered) may occur if the available working directions in a cover are not enough to cover all the targets. Instead of finding the maximum number of directional covers for complete target coverage, the problem of direction set K-Cover for minimum coverage breach (dKC-MCB) is to schedule the working directions into K covers (K is predefined) so as to minimize the coverage breach. Although considerable research work has been devoted to the problem of set K-Cover for minimum coverage breach in the omnidirectional sensing model [4,9,10], to our knowledge, few research efforts have been devoted to the problem of dKC-MCB. Recently, Yang et al. [11] dealt with the problem of minimum coverage breach under lifetime constraints in directional sensor networks by formulating the problem as an integer program and solving it with centralized greedy algorithms.
Although existing research has achieved some success on coverage optimization with energy efficiency in directional sensor networks, some challenges remain unanswered, especially for the problem of dKC-MCB. As mentioned in [11], since directional sensors are energy constrained, distributed coverage optimization algorithms need to be exploited, in which a sensor takes coverage optimization decisions independently, based purely on communications with its neighbors.
Game theory [12] is a mathematical theory for modeling and analyzing the strategic interactions among intelligent, rational decision makers. Recently, game theory has begun to emerge as a powerful tool for the design of optimization algorithms that can be distributed across many decision makers [13]. The core advantage of game theory for distributed optimization is that it provides a hierarchical decomposition between the distribution of the optimization problem (game design) and the specific local decision rules (distributed algorithms). In particular, if the game is designed as a potential game [14], then there is a possibility that local decision dynamics can achieve convergence to a pure Nash equilibrium which coincides with a desirable outcome of the original optimization problem.
Inspired by the previous discussion, in this paper, the problem of dKC-MCB is formulated as a game, and two game-theoretic based distributed algorithms are proposed to solve it. Specifically, the principal contributions of this paper are as follows.
(1) We first formulate dKC-MCB as a game: the direction scheduling game (DSG). Sensors, as players of the game, interact with each other, and each sensor makes decisions independently to maximize its individual coverage utility. The utility of a sensor is defined as the sum of the marginal coverage contributions of its working directions to the network coverage.
(2) We then prove that a DSG is a potential game whose potential function is the same as the optimization objective function of dKC-MCB. This enables the design of a coverage optimization scheme in which the equilibria of a DSG are consistent with the optimal coverage of the underlying network objective. Moreover, since the natural utility-update dynamics of the sensors converge to a Nash equilibrium, this consistency allows us to establish near-optimal performance for sensor dynamics applied to the original network coverage optimization.
(3) We propose synchronous and asynchronous distributed scheduling algorithms, which are proved to converge to pure Nash equilibria. Further, we analyze the coverage performance of the distributed algorithms from a theoretical perspective and present explicit bounds on the coverage performance of both algorithms.

Preliminaries
Game theory [12] is a mathematical tool that analyzes the strategic interactions among rational decision makers. The three major components of a strategic-form game model G = ⟨N, (A_i)_{i∈N}, (U_i)_{i∈N}⟩ are as follows.
(1) N is a finite set of players.
(2) A = A_1 × A_2 × ⋯ × A_N, where A_i is a finite set of actions (or pure strategies) available to player i. For any set X, let Π(X) be the set of all probability distributions over X. Then the set of mixed strategies for player i is S_i = Π(A_i). A mixed-strategy profile s ∈ S_1 × ⋯ × S_N is a vector of the players' individual mixed strategies. A vector a = (a_1, …, a_N) ∈ A is called a pure-strategy profile, often denoted by a = (a_i, a_{−i}), where a_i is the strategy of player i and a_{−i} is the strategy vector of the other N − 1 players.
(3) U_i : A → ℝ is a real-valued utility function of player i. The utility function U_i(a) measures the outcome for player i at a profile a. In a game, each player chooses proper actions against the other players to maximize its individual utility.
A Nash equilibrium (NE) is a stable strategy profile from which no player has any incentive to deviate unilaterally. Thus, a Nash equilibrium is, in some sense, a reasonable outcome of a game. In the following, we present the definition of a Nash equilibrium based on the definition of a player's best response.

Definition 1 (see [12]). A best response of a player i to a mixed-strategy profile s_{−i} is a mixed strategy s_i* ∈ S_i such that U_i(s_i*, s_{−i}) ≥ U_i(s_i, s_{−i}) for all strategies s_i ∈ S_i.

Definition 2 (see [12]). A best response of a player i to a pure-strategy profile a_{−i} is a pure strategy a_i* ∈ A_i such that U_i(a_i*, a_{−i}) ≥ U_i(a_i, a_{−i}) for all strategies a_i ∈ A_i.

With respect to the existence of Nash equilibria, Nash [12] proved that every game with a finite number of players and finite strategy sets has at least one mixed-strategy Nash equilibrium. However, in general, a mixed Nash equilibrium only implies a stable probability distribution over profiles, not the fixed play of a particular joint action profile. This type of uncertainty is unacceptable in many applications, such as our direction scheduling scenario. Instead, in this paper, we focus on games with pure Nash equilibria. However, pure Nash equilibria are not available in every game. Recently, potential games, which were introduced by Monderer and Shapley [14], have received increasing attention since they possess desirable properties for engineering applications [15,16]: they admit a pure-strategy NE, and best-response dynamics converge to a pure-strategy NE.
Definition 5 (see [14]). A game G = ⟨N, (A_i)_{i∈N}, (U_i)_{i∈N}⟩ is a potential game if there exists a potential function Φ : A → ℝ such that, for all i ∈ N, all a_{−i} ∈ A_{−i}, and all a_i′, a_i″ ∈ A_i,

U_i(a_i′, a_{−i}) − U_i(a_i″, a_{−i}) = Φ(a_i′, a_{−i}) − Φ(a_i″, a_{−i}). (1)

In a potential game, the change in a player's payoff that results from a unilateral change in strategy equals the change in the potential function. When formula (1) is satisfied, the game is called a potential game with the potential function Φ(⋅). In a potential game, the set of pure Nash equilibria can be found by locating the local optima of the potential function, since the incentives of all players are mapped into one function. Thus, we have the following theorem.
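As an illustration of Definition 5, identity (1) can be checked mechanically on a toy coverage game: each of two sensors picks a time slot, its utility is its marginal coverage contribution, and the candidate potential is the total coverage. The coverage sets below are hypothetical, not taken from the paper.

```python
from itertools import product

# Toy instance: coverage[i] = targets sensor i covers when active (hypothetical).
coverage = {0: {"t1", "t2"}, 1: {"t2", "t3"}}
slots = [0, 1]

def covered(profile, k):
    """Targets covered in slot k (profile[i] = slot chosen by sensor i)."""
    return set().union(set(), *(coverage[i] for i, s in profile.items() if s == k))

def phi(profile):
    """Candidate potential: total targets covered, summed over slots."""
    return sum(len(covered(profile, k)) for k in slots)

def utility(i, profile):
    """Marginal contribution of sensor i in its chosen slot."""
    k = profile[i]
    rest = {j: s for j, s in profile.items() if j != i}
    return len(covered(profile, k)) - len(covered(rest, k))

# Every unilateral deviation changes U_i and phi by exactly the same amount.
for a in product(slots, repeat=2):
    prof = dict(enumerate(a))
    for i in coverage:
        for k in slots:
            dev = {**prof, i: k}
            assert utility(i, dev) - utility(i, prof) == phi(dev) - phi(prof)
print("identity (1) holds for all unilateral deviations")
```

This marginal-contribution design is exactly why the game in Section "Direction Scheduling Game" turns out to be potential.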
Theorem 6 (see [14]). Every potential game with a finite strategy space has at least one pure-strategy Nash equilibrium.
In addition to the existence of Nash equilibria, the quality of an NE also needs to be considered. The concept of the price of anarchy (PoA) [17] was introduced to measure the quality of Nash equilibria. Formally, let f : A → ℝ be the social objective function, and let a_OPT be a social optimum; that is, a_OPT = arg max_{a∈A} f(a).

Definition 7 (see [17]). The price of anarchy of a game G is defined as PoA(G) = max_{a ∈ NE(G)} (f(a_OPT)/f(a)), where NE(G) is the set of Nash equilibria of G. Intuitively, the PoA of a game is the ratio of the value of the social optimum to the objective value of the worst possible Nash equilibrium.

Problem Statement
In this section, we formally define the problem of direction set K-Cover for minimum coverage breach in DSNs. We consider a directional sensor network of N directional sensors and M targets. Let S = {s_1, …, s_N} and T = {t_1, …, t_M} denote the sensor set and the target set, respectively. Let D be the set of directions of all sensors. Each sensor s_i has a set of directions D_i = {d_{i,j} | j = 1, 2, …, W}. Without loss of generality, L_i is the initial lifetime of each sensor s_i, which is the time duration for which the sensor can stay in the active state. τ_k is the k-th time slot of the sensor network, and TL = ∑_{k=1}^{K} τ_k is the total lifetime of the sensor network. Similar to the problem of set K-Cover for minimum coverage breach defined in [2], the problem of dKC-MCB is formally identified as follows.
Definition 8. A direction schedule of a directional sensor network is a set of ordered pairs (D_k, τ_k), k = 1, 2, …, K, in which D_k ⊂ D is the set of working directions in time slot τ_k. In a time slot τ_k, any sensor s_i has at most one working direction; that is, for all i, k, |D_i ∩ D_k| ≤ 1. Since the lifetime of each sensor is limited, for any sensor s_i, one always has ∑_{k : D_i ∩ D_k ≠ ∅} τ_k ≤ L_i.

Definition 9. Assume that (D_k, τ_k), k = 1, 2, …, K, is a direction schedule. The total network lifetime is given by TL = ∑_{k=1}^{K} τ_k. Let C(D_k) ⊆ T be the set of targets covered by the set of active directions D_k in time slot τ_k. The total coverage breach is defined as

B = ∑_{k=1}^{K} (M − |C(D_k)|). (2)

Based on the definitions of direction schedule and coverage breach, we give the definition of dKC-MCB.

Definition 10. The problem of direction set K-Cover for minimum coverage breach (dKC-MCB) is to find a direction schedule (D_k, τ_k), k = 1, 2, …, K, that minimizes the total coverage breach B.
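The breach of Definition 9 is straightforward to compute once the covered set of each slot is known. A minimal sketch, with a hypothetical target set and schedule:

```python
# Computing the total coverage breach of Definition 9 from per-slot covered
# sets. The target set and schedule below are hypothetical.
targets = {"t1", "t2", "t3", "t4"}      # M = 4 targets
schedule_covers = [{"t1", "t2"},        # C(D_1): t3, t4 breached
                   {"t2", "t3", "t4"},  # C(D_2): t1 breached
                   set()]               # C(D_3): all 4 targets breached

def coverage_breach(covers, targets):
    """B = sum over the K slots of the number of uncovered targets."""
    return sum(len(targets - c) for c in covers)

print(coverage_breach(schedule_covers, targets))  # -> 7
```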
Based on the results in [11], we know that the problem of dKC-MCB is NP-complete.

Direction Scheduling Game
N is the set of players, in which a player s_i ∈ S is a directional sensor. A_i is the set of direction scheduling strategies of s_i. A strategy a_i ∈ A_i is an allocation of the working directions of s_i among the K time slots and can be described as a set of ordered pairs a_i = {⟨d_{i,j}, τ_k⟩ | d_{i,j} ∈ D_i, k = 1, 2, …, K}. Due to the limitation of energy, a feasible strategy should satisfy the following properties.

(1) ∑_{⟨d_{i,j}, τ_k⟩ ∈ a_i} τ_k ≤ L_i; that is, the whole working lifetime of the sensor s_i is at most L_i.

(2) In a time slot τ_k, there is at most one working direction of s_i.

The direction scheduling strategies of all sensors compose a direction scheduling profile, which is denoted by a = (a_1, …, a_i, …, a_N). At the profile a, the coverage utility of a sensor is defined as follows.
Definition 11. At a profile a and a time slot τ_k, the working direction of sensor s_i is denoted by a_{i,k} ∈ D_i. If there is no working direction of s_i in time slot τ_k, then a_{i,k} = ∅. Given a profile a, the coverage utility of a sensor s_i is denoted by U_i(a) and defined as

U_i(a) = ∑_{k=1}^{K} (|C(D_k^a)| − |C(D_k^a \ {a_{i,k}})|), (3)

where D_k^a is the set of working directions in time slot τ_k determined by the profile a. Intuitively, the utility function is defined as the sum of the marginal coverage contributions of the sensor's working directions to the network coverage. In a direction scheduling game, a sensor tries to activate its directions in the time slots where it obtains the largest marginal coverage contribution. Obviously, sensors interact with each other by maximizing their individual utilities. Thus, the game is actually a dynamic interaction process. Will the interaction dynamics finally terminate and converge to a pure Nash equilibrium? In order to answer this question, we discuss the mathematical properties of direction scheduling games.

In a direction scheduling game, the set of working directions in a time slot is determined by the profile a. Thus, at a profile a, the set of working directions in time slot τ_k is denoted by D_k^a, and the objective function of the problem of dKC-MCB is denoted by f(a) = ∑_{k=1}^{K} |C(D_k^a)|. In what follows, we prove that the direction scheduling game is a class of potential games whose potential function is f(a), that is, the objective function of dKC-MCB.
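Note that each slot's breach is M − |C(D_k^a)|, so the objective f(a) and the breach B(a) are complementary: f(a) + B(a) = K · M, and maximizing f(a) minimizes the breach. This identity can be checked numerically (the numbers below are illustrative):

```python
# Complementarity of the coverage objective f(a) and the breach B(a):
# f(a) + B(a) = K * M. Illustrative per-slot covered sets, M = 4 targets.
M, covers = 4, [{"t1", "t2"}, {"t2", "t3", "t4"}, set()]  # K = 3 slots
f = sum(len(c) for c in covers)       # f(a) = 2 + 3 + 0 = 5
B = sum(M - len(c) for c in covers)   # B(a) = 2 + 1 + 4 = 7
assert f + B == len(covers) * M       # K * M = 12
print(f, B)  # -> 5 7
```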

Theorem 12.
A direction scheduling game is a potential game with the potential function Φ(a) = f(a).

Theorem 13.
A pure Nash equilibrium of a direction scheduling game is a local optimal solution of the objective function f(a) = ∑_{k=1}^{K} |C(D_k^a)|.

Theorem 14. An optimal solution of the problem of dKC-MCB is a pure Nash equilibrium of a direction scheduling game.
We give the proofs of Theorems 12, 13, and 14 in Appendices A.1, A.2, and A.3, respectively. Some interesting mathematical properties of DSGs for solving the problem of dKC-MCB are established by Theorems 12, 13, and 14. Specifically, a DSG is proved to be a class of potential games by Theorem 12. From the previous result of Theorem 6, a DSG admits at least one pure Nash equilibrium. The connection between the solutions of dKC-MCB and the pure equilibria of DSGs is described by Theorems 13 and 14. In particular, the consistency between the potential function and the objective function of dKC-MCB allows us to establish near-optimal performance for local decision dynamics applied to the original network coverage optimization.

Distributed Scheduling Algorithms for Minimum Coverage Breach
In this section, based on the aforementioned properties of DSGs, we propose both synchronous and asynchronous distributed scheduling algorithms for the problem of dKC-MCB. From the perspective of game theory, both algorithms are kinds of best-response dynamics of DSGs. Specifically, sensors are assumed to be randomly deployed in a target area. A sensor is supposed to know its own location and to be aware of the locations of its neighbors through local communications. A sensor has a sensing range r_s and a communication range r_c. In this paper, we assume that the communication range of a sensor node is at least twice its sensing range; that is, r_c ≥ 2r_s. Denote by N_i the set of neighbors of a sensor s_i; in particular, for a sensor s_j ∈ N_i, the distance between s_i and s_j is less than 2r_s. Actually, the utility of a sensor s_i depends only on the strategies of the sensors within N_i. In other words, s_i obtains the sum of its marginal coverage contributions only through local communications with the sensors within N_i. Thus, both the synchronous and asynchronous algorithms are based on local information.
In the synchronous distributed algorithm, at each time step, sensors are assumed to be able to synchronize their actions with one another according to a system clock. We also propose an asynchronous distributed algorithm for the case where maintaining tight clock synchronization is difficult. In the asynchronous distributed algorithm, each sensor maintains its individual clock, and at each time step only one sensor has an opportunity to update its direction scheduling strategy.

Synchronous Distributed Scheduling Algorithm.
At each time step of the synchronous distributed scheduling algorithm (SDA), all the sensors are assumed to be able to synchronize their actions with one another according to a system clock. The algorithm terminates based on a mark END = ¬(m_1 ∨ ⋯ ∨ m_N).

Definition 15.
Let m_i be the mark of strategy update for a sensor s_i. If a sensor s_i is able to increase its utility by updating its strategy, m_i is set to true; otherwise, m_i is set to false. Thus, the termination mark of the algorithm is defined as the Boolean expression END = ¬(m_1 ∨ ⋯ ∨ m_N). Specifically, if all the sensors report an update mark of false to the system, the algorithm terminates. The synchronous distributed algorithm is shown in Algorithm 1.

Theorem 16.
A synchronous distributed scheduling algorithm converges to a pure Nash equilibrium.
Proof. First, we prove that, at a time step of Algorithm 1, if more than one sensor updates its strategy, then these sensors must be independent of one another in utility.
Input: An initial strategy a_i of s_i; a system time clock t = 0; the mark of strategy update m_i ← true.
Output: A Nash equilibrium strategy of s_i: a_i*.

(1) WHILE END = false DO
(2)   Communicate with each s_j ∈ N_i to obtain its strategy a_j;
(3)   Based on the definition of the utility function, compute
(4)     Δ_i(a(t)) = max_{a_i′} (U_i(a_i′, a(t)_{−i}) − U_i(a_i, a(t)_{−i})) and BR_i(a(t)) = arg max_{a_i′} (U_i(a_i′, a(t)_{−i}) − U_i(a_i, a(t)_{−i}));
(5)   IF Δ_i(a(t)) > 0 THEN
(6)     Broadcast Δ_i(a(t)) to each s_j ∈ N_i;
(7)     Receive Δ_j(a(t)) from each s_j ∈ N_i;
(8)     IF Δ_i(a(t)) > max{Δ_j(a(t)) | s_j ∈ N_i} THEN
(9)       a_i(t) ← BR_i(a(t)); send m_i ← true to the system;
(10)    END IF
(11)  ELSE
(12)    Broadcast Δ_i(a(t)) = 0 to each s_j ∈ N_i; send m_i ← false to the system;
(13)  END IF
(14) END WHILE

Algorithm 1: Synchronous distributed scheduling algorithm (for sensor s_i).

Let a(t) be the profile in time step t. Denote the maximal increment of utility of sensor s_i at a(t) by Δ_i(a(t)); Δ_i(a(t)) = max_{a_i′} (U_i(a_i′, a(t)_{−i}) − U_i(a_i, a(t)_{−i})) ≥ 0. When Algorithm 1 has not terminated, END is false. From the definition of END, there is at least one sensor updating its strategy. Assume that two or more sensors update strategies in the same time step. From lines 6 to 10 of Algorithm 1, two utility-dependent sensors cannot update strategies in the same time step; actually, only the sensor with the highest increment in utility among its neighbors has an opportunity to update its strategy. In other words, when the SDA algorithm proceeds, if more than one sensor updates strategies simultaneously, then these sensors must be independent of one another in coverage utility.
Second, we prove that when Algorithm 1 proceeds, the network coverage monotonically increases.
Since the direction scheduling game is a potential game, as described in (1), the change in a sensor's utility that results from a unilateral change at a profile equals the change in the global potential function. Moreover, by Theorem 12, the potential function of the direction scheduling game is the same as the optimization objective function; that is, Φ(a) = f(a). Therefore, the increment in a sensor's utility that results from a unilateral change at a profile equals the increment in the network coverage.
If Algorithm 1 has not terminated, then ∃ s_i ∈ S with Δ_i(a(t)) > 0. Based on (1) and Φ(a) = f(a), Δ_i(a(t)) > 0 results in an increase of Φ(a) = f(a) = ∑_{k=1}^{K} |C(D_k^a)|. Thus, the network coverage monotonically increases. When more than one sensor has an opportunity to increase its utility by updating its strategy, by the first part of this proof, these sensors are independent of one another in coverage utility. Thus, the increment of f(a) equals the sum of the increments in utility of the sensors updating their strategies, and in this case the network coverage also monotonically increases.
Since f(a) is finite (bounded above by K · M), Algorithm 1 finally converges. At that point, no sensor is able to update its strategy to increase its utility, and a pure Nash equilibrium is achieved.
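The update rule of Algorithm 1 (a sensor moves only when its best improvement strictly beats all of its neighbors') can be sketched on a toy instance. The coverage sets, neighbor graph, and one-direction-per-sensor simplification below are illustrative assumptions, not the paper's model in full:

```python
# A minimal sketch of the synchronous dynamics (Algorithm 1): each sensor
# picks one time slot for its single direction (hypothetical instance).
coverage = {0: {"t1", "t2"}, 1: {"t2", "t3"}, 2: {"t3", "t4"}}
slots = [0, 1]
neighbors = {0: {1}, 1: {0, 2}, 2: {1}}  # assumed communication graph

def covered(profile, k):
    """Targets covered in slot k (profile[i] = slot chosen by sensor i)."""
    return set().union(set(), *(coverage[i] for i, s in profile.items() if s == k))

def utility(i, profile):
    """Marginal coverage contribution of sensor i in its chosen slot."""
    k = profile[i]
    rest = {j: s for j, s in profile.items() if j != i}
    return len(covered(profile, k)) - len(covered(rest, k))

def best_improvement(i, profile):
    """Largest unilateral utility gain for sensor i and the slot achieving it."""
    gains = []
    for k in slots:
        dev = {**profile, i: k}
        gains.append((utility(i, dev) - utility(i, profile), k))
    return max(gains)

profile = {0: 0, 1: 0, 2: 0}  # every sensor starts in slot 0
while True:
    delta = {i: best_improvement(i, profile) for i in coverage}
    # Only a sensor strictly beating all neighbors' gains may update,
    # so simultaneous movers are utility-independent (as in the proof).
    movers = [i for i in coverage
              if delta[i][0] > 0
              and all(delta[i][0] > delta[j][0] for j in neighbors[i])]
    if not movers:
        break  # no sensor can improve: a pure Nash equilibrium
    for i in movers:
        profile[i] = delta[i][1]
print(profile)  # a pure Nash equilibrium of the toy instance
```

On this instance, sensor 1 moves to slot 1 in the first round (its gain of 2 beats both neighbors), after which no sensor can improve.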

Asynchronous Distributed Scheduling Algorithm.
In the asynchronous distributed scheduling algorithm (ADA), we use the asynchronous time model [18], which is well matched to the distributed nature of sensor networks. In particular, each sensor s_i has an independent clock whose "ticks" are distributed as a rate-1 Poisson process. A mark m_i of updating strategy is set for each sensor s_i to permit updating its strategy: m_i is set to true when the clock of s_i ticks and to false at all other times. The asynchronous distributed algorithm is shown in Algorithm 2.

Theorem 17. An asynchronous distributed scheduling algorithm converges to a pure Nash equilibrium.
Proof. Assume that N sensors are deployed in a target area. Each sensor s_i has an independent clock whose "ticks" are distributed as a rate-1 Poisson process, and s_i has an opportunity to update its strategy when the mark m_i is true. Since tick times are exponentially distributed, independent among sensors, and independent across time, the tick time model can be equivalently formulated in terms of a single global clock ticking according to a rate-N Poisson process. Letting {Z_t}_{t≥0} denote the arrival times of this global clock, the individual clocks can be generated from the global clock by randomly assigning each arrival Z_t to a sensor according to a uniform distribution. Based on the properties of the Poisson process, at each arrival time of {Z_t}_{t≥0}, only one sensor has an opportunity to update its strategy.
Let a(t) be the profile in time step t. Denote by Δ_i(a(t)) the maximal increment of utility of sensor s_i at a(t); that is, Δ_i(a(t)) = max_{a_i′} (U_i(a_i′, a(t)_{−i}) − U_i(a_i, a(t)_{−i})) ≥ 0. While Algorithm 2 proceeds, END is false, so there is a sensor s_i updating its strategy with Δ_i(a(t)) > 0. Since the direction scheduling game is a potential game, based on (1) and Φ(a) = f(a), Δ_i(a(t)) > 0 results in an increase of f(a). Thus, the network coverage monotonically increases with the execution of Algorithm 2. Since the coverage objective function f(a) is finite, Algorithm 2 finally converges. At that point, no sensor is able to update its strategy to increase its utility, and a pure Nash equilibrium is achieved.
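The global-clock view of the proof can be sketched directly: each tick activates one uniformly chosen sensor, which plays a best response. The instance (one direction per sensor, two slots) is an illustrative assumption, not the paper's full model:

```python
import random

# A sketch of the asynchronous dynamics (Algorithm 2) in its global-clock
# view: one uniformly chosen sensor best-responds per tick (toy instance).
random.seed(1)
coverage = {0: {"t1", "t2"}, 1: {"t2", "t3"}, 2: {"t3", "t4"}}
slots = [0, 1]

def covered(profile, k):
    return set().union(set(), *(coverage[i] for i, s in profile.items() if s == k))

def phi(profile):
    """Potential = coverage objective: total covered targets over slots."""
    return sum(len(covered(profile, k)) for k in slots)

def best_response(i, profile):
    """Slot maximizing sensor i's marginal contribution (ties -> lower slot)."""
    def u(k):
        dev = {**profile, i: k}
        rest = {j: s for j, s in dev.items() if j != i}
        return len(covered(dev, k)) - len(covered(rest, k))
    return max(slots, key=u)

profile = {0: 0, 1: 0, 2: 0}
# Tick until every sensor is already best-responding, i.e., a pure NE.
while any(best_response(i, profile) != profile[i] for i in coverage):
    i = random.choice(sorted(coverage))  # the sensor whose clock ticks
    profile[i] = best_response(i, profile)
print(phi(profile))  # -> 6, the equilibrium coverage of this instance
```

Because the potential never decreases along best responses, the loop terminates at a pure Nash equilibrium; on this instance both equilibria achieve a coverage of 6.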

The Coverage Performance Analysis of Distributed Algorithms
A direction scheduling game is a potential game with the potential function Φ(a) = f(a). Hence, the network coverage strictly increases during the execution of both the synchronous and asynchronous distributed algorithms. Furthermore, both algorithms finally converge to a stable profile, that is, a pure Nash equilibrium of the direction scheduling game. Moreover, from the result of Theorem 14, the optimal solution of dKC-MCB is actually a pure Nash equilibrium of a DSG. However, the pure Nash equilibria of a DSG are not unique, and neither the synchronous nor the asynchronous distributed algorithm is guaranteed to converge to the optimal coverage. Thus, we should obtain explicit bounds on the coverage performance of both algorithms. In terms of algorithmic game theory, we use the price of anarchy (PoA) [17] of the direction scheduling game to analyze the coverage performance of the proposed distributed algorithms.
Definition 18. The price of anarchy of a direction scheduling game DSG is defined as PoA(DSG) = f(a_OPT) / min_{a*} f(a*), where f(a) = ∑_{k=1}^{K} |C(D_k^a)| is the objective function, a_OPT is the optimal solution of f(⋅), and a* ranges over the pure Nash equilibria of the DSG.
Intuitively, the PoA is the ratio of the optimal coverage to the worst possible Nash equilibrium coverage. Theorem 19 presents an explicit bound on the PoA of a DSG, which strongly depends on the submodularity [19] of the coverage utility function of a DSG.

Theorem 19. The upper bound of the price of anarchy of a DSG is 2.
We give the proof of Theorem 19 in Appendix B. From the result of Theorem 19, we know that at least 1/2 of the optimal coverage is obtained when the synchronous and asynchronous distributed algorithms terminate.
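The bound can be checked by brute force on small instances: enumerate all pure profiles, find the optimum and the worst pure Nash equilibrium, and compare. The toy instance below (one direction per sensor, two slots) is illustrative; here the worst equilibrium happens to be optimal, so the ratio is 1, within the bound of 2.

```python
from itertools import product

# Brute-force check of the PoA bound of Theorem 19 on a hypothetical DSG.
coverage = {0: {"t1", "t2"}, 1: {"t2", "t3"}, 2: {"t3", "t4"}}
slots = [0, 1]

def covered(profile, k):
    return set().union(set(), *(coverage[i] for i, s in profile.items() if s == k))

def phi(profile):
    return sum(len(covered(profile, k)) for k in slots)

def utility(i, profile):
    k = profile[i]
    rest = {j: s for j, s in profile.items() if j != i}
    return len(covered(profile, k)) - len(covered(rest, k))

def is_ne(profile):
    """No sensor has a strictly improving unilateral deviation."""
    return all(utility(i, profile) >= utility(i, {**profile, i: k})
               for i in coverage for k in slots)

profiles = [dict(enumerate(a)) for a in product(slots, repeat=len(coverage))]
opt = max(phi(p) for p in profiles)
worst_ne = min(phi(p) for p in profiles if is_ne(p))
assert opt <= 2 * worst_ne  # every NE keeps at least half the optimal coverage
print(opt, worst_ne)  # -> 6 6
```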

Simulation Results
In this section, we evaluate the coverage performance and the convergence of the SDA and ADA algorithms through simulations. There are two measures of the coverage performance of the algorithms. The first is the average coverage rate (ACR), the average over time slots of the coverage rate (CR), which is the ratio between the number of targets covered by the sensor network in a time slot and the number of targets located in the target area. The second is the coverage stability (CS), measured by the variance of the CRs of all time slots. The convergence is evaluated by the speed with which the SDA and ADA algorithms converge to a pure Nash equilibrium.

Experimental Demonstration of the Coverage Optimization.
As an intuitive demonstration of the distributed algorithms, Figure 1 shows snapshots of the coverage results of a random deployment and of the SDA algorithm. In order to make the results accessible to readers, small-scale simulation parameters are used in the demonstration, which shows the coverage results in 3 different time slots after the SDA algorithm converges to a pure Nash equilibrium. As we can see from the results shown in Figure 1, the coverage results of the SDA algorithm obviously outperform those of the random deployment.

Average Coverage Rate of the Distributed Algorithms.
In order to evaluate the effectiveness of the algorithms, we compare the coverage performance of the SDA and ADA algorithms with the RANDOM algorithm and the MCBLC-G algorithm. In the RANDOM algorithm, each sensor randomly allocates working directions to time slots under the constraint of its lifetime. MCBLC-G is a centralized greedy algorithm for the problem of minimum coverage breach developed by Yang et al. in [11]. The experiments use the following settings. The target area is a 10 × 10 area where M = 50 targets are randomly located. We evaluate the coverage performance of the SDA, ADA, RANDOM, and MCBLC-G algorithms by randomly deploying N = 100, 125, 150, 175, 200, 225, 250, 275, 300 sensors with sensing range r_s = 2 in the target area, respectively. The total lifetime TL is divided equally into 6 time slots; that is, K = 6. Each sensor chooses at most 3 time slots in which to allocate its working directions; that is, L_i ≤ 3. In order to guarantee the reliability of the simulations, for each value of N, we repeat the simulations 50 times; the ACR is finally computed as ACR = (1/50) ∑_{r=1}^{50} (∑_{k=1}^{K} |C(D_k^{a_r})|)/(K · M). Figure 2 shows the coverage performance of the SDA, ADA, RANDOM, and MCBLC-G algorithms. As we can see from Figure 2, the SDA, ADA, and MCBLC-G algorithms provide similar coverage performance, which is obviously better than the results of the RANDOM algorithm. However, MCBLC-G is a centralized algorithm, whereas SDA and ADA are distributed algorithms.
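The ACR computation described above can be sketched directly: the CR of a slot is covered targets over total targets, averaged over the K slots of a run and then over the repeated random deployments. The per-slot counts below are hypothetical, and only two runs are shown where the paper uses 50:

```python
# Sketch of the ACR computation (hypothetical per-slot covered-target counts).
M, K = 50, 6                       # targets per run, time slots
runs = [[40, 42, 38, 45, 41, 39],  # |C(D_k)| for each slot, run 1
        [44, 40, 43, 42, 45, 41]]  # run 2

def acr(runs, M, K):
    """Average coverage rate: mean over runs of sum_k |C(D_k)| / (K * M)."""
    return sum(sum(c) / (K * M) for c in runs) / len(runs)

print(round(acr(runs, M, K), 3))  # -> 0.833
```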
Since the potential function of the direction scheduling game coincides with the coverage objective function of the problem of dKC-MCB, the coverage of the sensor network increases along with each sensor's utility as the SDA and ADA algorithms proceed, as the results in Figures 3 and 4 show.

Coverage Stability of the Distributed Algorithms.
The coverage stability is another important performance measure of coverage optimization algorithms. A coverage optimization algorithm with good coverage stability can guarantee well-balanced coverage performance for every time slot. Figure 5 shows the coverage rate variance of the SDA, ADA, RANDOM, and MCBLC-G algorithms. As we can see from the results in Figure 5, the RANDOM algorithm randomly allocates working directions to time slots, so the coverage rates among time slots cannot be guaranteed to be well balanced. This results in a high coverage rate variance for the RANDOM algorithm. The MCBLC-G algorithm has a lower coverage rate variance than RANDOM; however, the coverage rate variance of MCBLC-G is influenced by the sequence of greedy selection of sensors. Compared with the RANDOM and MCBLC-G algorithms, SDA and ADA provide well-balanced coverage performance.

Convergence of the Distributed Algorithms.
The SDA (ADA) algorithm terminates when it converges to a pure Nash equilibrium. The speed of convergence to a pure Nash equilibrium depends mainly on two factors: the number N of sensors deployed in the target area and the number K of time slots. As shown in Figure 6, the number of iterations of the SDA algorithm increases with the number of deployed sensors. At the same time, given the number of deployed sensors, the number of iterations of the SDA algorithm increases with the number of time slots K. As K increases, each sensor has more choices when deciding how to allocate its working directions. This leads to more complicated interactions among sensors and prolongs the convergence dynamics. Similar convergence results for the ADA algorithm are shown in Figure 7.

Conclusions
Direction scheduling algorithms with energy efficiency are important for directional sensor networks. Since directional sensors are energy constrained, distributed direction scheduling algorithms need to be exploited, in which a sensor takes direction scheduling decisions independently, based purely on communications with its neighbors. In this paper, the problem of direction set K-Cover for minimum coverage breach (dKC-MCB) is formulated as a game: the direction scheduling game (DSG). Both synchronous and asynchronous game-theoretic based distributed algorithms are proposed for solving dKC-MCB. The coverage performance of the distributed algorithms is analyzed from a theoretical perspective, and explicit bounds on the coverage performance are presented. Experimental results show that our proposed algorithms can provide a near-optimal and well-balanced solution to the problem of dKC-MCB.
Game theory, particularly the theory of potential games, is beginning to emerge as a powerful tool for the design and analysis of distributed optimization algorithms [13]. However, many research challenges still remain unanswered, especially the development of a systematic methodology for designing distributed optimization functions that satisfy virtually any degree of locality while ensuring the desirability of the resulting equilibria. As future research work, we plan to investigate a systematic approach to distributed optimization using the framework of potential games and apply this approach to various real application problems.

Appendices

A.1. Proof of Theorem 12.

We prove that, for a direction scheduling game with the utility function defined by (3), the potential function is Φ(a) = f(a).

(c) s_i schedules its working direction from ∅ at a profile a to d_{i,k} at a profile a′. Thus, D_k^{a′} = D_k^a ∪ {d_{i,k}}.
(d) s_i maintains an equal schedule between a and a′. Thus, D_k^{a′} = D_k^a.
Without loss of generality, the same reasoning applies in each case at a and a′. Therefore, we have proved that a direction scheduling game is a potential game with the potential function Φ(a) = f(a).

A.2. Proof of Theorem 13.
A pure Nash equilibrium of a direction scheduling game is a local optimal solution of the problem of set K-Cover for minimum coverage breach.
Proof. Let a* be a pure Nash equilibrium profile of a DSG. By the definition of a Nash equilibrium, at the profile a*, no sensor can unilaterally deviate from a* by changing its scheduling strategy to increase its individual coverage utility. Since a DSG is a potential game whose potential function Φ(⋅) is consistent with f(⋅), that is, the optimization objective function of dKC-MCB, in a local area around the profile a*, there is no profile a′ such that f(a*) < f(a′). Thereby, the Nash equilibrium profile a* is a local optimum of the optimization objective function of the problem of dKC-MCB.

A.3. Proof of Theorem 14.
An optimal solution to the problem of dKC-MCB is a pure Nash equilibrium.