Sensor placement optimization for critical-grid coverage problem of indoor positioning

It is more practical and efficient to deploy sensors in critical areas rather than common areas to ensure indoor positioning accuracy and reduce deployment cost. This study focused on sensor placement optimization for the critical-grid coverage problem with two objectives: accuracy and cost. After reviewing related works, this article proposes a multi-objective optimization model for the critical-grid coverage problem of indoor positioning that considers the k-coverage problem as well as the topological rationality of the sensor distribution. The NSGA-II algorithm was then used to solve the optimization model of sensor placement. Finally, a simulation experiment and a real environment validation were conducted for the proposed method. The results showed that the optimized schemes achieved lower errors (1.13 and 1.21 m) than the uniform deployment scheme (1.44 m), together with a substantial reduction in sensor deployment cost. In conclusion, the proposed method can reduce the cost of sensor deployment while ensuring the accuracy of indoor positioning for critical areas. It also provides a new direction for improving the accuracy of indoor positioning.


Introduction
The applications of indoor positioning are becoming more and more important; typical examples are resource searching, tourism navigation, meeting guides, looking for the elderly and children, [1][2][3] and so on. These applications benefit from the technology of Wireless Sensor Networks (WSNs), mobile communication, location-aware computing, 4 and location-based services. 5 These services cannot be separated from the support of a local position system (LPS). An LPS consists of a set of radio frequency base stations that are able to communicate with each other and locate users by measuring the received signal strength (RSS), the time difference of arrival, or the time of arrival (TOA). 6 In an LPS, each user communicates with a fixed set of base stations, and its localization process is independent of that of other users, by using the received signal strength indicator (RSSI) in wireless communication technologies such as ZigBee, Wi-Fi, Bluetooth, and so on. LPS is gaining more and more popularity in ubiquitous computing, the intelligent robotics community, and commercial indoor positioning services. 7 However, the different indoor positioning techniques mentioned above each have their advantages and disadvantages, and there is not yet a dominant technique that solves all problems. 8 Thus, a number of researchers have already attempted to improve indoor positioning algorithms to gain higher positioning accuracy. [9][10][11][12] Those studies mostly focus on filtering sensor signals, fusing with other location signals, or improving the location algorithm by compensating ranging errors, while research on optimizing the placement of sensors is rare, especially for improving the accuracy of indoor positioning.
In addition to the accuracy of indoor positioning, there are other significant challenges, such as cost, coverage, the indoor environment (obstacles), energy efficiency or lifetime, and so on. Thus, sensor distribution optimization for indoor positioning is a multi-objective optimization problem. Moreover, in many indoor applications, the area in which the sensors are deployed needs to be adequately distinguished into ''critical areas'' and ''common areas.'' For example, in a railway station or airport, the critical areas are the ''hot spots,'' such as elevator entrances, boarding gates, toilets, and service centers, and the common areas are the waiting areas, excluding the critical areas. Critical areas and common areas have different positioning needs and should be differentiated. 13 It is more practical and efficient to deploy sensors in critical areas rather than common areas to ensure positioning accuracy and reduce deployment cost.
Therefore, this study focused on sensor placement optimization for the critical-grid coverage problem 14 with two objectives, accuracy and cost. The contributions of this article include the following: a review of state-of-the-art sensor placement optimization focusing on indoor positioning; a geometry model of sensor coverage for the critical-grid coverage problem of indoor positioning; an optimization model of sensor placement for the critical-grid coverage problem of indoor positioning and its computing method; and numerous simulations, analyses, and a real environment example for the validation of the proposed method.
The article is outlined as follows. The ''Related works'' section reviews some related works. The section ''Computing model for critical-grid coverage'' proposes the computing model of sensor coverage for the critical-grid coverage problem of indoor positioning. The section ''Optimizing model of sensor placement'' introduces the optimization model of sensor placement for the critical-grid coverage problem of indoor positioning and its computing method. The ''Experiment and validation'' section presents simulation results, analyses of the experiments, and the real environment validation of the proposed method. The ''Conclusion'' section concludes the article and presents future work.

Related works
The emerging WSN technology provides an inexpensive and powerful means to monitor physical environments. [5][6][7] In a typical sensor network application, sensors are to be placed (or deployed) so as to monitor a region or a set of points. Sensor deployment methods or algorithms can be summarized as follows: deterministic sensor deployment and nondeterministic sensor deployment. 15 The difference between the two methods depends on whether the environment permits location selection for sensor deployment. Early research mainly focused on theoretical exploration and mathematical expression of models or algorithms for sensor deployment, and the sensing coverage and connectivity of WSNs were of great concern. 16,17 Considering the coverage of critical areas, research can be summarized into two categories: area coverage and target or point coverage. 14 In the area coverage problem, the sensing range of the deployed sensors should cover a specific given area (the critical area), whereas in target or point coverage, as the name suggests, a set of specific given targets or points should be monitored by sensors. Since the space of sensor placement is usually divided into square grids, 13,14,18 the goal is to construct a full coverage of the critical square grids with the minimum number of sensors on the grids, termed critical-square-grid coverage. Furthermore, most of these coverage problems are defined as NP-hard optimization problems. 13,19 To obtain optimal solutions of sensor placement, many researchers turn to heuristic algorithms, such as the genetic algorithm, 20,21 simulated annealing, 22 the monkey algorithm, 23 the ant colony algorithm, 24 and particle swarm optimization. 25 However, these studies do not emphasize the critical area coverage problems of sensor placement for indoor positioning.
The critical area coverage problems of sensor placement for indoor positioning are significant for decision making in some situations, 26 such as fire rescue, medical monitoring, indoor navigation, and so on. However, there are only a few previous studies on the optimization of sensor placement for indoor positioning in this field. Vlasenko et al. 27 proposed a methodology for optimizing sensor placement for indoor localization on the Smart-Condo platform, with the optimization objective of covering the frequently visited areas of the living space with a number of sensors. Berka et al. 28 proposed a method to optimize the number of sensor nodes near decision points for intermodal navigation of blind pedestrians. In Domingo-Perez's 29 study, the number of sensors, accuracy, and coverage were taken into account for optimizing the sensor deployment in the region of interest. However, research on sensor placement optimization for the critical-grid coverage problem of indoor positioning has not been reported yet.
Concerning multi-objective optimization algorithms, NSGA-II was the most used algorithm in previous studies. Chen et al. 30 proposed a solution based on NSGA-II to make fuller use of the whole set of sensor nodes and further increase the network lifetime. Yang et al. 31 presented the optimization of a space-based reconfigurable sensor network under hard constraints by employing NSGA-II to find multi-criteria solutions in the sense of Pareto optimality. Syarif et al. 32 conducted a study based on NSGA-II for achieving coverage and connectivity, two fundamental issues in WSNs, and introduced the Network Simulator, NS-2, for network simulation to verify the performance of the optimized solution. Le Berre et al. 33 proposed a modified NSGA-II for bi-objective sensor placement with the minimization of the number of deployed sensors and the minimization of the tracking constraint violations, under coverage constraints. In Hasson and Khudhair's 34 study, NSGA-II was used as a tool to obtain the best trade-off between network coverage and connectivity as two competing objectives. Harizan and Kuila 35 proposed a novel NSGA-II with modified dominance to solve the multi-objective coverage and connectivity problem. In other studies, the NSGA-II algorithm was used for comparison with other algorithms for multi-objective optimization of sensor coverage problems. [36][37][38] However, in most previous studies the proposed method is validated only by numerical simulations; there are few real case studies for method validation or evaluation.
Due to the unique computing characteristics of indoor positioning, the sensor coverage differs from that mentioned above. In indoor positioning applications, sensor coverage usually involves the problem of multiple-sensor coverage, which means that each target in the sensing area needs to be covered by at least k (k ≥ 3) different working sensors, 8 known as the k-coverage problem. 39 This can increase the complexity of sensor placement optimization in geometrical representation, topology structure, and computation. The topological rationality of the sensor distribution should therefore be considered in the optimization.
To sum up, from the early works of literature, most published works on sensor placement optimization use NSGA-II to achieve multi-objective optimization (i.e. coverage, connectivity, energy efficiency, number of sensors, etc.). Very few works address the critical-grid coverage problem of indoor positioning in this field. Furthermore, from a practical point of view, a real case study of indoor positioning using the optimized sensor deployment is also needed, because the real environment usually differs from the ideal conditions of numerical simulations; this is very important for the practical application of the algorithm. In this article, the crucial topics of sensor placement optimization are as follows: the multi-objective optimization using NSGA-II for the critical-grid coverage problem; the k-coverage problem as well as the topological rationality of the sensor distribution; and a real environment case study using the optimized sensor deployment for indoor positioning.

Geometry model of sensor coverage
The given space of sensor deployment is decomposed into small contiguous regular grids in previous studies. 13,14,18,40,41 This concept is also adopted in this article. Since the three-dimensional (3-D) potential coverage space is complex and challenging, to facilitate the discussion of sensor deployment, a two-dimensional (2-D) potential space is discussed in this study. The 2-D potential space is divided into small grids, and each grid is characterized by a length a and a width b (a and b are less than R, the effective sensing range of a sensor). The center point of each grid is the potential placement of a sensor. Only one sensor can be deployed in a grid. In this article, a grid is considered to be covered when its center point falls within the sensing range. The model is depicted in Figure 1. In this case, b < a < R, and nine grids are covered by the sensing range.
Sensor coverage problems can be divided into the one-coverage problem and the k-coverage problem. 14 The k-coverage of the targets indicates that each target node must be covered by at least k sensor nodes. 39 How many sensor nodes are needed depends on the specific issues and requirements of the application. The higher k is, the more the WSN costs, but the more stable the WSN is: even after the failure of k − 1 sensor nodes, the target will still remain covered. However, in some applications k must be no less than 3; in that case, two-coverage is insufficient. The concept of the k-coverage model (i.e. k = 2, 3, 4) can be seen in Figure 2. In an indoor positioning application, how many sensors are needed depends on the method used for the positioning calculation. 8
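The k-coverage check described above can be sketched as follows, assuming a Euclidean sensing model in the 2-D plane; the sensor coordinates and sensing range used in the example are hypothetical.

```python
import math

def coverage_count(target, sensors, R):
    """Count how many sensors cover a target (Euclidean distance <= R)."""
    return sum(1 for (sx, sy) in sensors
               if math.hypot(target[0] - sx, target[1] - sy) <= R)

def is_k_covered(target, sensors, R, k=3):
    """A target is k-covered when at least k sensors are in range."""
    return coverage_count(target, sensors, R) >= k

# Hypothetical sensor positions and range
sensors = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (5.0, 5.0)]
print(coverage_count((1.0, 1.0), sensors, R=3.0))        # 3
print(is_k_covered((1.0, 1.0), sensors, R=3.0, k=3))     # True
```

With k = 3, this target would still be covered after the failure of any two of the three sensors in range, matching the robustness argument above.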

Method of indoor positioning calculation
Recent approaches to indoor positioning are generally based on distance measurement. 11 A distance-measurement indoor positioning algorithm measures the distance between the target node and its anchor nodes (sensors), and then calculates the coordinates of the target node according to geometric principles. The most commonly used distance-measurement indoor positioning algorithms are trilateration, time of flight, and maximum likelihood estimation, 11 and the calculation of the coordinates of the target node needs at least three anchor nodes (the three-coverage requirement), for example, in the trilateration algorithm. The principle of trilateration is shown in Figure 3(a). The coordinates of the three anchor points are known as (x_1, y_1), (x_2, y_2), (x_3, y_3) and their distances from the target node are d_1, d_2, and d_3, respectively. Assuming that the coordinates of the target node are (x, y), the following equations exist

$$(x - x_1)^2 + (y - y_1)^2 = d_1^2, \quad (x - x_2)^2 + (y - y_2)^2 = d_2^2, \quad (x - x_3)^2 + (y - y_3)^2 = d_3^2 \tag{1}$$

Subtracting the third equation from each of the first two linearizes the system, and the coordinates of the target node can be obtained as

$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2(x_1 - x_3) & 2(y_1 - y_3) \\ 2(x_2 - x_3) & 2(y_2 - y_3) \end{bmatrix}^{-1} \begin{bmatrix} x_1^2 - x_3^2 + y_1^2 - y_3^2 + d_3^2 - d_1^2 \\ x_2^2 - x_3^2 + y_2^2 - y_3^2 + d_3^2 - d_2^2 \end{bmatrix} \tag{2}$$

Time of flight or TOA exploits the signal propagation time to calculate the distance between the anchor node and the target node. The time of flight multiplied by the speed of light c = 3 × 10^8 m/s gives the physical distance between the anchor node and the target node. Similar to the principle of trilateration, by using three different anchor nodes to estimate the distances between the anchor nodes and the target node, the coordinates of the target node can be obtained.
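The trilateration principle above can be sketched in code: subtracting the third circle equation from the first two yields a 2×2 linear system in (x, y), which is solved directly. The anchor layout and target position in the example are hypothetical.

```python
import math

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Closed-form trilateration: subtracting the third circle equation
    from the first two gives a 2x2 linear system in (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a11, a12 = 2 * (x1 - x3), 2 * (y1 - y3)
    a21, a22 = 2 * (x2 - x3), 2 * (y2 - y3)
    b1 = x1**2 - x3**2 + y1**2 - y3**2 + d3**2 - d1**2
    b2 = x2**2 - x3**2 + y2**2 - y3**2 + d3**2 - d2**2
    det = a11 * a22 - a12 * a21  # nonzero if the anchors are not collinear
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

# Hypothetical anchors at (0,0), (4,0), (0,4); true target at (1, 2)
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
dists = [math.hypot(1 - x, 2 - y) for x, y in anchors]
x, y = trilaterate(*anchors, *dists)
print(round(x, 6), round(y, 6))  # 1.0 2.0
```

With exact distances the recovered position is exact; with noisy RSSI-based distances the result degrades, which is precisely why distance estimation accuracy matters in the discussion that follows.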
Maximum likelihood estimation requires k-coverage of the target node, with k greater than three. With the coordinates of k sensors and the distances between each sensor and the target, k equations can be established. The principle is shown in Figure 3(b): the n anchor node coordinates are (x_1, y_1), (x_2, y_2), (x_3, y_3), ..., (x_n, y_n) and their distances from point E (the target node) are d_1, d_2, d_3, ..., d_n. Assuming that the coordinates of E are (x, y), the following equations exist

$$(x - x_i)^2 + (y - y_i)^2 = d_i^2, \quad i = 1, 2, \ldots, n \tag{3}$$

The method of standard minimum mean square error estimation is used in the maximum likelihood estimation to obtain the coordinates of the target node.
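A minimal sketch of the least-squares step used here: subtracting the n-th circle equation from the others gives an over-determined linear system A v = b, solved via the normal equations. The anchor layout is hypothetical, and the normal equations are written out by hand to keep the sketch dependency-free.

```python
import math

def multilaterate(anchors, dists):
    """Least-squares position from k >= 3 anchors: subtract the last
    circle equation from the others to get A v = b, then solve the
    normal equations (A^T A) v = A^T b for the two unknowns."""
    (xn, yn), dn = anchors[-1], dists[-1]
    A, b = [], []
    for (xi, yi), di in zip(anchors[:-1], dists[:-1]):
        A.append([2 * (xi - xn), 2 * (yi - yn)])
        b.append(xi**2 - xn**2 + yi**2 - yn**2 + dn**2 - di**2)
    s11 = sum(r[0] * r[0] for r in A)
    s12 = sum(r[0] * r[1] for r in A)
    s22 = sum(r[1] * r[1] for r in A)
    t1 = sum(r[0] * bi for r, bi in zip(A, b))
    t2 = sum(r[1] * bi for r, bi in zip(A, b))
    det = s11 * s22 - s12 * s12
    return (t1 * s22 - t2 * s12) / det, (s11 * t2 - s12 * t1) / det

# Hypothetical square of four anchors; true target at (1.5, 2.5)
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
dists = [math.hypot(1.5 - x, 2.5 - y) for x, y in anchors]
x, y = multilaterate(anchors, dists)
print(round(x, 6), round(y, 6))  # 1.5 2.5
```

With noisy distances the normal equations return the minimum mean square error estimate rather than the exact position, which is the behavior the article relies on for k > 3.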
Through the analysis of the above algorithms, some conclusions can be drawn. First, indoor positioning using those algorithms is a typical k-coverage problem. Second, distance estimation is one of the important factors of positioning accuracy. Furthermore, in calculating the distance from the target node to a sensor node, the received signal must be stronger than the disturbance signal amplitude of the sensor itself; otherwise the distance estimation will be inaccurate. 9,10 Thus, improving the distance estimation between sensor and target may be a direction for improving indoor positioning accuracy.
According to the principle of distance-measurement indoor positioning algorithms, the more accurate the distance between sensor and target is, the more accurate the coordinate calculation of the target node is, and thus a higher positioning accuracy is obtained. Therefore, theoretically, ensuring that the target node is within sensor range improves the accuracy of the distance between the target node and the anchor node, which is important for ensuring the accuracy of indoor positioning.

Geometry model of critical-grid coverage
In indoor positioning applications, the definitions of critical grids (hot spots or interest points) obviously differ due to the different objectives, users, and other aspects of the application. 14 However, the decision points are the most critical grids in indoor positioning, since navigation instructions are needed where there are multiple routes, such as entrances or exits of buildings, corridor crossings or road intersections, stairs or lifts, toilets, and other interest points. 13,27 Thus, this study assumes that the critical grids are those decision points, and n decision points (critical grids) are manually designated according to whether they can help people make decisions. These points need to ensure their positioning accuracy, or the highest positioning accuracy in the whole indoor region. Figure 4(a) is an example of the critical areas of a conceptual indoor environment, and Figure 4(b) shows the critical-grid coverage model derived from Figure 4(a). Exits, road intersections, lifts, and toilets are considered as the decision points. In this example, the map of the indoor environment is converted to the sensor coverage model described above, and a critical grid is covered by three sensors.

Description of problem
According to the geometry model of critical-grid coverage for indoor positioning, the optimization problem can initially be described as maximizing the critical-grid coverage and minimizing the number of sensors. Assuming that the trilateration method is used for the indoor positioning, to ensure the positioning accuracy of the critical grids, each target node should be k-covered, with k ≥ 3. In terms of the topological rationality of the sensor distribution, some restrictions will be considered in the optimization.
Thus, the optimization problem in this study can be described as follows: the 2-D plane space is divided into a number of grids, among which there are a number of critical grids. Only one sensor can be deployed in each grid, and no sensor can be deployed in the grids adjacent to a grid with a deployed sensor; this is the topological rationality of the sensor distribution in this study. Figure 5 is an example of the topological restriction of sensor distribution. When a sensor is deployed in a grid, the eight adjacent grids prohibit the deployment of sensors, as shown in Figure 5. This makes the sensor deployment as decentralized as possible, to suit the trilateration algorithm.
To justify the topological rationality of the sensor distribution, the topological restriction needs to be tested and verified. Thus, a case study in a real indoor environment is required, evaluating the positioning accuracy of the optimized scheme of sensor deployment and then comparing the proposed method to the traditional method. In this way, the advantage of the proposed method can be clearly observed.
Different schemes of sensor deployment provide different coverage of the critical grids (i.e. different distances between sensors and targets, and different topological structures of sensors and targets), which results in different positioning accuracy and deployment cost. How to deploy sensors to achieve the highest coverage (accuracy) and the lowest cost for the critical grids is the focus of this article. The mathematical expression of the optimization model is introduced in the next section.

Expression of optimization model
The critical-grid coverage problem is converted to the question: how many target nodes are located within the effective distance of a sensor? Meanwhile, the number of sensors for each target node must be no less than three within the effective sensing range. The first objective function of the optimization model is as follows. Suppose i is the i-th sensor, J is the total number of sensors, S is the set of target nodes, P is a subset of S, and R is the maximum effective distance of a sensor. If a target node is located in the sensing range of a sensor, the sensor is called an effective sensor of the target node. We denote by T_1 the total number of target nodes within the effective distance of the sensors. Therefore, the objective function, the maximum number of target nodes per effective sensor, is expressed as

$$T_1 = \max \sum_{i=1}^{J} N_i, \quad N_i = \lvert \{\, p \in P : D(P, i) < R \,\} \rvert \tag{4}$$

where N_i is the number of target nodes within the range of the i-th sensor, and D(P, i) represents the distance between the i-th sensor and a target node in the P set. Each sensor has a corresponding P set, in which each target node must satisfy the condition that its distance from the sensor is less than R. For a given sensor placement scheme, the P set of each sensor can be generated by the following steps:

1. Distance calculation: calculate the distance between each sensor and all target nodes.
2. Effective sensor checking: if the distance is less than R, the sensor is an effective sensor of the target node, and the target node is put into the P set of the sensor.
3. Traversal circulation: after traversing all sensors in the given sensor set for distance calculation and effective sensor checking, the P sets of all sensors have been generated.
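The P-set generation steps above can be sketched as follows; the sensor and target coordinates in the example are hypothetical.

```python
import math

def build_p_sets(sensors, targets, R):
    """For each sensor, collect the target nodes within sensing range R
    (the sensor's P set)."""
    p_sets = []
    for sx, sy in sensors:                                  # traversal circulation
        p = [t for t in targets                             # distance calculation
             if math.hypot(t[0] - sx, t[1] - sy) < R]       # effective sensor check
        p_sets.append(p)
    return p_sets

# Hypothetical layout: two sensors, three target nodes
sensors = [(0.0, 0.0), (10.0, 10.0)]
targets = [(1.0, 1.0), (2.0, 0.0), (9.0, 9.0)]
p_sets = build_p_sets(sensors, targets, R=3.0)
print([len(p) for p in p_sets])          # [2, 1]
print(sum(len(p) for p in p_sets))       # 3, the value summed in objective T_1
```

The sum of the P-set sizes over all sensors is exactly the quantity maximized by the first objective function.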
Due to the three-coverage requirement of indoor positioning, the following constraint is considered. Assuming that the number of target nodes is N and j is the j-th target node, the number of sensors within the distance R of each target node S_j must be greater than or equal to 3, and its mathematical expression is

$$M_j = \lvert \{\, q \in Q : D(Q, j) < R \,\} \rvert \ge 3, \quad j = 1, 2, \ldots, N \tag{5}$$

where M_j is the number of effective sensors of the j-th target node, and D(Q, j) represents the distance between the j-th target and a sensor in the Q set. Q is a subset of the set of all sensors, and the method for calculating the Q set is similar to that of the P set in equation (4). For the second objective function, assuming that the cost of each sensor is Y and the number of sensors is J, the minimum total cost of the deployment scheme is T_2. The second objective function of the optimization model can be written as

$$T_2 = \min (J \times Y) \tag{6}$$

Model solving method

As mentioned in the literature review, NSGA-II has recently been used as a tool to solve many multi-objective optimization problems in WSNs. Deb et al. 42 proposed an improved version of NSGA, named NSGA-II. The NSGA-II procedure starts by building a population of individuals. Next, the candidate solutions are sorted and ranked into fronts according to a non-dominance rule. Then, it applies the evolutionary operators, namely crossover and mutation, to find a new population of offspring. The crowding distance determines how far each solution is from the others in the same front, and it aims at keeping the diversity among the solutions. This guarantees the diversity of the population and improves the exploration of the fitness landscape. 35 In this study, NSGA-II is used to solve the multi-objective optimization problem. In the GA, the decision variables are coded into chromosomes, and the fitness value (objective function value) of each chromosome is calculated. 32 Thus, the 2-D plane grids are transformed into the gene chain of a chromosome through the grid ID, as shown in Figure 6.
Suppose the 2-D plane is divided into K grids, indexed by grid ID starting from 1. Binary coding is used to transform the decision variables into a chromosome. Since a sensor has K alternative locations, each scheme of sensor deployment can be represented by K binary digits in a chromosome. When the value of a gene is 0, there is no sensor in that position; when the value of a gene is 1, a sensor is present at the explicit position identified by the grid ID. For simplicity, the coordinates of a grid center stand for the grid; they can be computed in the 2-D coordinate system. The grid ID links the gene chain of a chromosome to the coordinates of the grid center in the 2-D coordinate system. In this way, the connection between the chromosome and the 2-D grid is established.
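The grid-ID encoding can be sketched as below. The article only states that grid IDs start from 1; the row-major numbering and unit grid size used here are assumptions of this sketch.

```python
def grid_center(grid_id, cols, a=1.0, b=1.0):
    """Map a 1-based grid ID to the grid-center coordinates, assuming
    row-major numbering over `cols` columns of a x b grids."""
    row, col = divmod(grid_id - 1, cols)
    return ((col + 0.5) * a, (row + 0.5) * b)

def decode(chromosome, cols, a=1.0, b=1.0):
    """A chromosome is K binary genes; gene i == 1 means a sensor is
    deployed at the center of grid i + 1."""
    return [grid_center(i + 1, cols, a, b)
            for i, g in enumerate(chromosome) if g == 1]

# 3 x 4 plane (K = 12 grids), sensors in grids 1 and 6
chrom = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
print(decode(chrom, cols=4))   # [(0.5, 0.5), (1.5, 1.5)]
print(sum(chrom))              # 2 sensors, so deployment cost is 2 * Y
```

Counting the genes with value 1 directly yields the sensor count used by the cost objective T_2.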
The cost of a sensor deployment scheme, the second objective, can be easily obtained by summing the genes with value 1, while the calculation of the first objective function is less straightforward. First, the chromosome needs to be converted to locations (coordinates) in the 2-D plane via the grid IDs. Second, the distance between each sensor and each target node must be calculated. Third, the value of the first objective function and its constraint can be obtained. Moreover, the topological restriction of sensor distribution should be considered beforehand in the generation of parent chromosomes and offspring.
The steps of the NSGA-II algorithm for sensor placement optimization are as follows. Step 1: Random generation of the original population of size P. Each individual represents a sensor placement scheme, and each position represents a gene. Set t = 1.
Step 2: Updating each individual to satisfy the constraints. First, no sensor may be deployed in the grids adjacent to a deployed sensor in the 2-D plane. To enforce this, examine all grids: if the value of a gene is 1, find the site of the grid in the 2-D plane by the ID in the gene chain, and then find its eight adjacent grids and their IDs in the gene chain. If any of those eight genes has the value 1, change it to 0. Second, according to formula (5), the number of effective sensors for each target node must be no less than three. To satisfy this, each target node is checked: if it has fewer than three effective sensors, additional sensors are added by changing the value of the gene at the appropriate position of the gene chain from 0 to 1. Figure 7 is an example of updating an individual to satisfy the constraints.
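Step 2 can be sketched as a repair operator over the binary gene chain. Row-major grid numbering, unit grids, and the random choice of which grid receives an added sensor are assumptions of this sketch (the article does not specify the selection rule).

```python
import math
import random

def repair(chrom, rows, cols, targets, R, k=3, a=1.0, b=1.0):
    """Sketch of the Step 2 repair: (1) clear genes in the eight grids
    adjacent to any deployed sensor; (2) switch on extra grids until
    every target node has at least k effective sensors."""
    chrom = chrom[:]

    def center(i):  # 0-based gene index -> grid-center coordinates
        r, c = divmod(i, cols)
        return ((c + 0.5) * a, (r + 0.5) * b)

    # (1) Topological restriction: no sensor in the eight adjacent grids
    for i, g in enumerate(chrom):
        if g != 1:
            continue
        r, c = divmod(i, cols)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr, dc) != (0, 0) and 0 <= rr < rows and 0 <= cc < cols:
                    chrom[rr * cols + cc] = 0

    # (2) k-coverage: add sensors for under-covered target nodes
    def n_eff(t):
        return sum(1 for i, g in enumerate(chrom) if g == 1 and
                   math.hypot(t[0] - center(i)[0], t[1] - center(i)[1]) < R)

    for t in targets:
        while n_eff(t) < k:
            candidates = [i for i, g in enumerate(chrom) if g == 0 and
                          math.hypot(t[0] - center(i)[0],
                                     t[1] - center(i)[1]) < R]
            if not candidates:
                break  # this target cannot be k-covered on the plane
            chrom[random.choice(candidates)] = 1
    return chrom
```

For example, on a 1 × 4 strip, `repair([1, 1, 0, 0], 1, 4, [], 3.0)` clears the second gene because it is adjacent to the first, returning `[1, 0, 0, 0]`.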
Step 3: Evaluation of the objective functions T_1 and T_2 of each individual according to formulas (4) and (6). In this way, each sensor deployment scheme corresponds to two objective function values.
Step 4: Non-dominated sorting of the individuals. According to the values of the objective functions, the population is divided into different non-inferior layers by non-dominated sorting. The non-dominated solutions (non-inferior layers) are reserved in an elite set of size P by the crowded comparison operator 42 and updated at each iteration.
Step 5: Judgment of the termination conditions. If the termination condition is reached (e.g. t > maximum generation), the iteration stops; otherwise, go to Step 6. The maximum generation is one of the parameters commonly used in the termination conditions of NSGA-II algorithms. As a known input parameter of the algorithm, it is specified by the user, for example, 100, 500, and so on.
Step 6: New population generation. First, establish the optimization pool: two individuals are selected randomly from the parent population (elite set), and according to the non-inferior layers, the better one is chosen and put into the optimization pool. Second, generate the child population through crossover and mutation operations. Crossover operation: two individuals are randomly selected from the optimization pool, and a portion of the genes in the two individuals is randomly exchanged. Mutation operation: an individual is randomly selected from the optimization pool, and a portion of the genes in the individual is randomly flipped. The method used in this study is similar to Li et al. 21 Finally, the parent population and child population are combined to generate a new population; set t = t + 1 and go back to Step 2.
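The crossover and mutation operators of Step 6 can be sketched as below. The article only says a portion of the genes is exchanged or changed; the single-point cut and per-gene flip probability used here are assumptions of this sketch.

```python
import random

def crossover(p1, p2):
    """One-point crossover: exchange the gene segments after a random
    cut point between two parent chromosomes."""
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(chrom, rate=0.3):
    """Bit-flip mutation: each binary gene flips with probability `rate`
    (0.3 matches the mutation rate reported in the experiments)."""
    return [1 - g if random.random() < rate else g for g in chrom]

random.seed(0)  # fixed seed so the sketch is reproducible
a, b = [1, 1, 1, 1], [0, 0, 0, 0]
c1, c2 = crossover(a, b)
print(sum(c1) + sum(c2))  # 4: the offspring keep the parents' total genes
```

After these operators, the offspring would be passed back through the Step 2 repair, since crossover and mutation can both break the adjacency and k-coverage constraints.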

Introduction of optimization experiment
The sensors selected in this experiment were iBeacons, serving as the anchor nodes. iBeacon technology was launched by Apple in 2014. 43 Based on Bluetooth 4.0 technology, iBeacon stands out among positioning sensors because of its low power consumption, fast response, good performance, and low price. Moreover, even the cheapest smart phones embed a Bluetooth module, which allows iBeacon to be widely used and deployed.
As the distance between the smart phone and the iBeacon node changes, the change of the received signal strength indicator (RSSI) is observed, which provides an empirical basis for the maximum effective distance R of the sensor. Eight groups of data were measured in the experiment, as shown in Figure 8. It can be seen from the graph that when the distance is in the range from 0 to 3 m, the value of RSSI decreases sharply. When the distance is more than 3 m, the value of RSSI changes smoothly with increasing distance. To guarantee the positioning accuracy, the maximum effective distance R of the iBeacon was set to 3 m, which is consistent with Ng's 44 study.
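The article does not give the RSSI ranging model it uses; a commonly used choice for BLE beacons is the log-distance path-loss model, sketched below. The reference RSSI at 1 m and the path-loss exponent n are assumptions of this sketch and would in practice be calibrated from measurements such as those in Figure 8.

```python
def rssi_to_distance(rssi, rssi_at_1m=-59.0, n=2.0):
    """Log-distance path-loss model: RSSI = RSSI(1 m) - 10 n log10(d),
    inverted to estimate d. Both parameters are assumed, not measured."""
    return 10 ** ((rssi_at_1m - rssi) / (10.0 * n))

print(round(rssi_to_distance(-59.0), 3))  # 1.0 m at the reference RSSI
print(round(rssi_to_distance(-71.0), 3))  # ~3.981 m (12 dB below it)
```

The shape of this curve matches the observation above: the estimated distance changes rapidly near the beacon and flattens out beyond a few meters, which is why ranging beyond R = 3 m becomes unreliable.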
In this experiment, the underground parking lot of a university was selected as the site for the optimized placement of iBeacons, and the parking space was transformed into a 2-D coordinate system, as shown in Figure 9. According to the parking environment, some important areas and interest points were designated as the target nodes, a total of 24 target nodes. These nodes were uniformly distributed over the main road of the indoor parking lot for the purpose of an indoor navigation application. The potential deployment space for the iBeacons was within the red polygon shown in Figure 9; the space outside the red polygon was assumed not to be critical in the indoor parking lot. Within the red polygon, the potential deployment space was divided into 1 m square grids, a total of 495 grids.

Validation method
In order to validate the proposed method of sensor placement optimization for the critical-grid coverage problem, it was compared with a traditional deployment method. Traditional deployment methods include uniform deployment and staggered triangular grid deployment. In this article, the uniform deployment method was selected for comparison (hereinafter referred to as the control scheme). In the uniform deployment scheme, the distance between two deployed iBeacon nodes is controlled at a fixed value; in this article, the distance is 3 m, which is consistent with the maximum effective distance R of the iBeacon.
In terms of quantitative indicators for evaluating the sensor deployment schemes, this article evaluated them from two perspectives: 1. the two indicators of a deployment scheme, namely the number of target nodes within the effective distance R of each sensor (T_1) and the cost (T_2); 2. real environment comparison: comparing the average positioning error at the same observation points for the two schemes, the optimized scheme and the control scheme.

Results and discussion
Simulation results. In the parameter setting of the NSGA-II algorithm, the population size is 100, the crossover rate is 0.5, and the mutation rate is 0.3. Different numbers of evolutionary generations are used as termination conditions: 100, 200, 500, 1000, 5000, and 10,000 generations, respectively. The experiment ran on a general computing server with an Intel(R) Xeon(R) CPU E3-1535M processor at 2.9 GHz and 16 GB of memory. The computation time and objective function values under different generations are shown in Table 1. As can be seen from the table, with the increase of generations, the objective T_2 (cost) decreases gradually and the computation time increases several-fold but remains within an acceptable range, while the objective T_1 shows an increasing trend, which means that the utilization of each sensor for the 24 target nodes is maximized. In the worst case, T_1 remains unchanged or decreases as T_2 increases, which indicates that the additional sensors are not fully utilized. This demonstrates the effectiveness of the optimization method.
The T1 of the control scheme in Table 1 is slightly higher than that of all the optimized deployment schemes, but its deployment cost (T2) is relatively high. In general, a deployment scheme can be regarded as cost-effective when T2 is low and T1 is high. The higher T1 of the control scheme can be explained by its greater spending on sensors for the same 24 target nodes: with more sensors, the average number of sensors covering each target node rises, and T1 rises with it. The control scheme therefore has certain advantages in general, but the optimized method may yield more cost-effective schemes in specific situations. It should be highlighted that, according to Table 1, after 10,000 iterations of the NSGA-II algorithm, the cost T2 (i.e. the number of nodes) is reduced by about 30% with respect to the control scheme, while the coverage (T1) is reduced by only 3.8%. This suggests that a strong saving may be achieved without a significant reduction in coverage.
Contrast experiment under real environment. To further verify and evaluate the proposed method, an LPS was built, composed of a server and iBeacons deployed according to an optimized scheme and the control scheme. The server is a Dell computer with an Intel Core i5 processor and 8 GB of memory. Each iBeacon was set to a 10 Hz sample rate with 0 dBm transmit power. The mobile platform is a Huawei smartphone running Android 6.0. The trilateration method is used in the experiment to calculate the position of an unknown node. Considering the cost of an LPS in a real environment, this article chooses the optimized schemes obtained after 100 generations (Scheme A, 65 sensors) and 500 generations (Scheme B, 61 sensors) from Table 1, together with the control scheme (76 sensors), as shown in Figure 10. Each of the 24 target nodes was then tested three times; the positioning errors at these points were observed and the average errors calculated to evaluate the different sensor deployment schemes. The error formula adopted in this experiment is as follows:

E = sqrt((x_p - x_p0)^2 + (y_p - y_p0)^2)

where E represents the positioning error, (x_p, y_p) are the coordinates of the target node measured by the LPS, and (x_p0, y_p0) are the real coordinates of the target node.
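As a sketch of the position calculation used in the experiment, the trilateration step with three anchors can be linearized by subtracting the first range equation from the others, leaving a 2x2 linear system; the estimate is then scored with E, the Euclidean distance between the estimated and true positions. The anchor coordinates below are hypothetical.

```python
from math import hypot

def trilaterate(anchors, dists):
    """Estimate (x, y) from three anchors and measured distances by
    subtracting the first range equation from the other two."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # Rows of the 2x2 system A [x, y]^T = b.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # anchors must not be collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

def positioning_error(est, true):
    """E = sqrt((x_p - x_p0)^2 + (y_p - y_p0)^2)."""
    return hypot(est[0] - true[0], est[1] - true[1])

# Hypothetical check: exact distances recover the true position.
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
true = (1.0, 2.0)
dists = [hypot(true[0] - ax, true[1] - ay) for ax, ay in anchors]
est = trilaterate(anchors, dists)
print(round(positioning_error(est, true), 6))  # 0.0
```

With noise-free ranges the error is zero by construction; in the real experiment the RSSI-derived distances carry attenuation noise, which is what the averaged E values in Table 2 measure.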
To describe the error distribution of each target node in Schemes A, B, and C, the Geometrical Dilution of Precision (GDOP) is given in Figure 11. None of the schemes achieves the minimum error for all target nodes, which may be related to the spatial locations of the target nodes and the anchor nodes: different schemes have different spatial distributions of sensors, and each scheme has its own advantages.
From the GDOP in Figure 11 and the spatial distribution of iBeacon nodes in Figure 10, some rules or characteristics of sensor placement can be identified. First, the closer the anchor nodes are to a target node, the smaller that node's positioning error is; examples are Target Node 8 in Scheme B and Target Node 19 in Scheme A. Second, for some target nodes with large positioning errors, the spatial placement of sensors is characterized by the target node not being completely surrounded by the anchor nodes, or lying outside the triangle formed by three anchor nodes; examples are Target Node 4 in Scheme B, Target Node 9 in Scheme C, and Target Node 12 in Scheme A. Third, for some target nodes with similar positioning errors in the three schemes, the anchor nodes in the optimized scheme are closer to the target node than in the control scheme, yet the target node is still not completely surrounded by anchor nodes, such as Target Nodes 7, 10, and 24. This indicates that the topological structure of sensors and targets can significantly affect the performance of a particular positioning algorithm, as also noted in Bishop et al.'s 45 research.

Table 2 shows the average errors of the 24 target nodes under 100 generations (Scheme A), 500 generations (Scheme B), and the control scheme (Scheme C). The average errors exceeded 1 m for all three schemes, since iBeacon is not suited to precise positioning and is better suited to proximity positioning. 46 The average error of Scheme B was 1.13 m, similar to Chen et al.'s 47 result of 1.11 m. The maximum errors of the two optimized schemes were 1.89 and 2.39 m, respectively, smaller than the 2.77 m reported in Chen et al.'s study. 47 Overall, a smaller error can be observed for the optimized schemes compared with the control scheme.
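The geometric effect described above can be quantified. A minimal 2D dilution-of-precision calculation (the anchor layouts below are hypothetical, not those of Figure 10) builds the matrix H of unit vectors from the target to each anchor and evaluates sqrt(trace((H^T H)^-1)): anchors that surround the target yield a small value, while anchors bunched on one side of it yield a large one, matching the observation that unsurrounded target nodes show larger errors.

```python
from math import cos, sin, radians, sqrt

def dop_2d(target, anchors):
    """2D dilution of precision for range-based positioning:
    sqrt(trace((H^T H)^{-1})), H rows = unit target-to-anchor vectors."""
    tx, ty = target
    rows = []
    for ax, ay in anchors:
        d = sqrt((ax - tx) ** 2 + (ay - ty) ** 2)
        rows.append(((ax - tx) / d, (ay - ty) / d))
    # Accumulate the 2x2 matrix M = H^T H.
    m11 = sum(r[0] * r[0] for r in rows)
    m12 = sum(r[0] * r[1] for r in rows)
    m22 = sum(r[1] * r[1] for r in rows)
    det = m11 * m22 - m12 * m12
    return sqrt((m11 + m22) / det)  # trace of the 2x2 inverse

# Anchors evenly surrounding the target vs. bunched on one side.
surround = [(cos(radians(a)), sin(radians(a))) for a in (0, 120, 240)]
one_side = [(cos(radians(a)), sin(radians(a))) for a in (0, 10, 20)]
print(round(dop_2d((0.0, 0.0), surround), 3))  # 1.155
print(dop_2d((0.0, 0.0), one_side) > dop_2d((0.0, 0.0), surround))
```

The evenly spread layout gives the theoretical 2D minimum of 2/sqrt(3) for three unit-range anchors, while the one-sided layout is several times worse despite identical anchor-target distances.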
This indicates that the optimization of sensor placement for critical-grid coverage can reduce the error of indoor positioning. Furthermore, it provides a new direction for improving the accuracy of indoor positioning, besides the methods of signal filtering, multisignal fusion, and localization algorithm improvement.
Table 2 shows that, as the number of optimization generations increases, the average error of the 24 target nodes tends to decrease, while the average error of the control scheme is the largest. The standard deviation of the optimized schemes is also slightly smaller than that of the control scheme. This implies that the optimization method for critical areas outperforms traditional uniform sensor deployment in this study. The reason might be that in the optimized schemes the iBeacon nodes were placed around the target nodes (at least three anchor nodes surround each target node), whereas in the control scheme the sensors were evenly distributed and could not completely surround all target nodes, especially in cases such as that shown in Figure 12, where the distance from the target node to anchor node C is larger than 3 m. This increases the error of the measured distance owing to the attenuation characteristic of iBeacon, discussed in the ''Introduction of optimization experiment'' section, and might explain why the average error of Target Nodes 1 to 14 was slightly higher than that of Target Nodes 15 to 24 in Figure 8.

Analysis of Figure 10 and Table 2 shows that the NSGA-II optimization method reduced the number of sensors in the optimized schemes compared with the control scheme: by about 15% for Scheme A (from 76 to 65 sensors) and by about 20% for Scheme B (from 76 to 61 sensors). Meanwhile, comparing Scheme B with the control scheme, an accuracy improvement of about 21.5% was also obtained, as the average error fell from 1.44 to 1.13 m. Thus, the proposed optimization method may lead to a 21.5% accuracy improvement together with a cost reduction of about 20%.
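The reported reductions can be checked with a few lines of arithmetic, using the sensor counts and average errors quoted above from Figure 10 and Table 2:

```python
def pct_reduction(before, after):
    """Percentage reduction from a baseline value."""
    return (before - after) / before * 100

print(round(pct_reduction(76, 65), 1))      # 14.5 -> "about 15%" (Scheme A sensors)
print(round(pct_reduction(76, 61), 1))      # 19.7 -> "about 20%" (Scheme B sensors)
print(round(pct_reduction(1.44, 1.13), 1))  # 21.5 -> accuracy improvement (Scheme B)
```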
In this study, the positioning error of some nodes in the optimized schemes is higher than that in the uniform deployment, which indicates that the optimized schemes are not globally optimal, or that the optimization objective functions need further improvement in future work. Sensitivity studies for the parameters and the effect of scalarization on the optimization problem, discussed by Soman et al., 48 were not conducted in this study, since the focus of this article is sensor placement optimization for the critical-grid coverage problem. Sensor deployment in environments containing obstacles has also attracted considerable attention from WSN researchers. 49 Our future studies may focus on sensor placement optimization for the critical-grid coverage problem considering obstacles, parameter sensitivity, and scalarization effects.

Conclusion
There is a great demand for indoor positioning, and the accuracy required in critical areas and at points of interest keeps increasing. Starting from related work on sensor optimization, the computing model for critical-grid coverage and the optimization model of sensor placement were introduced, and the NSGA-II algorithm was used to optimize the placement of iBeacon nodes. To verify the proposed method, a real example of a university underground parking lot was given, and simulation and comparative analysis were conducted. The results showed that, in general, the optimized schemes achieved better accuracy than the control scheme within a relatively short computing time. This might be because the optimized schemes have a more reasonable sensor-target geometry, which better suits distance-measurement-based indoor positioning algorithms, so the positioning error of the optimized schemes was lower than that of the uniform deployment scheme. In conclusion, the proposed method can reduce sensor deployment cost while ensuring the positioning accuracy of indoor critical areas. In addition to common techniques such as signal filtering, multisignal fusion, and localization algorithm improvement, this work provides a new direction for improving the accuracy of indoor positioning.
In the future, further research will be needed on (1) complex indoor environments, such as obstacles that block sensor signals; (2) the spatial resolution (scalarization), which may increase the computation cost; and (3) the inherent adaptability of different indoor positioning algorithms, such as maximum likelihood estimation, the centroid algorithm, and fingerprint algorithms, to different topological structures of sensor deployment.

Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was funded by the Basic Research Plan for Public Welfare of Zhejiang Province (LGG21A010001), the Provincial Key Research and Development Program of Zhejiang (2018C01016), and the Scientific Research Project of Zhejiang Education Department (Y201942160).