A novel edge gradient algorithm for multiple mobile robots cooperative mapping in unknown environment

This article presents a cooperative mapping technique for multiple mobile robots using a novel edge gradient algorithm. The proposed edge gradient algorithm can be divided into four behaviors: adjusting the movement direction, evaluating the safety of motion behavior, following behavior, and obstacle information exchange, which together effectively prevent multiple mobile robots from falling into concave obstacle areas. Meanwhile, a visual field factor is constructed based on biological principles so that the mobile robots have a larger field of view when moving away from obstacles. Conversely, when approaching an obstacle, the field of view narrows owing to the obstruction of the obstacle, so the obtained map-building data are more accurate. Finally, three sets of simulation and experimental results demonstrate the superior performance of the presented algorithm.


Introduction
Mapping, localization, and path planning are three major tasks in robotic navigation. Autonomous mapping is one of the crucial and challenging problems for mobile robots. 1,2 Currently, the simultaneous localization and mapping (SLAM) method has proven to be an effective technique for achieving self-localization and map construction simultaneously, and it has been successfully applied in many robotics fields such as soil mapping on farms, 3 monitoring of power lines, 4 and online robot navigation systems. 5 In the last decade, many significant approaches to mobile robot localization and mapping have been developed. A fast SLAM algorithm based on the ball particle filter is presented for a mobile robot in the work of Jinwen and Qin; 6 the proposed SLAM algorithm is verified by a series of simulation experiments and exhibits good performance.
To improve the localization accuracy in SLAM, an improved iterative extended Kalman filter is developed, which is used to estimate the mobile robot's position. Then, a perception-driven hierarchical SLAM method that can be applied to search and rescue environments is presented in the work of Hongling et al. 7 Usually, mobile robot map construction methods are divided into three categories: geometrical, topological, and hybrid. A kind of topological map is used to model indoor environments, and a topological localization and mapping approach for mobile robots based on a human memory model is presented. 8 However, most reconstructed environment models include only geometric information, which limits the robot's autonomy in complex environments. The authors developed a three-dimensional (3-D) reconstructed semantic map using a Red Green Blue-Depth (RGB-D) image sequence in the work of Zhe and Chen. 9 Further, an indoor positioning system based on an RGB-D mapping method and a neural network training algorithm is developed in the work of Chun and Su, 10 which improves the association result in intelligent mobile robot pose tracking and applies to multirobot indoor positioning systems. Unlike the static environments of traditional research, map construction in dynamic environments is more challenging due to the uncertainty of the estimated state variables. [11][12][13] A new mapping method for long-term mobile robot operation in dynamic indoor environments is presented in the work of Tomáš et al. 14 In addition, by combining the normal distribution transform technique and occupancy mapping, a generic sensor-independent graph-based SLAM system is developed, which is suitable for 2-D and 3-D mapping in dynamic environments.
15 However, robot mapping in outdoor environments has attracted extensive attention due to various uncertainties and unknown disturbances. [16][17][18] The authors consider uncertainties arising from multiple sources, and a set-theoretic algorithm is developed to realize real-time terrain mapping for mobile robots in outdoor environments. 19 Subsequently, for a mobile robot in outdoor environments, a new 3-D SLAM technique with terrain inclination assistance is presented using the iterative closest point (ICP) algorithm. 20 Nowadays, the elevation map has become the most popular map representation in outdoor robot navigation. An outdoor localization system is developed using the presented elevation moment of inertia and a Monte Carlo localization method in the work of Tae-Bum et al. 21 However, vision-based SLAM methods may fail in outdoor mobile robot navigation systems due to image blur and low-cost hardware. Thus, the authors proposed a novel SLAM algorithm to handle low lighting, low-cost cameras, and perceptual change via a visual place recognition technique. 22 Accurate mapping in urban environments is a crucial topic for autonomous robots. Based on a sparse 2-D map, a global localization technique for 3-D point clouds is proposed, 23 which is verified on large data sets collected in two different real environments. However, all of the above-mentioned SLAM algorithms are built on the static-environment assumption, which limits their applications. Therefore, to improve the RGB-D SLAM algorithm in dynamic environments, the authors proposed a novel RGB-D data-based motion removal method with a freely moving RGB-D camera. 24 Subsequently, an online RGB-D data-based motion removal approach is proposed, 25 and it is worth mentioning that this algorithm does not require prior knowledge of the moving objects.
In general, multiple mobile robots can improve working capability and performance. Therefore, research on localization and mapping of multiple mobile robots has recently become a hot topic. The authors focused on the problem of cooperative mapping in unknown environments 26 and developed a new unified methodology and cooperation architecture for heterogeneous mobile robots with inexpensive sensing abilities. The SLAM problem is ingeniously decomposed into a recursive calculation. 27 Subsequently, a cooperative localization and mapping method for multiple mobile robots is proposed based on a hybrid dynamic belief propagation algorithm. However, each robot can build a map to complete its assigned task, and a multirobot system can obtain the relative locations through a shared map. Hence, the author developed an effective map construction method using a wave algorithm for a multirobot system. 28 Also, each mobile robot takes an independent task to avoid the problem of graphics redundancy in the multirobot system. The authors presented a decentralized multirobot mapping and graph exploration approach with collision avoidance. 29 As multirobot mapping systems play a critical role in many robotic applications, distributed cooperative mapping techniques for multiple mobile robots are required in unknown environments.
Note that the search efficiency of the above-mentioned methods is relatively low for large and complex environmental maps. Therefore, to increase the robustness and efficiency of mobile robot mapping, the unknown environment is divided into four quadrants and we present a cooperative mapping method using a novel edge gradient algorithm. The proposed algorithm with four different behaviors can effectively prevent multiple mobile robots from falling into concave obstacle areas. The main contributions of the article are summarized as follows: (1) the proposed edge gradient algorithm can be divided into four behaviors, which effectively prevent multiple mobile robots from falling into concave obstacle areas; (2) compared with existing research results, a visual field factor is constructed in this article based on biological principles, which makes the obtained map-building data more accurate; and (3) simulation and experimental results in real environments verify the feasibility of the presented edge gradient algorithm as an online cooperative mapping approach for multiple mobile robots. The remainder of this article is organized as follows. The second section describes the mobile robot model and the problem formulation. The third section presents the novel edge gradient algorithm. The fourth section shows the simulation and experimental results that demonstrate the performance of the proposed algorithm. The concluding remarks are given in the fifth section.

Mobile robots model and problem formulation
The current position of the robot is taken as the coordinate origin and the unknown environment is divided into four quadrants. Suppose each robot is equipped with ultrasonic and infrared sensors with a maximum sensing range of 5 m. The robot moves at a constant speed v. We can obtain the following motion equations of the robot at time k + 1:

x_{k+1} = x_k + v cos(θ) dt
y_{k+1} = y_k + v sin(θ) dt

where x_k and y_k are the position coordinates at time k, v and θ denote the velocity and the angle between the velocity direction and the abscissa axis, and dt is the sampling time. As shown in Figure 1, the field of view of the robot can be described as follows.
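As an illustration, the constant-speed motion update described above (the position advanced by v·cos θ·dt and v·sin θ·dt per sampling step) can be sketched in Python; the function name and argument order are ours, not from the article:

```python
import math

def step(x_k, y_k, v, theta, dt):
    """One step of the constant-speed motion model:
    x_{k+1} = x_k + v*cos(theta)*dt,  y_{k+1} = y_k + v*sin(theta)*dt."""
    x_next = x_k + v * math.cos(theta) * dt
    y_next = y_k + v * math.sin(theta) * dt
    return x_next, y_next
```

For example, a robot at the origin heading along the X axis (θ = 0) at v = 0.2 advances to (0.2, 0) after one 1-second step.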

A novel edge gradient algorithm
In this section, a novel edge gradient algorithm is proposed for cooperative mapping of multiple mobile robots in an unknown environment. The algorithm enables the robots to construct the environment map by collecting the relative coordinate information of obstacles while exploring the unknown environment. A virtual coordinate axis is constructed to enable the robots to distinguish between an obstacle and the edge of an obstacle, which plays an auxiliary role in constructing the map of the unknown environment. Also, the visual field factor is constructed based on biological principles. 30 It is worth mentioning that the field of view of the robot can be adjusted adaptively by the developed visual field factor. We divide the virtual coordinate system into four quadrants from the starting point. Each robot starts to move along the X axis or the Y axis. During the construction of the environment map, the robot responsible for edge exploration maintains a safe distance from the edge of the obstacle, while the robot for obstacle exploration collects the detected nonedge obstacle information. The edge gradient algorithm can be divided into the following four behaviors: adjusting the direction of motion, evaluating the safety of motion behavior, following behavior, and obstacle information exchange.

Adjusting the movement direction
The behavior of adjusting the movement direction occurs only in the following two situations. One is that the next state of the current motion is no longer safe: the robot may hit an obstacle, or the distance between the robot and the obstacle falls below the safety distance. The other is that the obstacle is lost from the field of view of the mobile robot. Therefore, we developed four motion direction adjustment strategies for θ = 0, θ = π or −π, θ = π/2, and θ = −π/2, as shown in Figure 2.
In Figure 2(a) to (d), sgn_A and sgn_B represent the right and left sides of the moving direction of the mobile robot, respectively. A₂A₁ and B₂B₁ denote the original motion directions, and A₂′A₁′ and B₂′B₁′ are the motion directions after adjustment. Suppose the coordinates of an obstacle are p_k(x_k, y_k) and p_{k+1}(x_{k+1}, y_{k+1}) at times k and k + 1, respectively. The vertical distances between the obstacle and the direction of robot motion are d_1 and d_2. When d_1 > d_2, the robot is constantly approaching the obstacle and there is a possibility of collision; when d_1 < d_2, the situation is reversed.
Therefore, a flag bit sgn is defined according to the comparison between d_1 and d_2. On this basis, we design the motion direction adjustment function in equation (5), where sgn_flexible = −1 when the obstacle is on the left side of the movement direction and sgn_flexible = 1 otherwise.
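Since the definitions of the flag bit and equation (5) are not reproduced here, the following is only a plausible sketch of the distance comparison described above: d_1 and d_2 are computed as perpendicular distances from the successive obstacle points to the robot's line of motion, and the flag is set by comparing them. The function names and the exact sign convention are our assumptions, not the article's:

```python
import math

def perp_distance(px, py, rx, ry, theta):
    """Perpendicular distance from obstacle point (px, py) to the line
    through robot position (rx, ry) along heading theta."""
    dx, dy = px - rx, py - ry
    # magnitude of the component of (dx, dy) orthogonal to the heading
    return abs(-math.sin(theta) * dx + math.cos(theta) * dy)

def approach_flag(d1, d2):
    """Flag bit: 1 when successive distances shrink (d1 > d2, robot
    approaching the obstacle, collision possible), -1 otherwise.
    The sign convention is assumed for illustration."""
    return 1 if d1 > d2 else -1
```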

Evaluating the safety of motion behavior
The mobile robot uses the infrared sensor and the ultrasonic sensor to determine the distance between its current position and the obstacles within the field of view. At the same time, the robot predicts its position along the movement direction at the next moment to determine whether the distance between that position and the obstacle is safe. If it is not safe, the robot adjusts the angle of movement until the obstacle to be followed is again within the field of view. The flow chart for evaluating the safety of motion behavior is shown in Figure 3, where DistBarrier and save_level denote the distance to obstacles and the safety distance coefficient, respectively.
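A minimal sketch of this safety evaluation, assuming a Euclidean distance test against DistBarrier scaled by save_level and a fixed rotation increment (both the threshold form and the increment are our assumptions):

```python
import math

def is_safe(x, y, obstacles, dist_barrier, save_level):
    """A position is safe if every obstacle is farther away than
    dist_barrier * save_level (assumed threshold form)."""
    return all(math.hypot(ox - x, oy - y) > dist_barrier * save_level
               for ox, oy in obstacles)

def adjust_heading(x, y, v, theta, dt, obstacles, dist_barrier,
                   save_level, dtheta=math.pi / 18):
    """Rotate the heading in small increments until the predicted
    next position is safe; gives up after a full turn."""
    for _ in range(int(2 * math.pi / dtheta) + 1):
        nx = x + v * math.cos(theta) * dt   # predicted next position
        ny = y + v * math.sin(theta) * dt
        if is_safe(nx, ny, obstacles, dist_barrier, save_level):
            return theta
        theta += dtheta
    return theta
```

For instance, a robot heading straight at an obstacle rotates away until the predicted position clears the safety threshold.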

Following behavior
The basic principle of the following behavior is that the mobile robot keeps the target obstacle in its field of view by adjusting the direction of motion and evaluating the safety of motion behavior. If the robot does not detect any obstacle information, it moves along the X or Y axis of the coordinate system. When an obstacle is detected, the robot judges whether it is an obstacle edge or an obstacle inside the edge according to its position on the coordinate axes. At the same time, the incremental position coordinates (Δx, Δy) are constructed to determine whether the intersection between obstacles is within the field of view; if it is, it is stored as effective information.
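The field-of-view membership test underlying the following behavior can be sketched as below, assuming a simple sector-shaped field of view of fixed half-angle and radius; the article's actual model, which adapts the field of view through the visual field factor, is not reproduced here:

```python
import math

def in_field_of_view(rx, ry, theta, px, py, fov_angle, fov_range):
    """Check whether point (px, py) lies inside a sector field of view
    centered on heading theta, with full angle fov_angle and radius
    fov_range (an assumed, simplified FOV model)."""
    dx, dy = px - rx, py - ry
    if math.hypot(dx, dy) > fov_range:
        return False
    bearing = math.atan2(dy, dx) - theta
    # wrap the relative bearing into [-pi, pi]
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
    return abs(bearing) <= fov_angle / 2
```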

Obstacle information exchange
Multiagent robots are numbered from agent 1 to agent 8. Agents with odd numbers (i.e. agents 1, 3, 5, and 7) perform unknown-environment edge exploration tasks, and agents with even numbers (i.e. agents 2, 4, 6, and 8) perform internal obstacle bypass tasks. To quickly complete the map construction task for the four quadrants of the plane coordinate system, the agents are divided into four groups. The agent-robot responsible for exploring the edge of an obstacle is prone to detect obstacle information in other quadrants through its larger field of view; this information has no effect on its own task, so it is stored separately to maximize the efficiency of information usage. After receiving obstacle information, the robot queries the existing obstacle set and discards the information if it already exists; otherwise, the information is saved in a new obstacle set. Then, the obstacle information is sent to the corresponding quadrant and a circular area of radius r is generated, which serves as a landmark to assist path generation for the robot. When the robot completes the obstacle bypass and returns to the starting position, if all the circular areas of the received obstacle landmarks have been passed, the path is completely correct; otherwise, there is still obstacle information that has not been collected and the task needs to be executed again. The flow diagram of the presented edge gradient algorithm is shown in Figure 4.
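The exchange-and-deduplication step described above can be sketched as follows; the class interface, the rounding tolerance for duplicate detection, and the quadrant convention are our assumptions:

```python
def quadrant(x, y):
    """Quadrant index (1..4) of a point in the virtual coordinate system
    (boundary points assigned by convention)."""
    if x >= 0 and y >= 0:
        return 1
    if x < 0 and y >= 0:
        return 2
    if x < 0:
        return 3
    return 4

class ObstacleExchange:
    """Sketch of the exchange step: each received obstacle coordinate is
    discarded if already known, otherwise stored and routed to the group
    responsible for its quadrant."""

    def __init__(self):
        self.known = set()
        self.by_quadrant = {1: [], 2: [], 3: [], 4: []}

    def receive(self, x, y):
        key = (round(x, 3), round(y, 3))  # assumed duplicate tolerance
        if key in self.known:
            return False                  # duplicate: discard
        self.known.add(key)
        self.by_quadrant[quadrant(x, y)].append((x, y))
        return True                       # new obstacle: saved and routed
```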

Simulation and experiment results
In this section, we evaluate the performance of the proposed edge gradient algorithm by simulation analysis and real environment experiments.


Simulation results
Simulation experiments for multiple mobile robot cooperative mapping based on the edge gradient algorithm are given for three different unknown environments. Figure 5 shows three obstacle environments with different shapes, constructed by the linear programming method. Among them, only one obstacle is placed in each quadrant in Figure 5(a) and (b), so these simulation environments are relatively simple. Compared with Figure 5(a) and (b), a more complex environment is considered in Figure 5(c). Eight mobile robots labeled agent i (i = 1, 2, …, 8) are used for map construction. Suppose the speed of each agent-robot is v = 0.2 cm/s. The intersection point between an obstacle and the field of view is used as a landmark to generate a circle of radius r = 1. Figure 6 demonstrates the motion trajectories of the eight mobile robots during map construction. The final map construction results in the three simulation environments are shown in Figure 7. As shown in Figure 7(a) to (c), the presented edge gradient algorithm can build highly precise maps in unknown environments with complex concave or convex polygonal obstacles.
To demonstrate the superiority of the presented algorithm, simulation comparison results are given between our algorithm and the same algorithm without the visual field factor. The reference algorithm used for comparison does not consider the field of view in the process of map construction, whereas the visual field factor developed in our algorithm improves the search efficiency. Figure 8(a) shows the constructed simulation environment; Figure 8(b) and (c) show the motion trajectories of the robots using the mapping algorithm without the visual field factor and the presented algorithm, respectively. In the second quadrant, we can clearly see that the path obtained by our method is shorter than that of the mapping algorithm without the visual field factor. In this article, the visual field factor is developed based on biological principles: the mobile robots have a larger field of view when moving away from obstacles, which improves the search efficiency of the algorithm. Further, performance comparison results for the two algorithms are given in Table 1, from which we can clearly see that the developed visual field factor improves the search efficiency.

Experiment results
In the real-environment experiments, the practicability and rationality of the edge gradient algorithm are demonstrated with a single robot. A TurtleBot, a robot operating system (ROS) standard platform robot, is used to test the performance of the presented method in an unknown environment. As shown in Figure 9, the TurtleBot is equipped with a 360° lidar for SLAM and navigation, a single-board computer, an open-source control module for ROS (OpenCR), and sprocket wheels for tires and caterpillar tracks.
As shown in Figure 10, the novel edge gradient algorithm is tested in two different real environments. Figure 10(a) is an environment with carton obstacles and Figure 10(b) is a laboratory environment with some desks and chairs. The experimental parameters are the same as the multiple mobile robot parameter settings in the previous section. Obstacle data in the two real environments are acquired by the sensors of the TurtleBot. Reconstructed maps are obtained with the ROS visualization (RVIZ) platform. The obstacle contour lines and the robot paths can be seen in Figure 11. The fan-shaped areas with arrows in the figure are formed by adjusting the angle of motion. The mobile robot gradually accumulates odometry errors due to actuator errors. Therefore, the motion path of the mobile robot is not a straight line and the robot has to adjust its direction of motion many times, which differs from the simulation results obtained with the MATLAB 7.0 software. In the experiment, based on the principle of the extended Kalman filter, the position and attitude of the mobile robot are tracked according to the environmental map information, and real-time feedback compensation for the accumulated errors is realized.

Conclusions
This article focuses on the problem of exploring an unknown environment and constructing a map with multiple mobile robots. A novel edge gradient algorithm is proposed to realize cooperative mapping of multiple mobile robots in an unknown indoor environment. The proposed algorithm with four different behaviors can effectively prevent multiple mobile robots from falling into concave obstacle areas. To make the map-building data more accurate, a visual field factor is constructed based on biological principles. Simulation and experimental results in real environments verify the feasibility of the proposed edge gradient algorithm as an online cooperative mapping and positioning method for multiple mobile robots.