Leader–follower formation with off-tracking reduction and velocity estimation under visibility constraints

This article addresses the time-varying leader–follower formation control problem for nonholonomic mobile robots under communication and visibility constraints. Although leader–follower formation control under visibility constraints has been studied, the elimination of the off-tracking effect has not yet been widely addressed. In this work, a new method to eliminate the off-tracking effect is proposed, treating the time-invariant formation as a tractor–trailer system, for unknown circular tractor paths and taking the visibility constraints into account. For a time-varying formation with a non-circular tractor path, the proposed method significantly reduces the off-tracking. Only the relative position and the relative orientation, provided by the on-board monocular camera, are required; thus, neither the leader robot's absolute position nor its velocities are needed. Furthermore, to avoid explicit communication among the robots, an extended state observer is implemented to estimate both the translational and the rotational velocity of the leader. In this way, the desired tasks are executed and achieved in a decentralized manner. For a time-varying formation with constant leader velocities, the proposed control strategy, based on the kinematic model, guarantees that the formation errors converge asymptotically to the origin. The stability proof of the formation error dynamics is given based on Lyapunov theory. Simulation results, considering time-varying leader velocities, show the efficiency of the proposed scheme.


Introduction
The problem of convoy driving can be seen as a special case of group formation. 1,2 Military applications of convoy driving are the most obvious, where a given number of autonomous vehicles follow each other while keeping a safe constant distance. A vehicle convoy can be seen as an emulation of the multi-steered general n-trailer, with the difference that no physical link exists between the tractor/leader and the trailer/follower; instead, an additional error dynamics equation is introduced to virtually represent this physical constraint. 3 A tractor-trailer mobile robot (TTMR) is a mechanical system composed of a known number of trailers pulled by a tractor.
The off-tracking phenomenon is the major problem in TTMRs. The term refers to the deviation of the path of each articulated vehicle from the path of the preceding vehicle. 4 Reducing or eliminating off-tracking results in much improved safety during turns, cornering, overtaking other (especially small) cars, and backward motion. If both the tractor and the trailers can track an identical geometric path, the overall width of the system equals that of the tractor or the trailer alone. 5 The robot can then perform transport tasks in a narrow space, which is the most important issue when finding an obstacle-free path. One way to reduce off-tracking is to use the proportional navigation guidance law, more precisely deviated pursuit. The guidance laws used in Fethi and Boumediene 1 to model and control a robotic convoy are based on the geometry and kinematics equations between two successive robots. With deviated pursuit, the linear velocity of the follower robot is designed to keep a constant distance between robots, while the rotational velocity is designed so that the follower robot moves pointing toward the leader robot, with a small offset in the relative angle to increase the curvature radius of the follower robot's path. Since a constant value of that small offset is used, and a formal analysis is missing, the off-tracking effect is not correctly reduced. To eliminate the off-tracking effect, the approach proposed in Jing et al. 5 is to generate the desired full-state trajectory from a desired geometric path in the Cartesian plane, along a time-parameterized path. This technique is useful only when the desired path, given in the configuration space, is known, and the trajectory generation process becomes more complex for each desired path. One way to eliminate the off-tracking on physical trailers, for circular paths only, is by using the sliding kingpin mechanism, proposed in Deligiannis et al.
6 According to this technique, the kingpin hitch of the trailer slides in a direction perpendicular to the longitudinal axle of the tractor when the tractor-trailer turns. The displacement of the off-axle kingpin hitch along the rear axle increases the radius of curvature of the path that the trailer travels. Notice that implementing this technique requires a mechanical device, which increases the cost.
On the other hand, formation control approaches commonly assume that each robot in the formation can obtain accurate global position information from global positioning sensors. To address this issue, some researchers have focused on the use of alternative on-board sensors (e.g. laser, cameras, and sound-ranging technologies). Compared to other traditional sensors, visual cameras (monocular, stereo, and omnidirectional cameras, or the Kinect device) provide richer information at lower cost, making them a very popular option for formation control using only relative on-board sensing. 7 However, the practical drawbacks of incorporating additional sensors include increased cost, increased complexity, decreased reliability, and increased processing burden. 8 Taking these disadvantages into account, many authors have chosen visual servo strategies, using a monocular camera, that rely on analytic techniques to address the lack of depth information. Visual servoing is classified into two groups: position-based visual servoing (PBVS) and image-based visual servoing (IBVS). PBVS control methods 9-12 use three-dimensional scene information that is reconstructed from image information. That is, the camera acts as a "Cartesian sensor": pose estimation algorithms use camera data to generate an error signal in Cartesian space, which is then used in a feedback control law. The main advantage of the position-based approach is that the robot's trajectory is controlled directly in the Cartesian space using well-known path tracking techniques. As a drawback, certain trajectories defined in the Cartesian space can lead the target out of the field of view (FOV) of the camera. 13 In IBVS methods, 14-18 the controlled states are image features of the target; that is, the image data are used directly in the control loop.
The major drawback of these methods is that they can only regulate the pose of the camera with respect to a reference pose where some reference image was taken. Dani et al. 10 proposed a control law that requires only the knowledge of a single known length on the leader. The relative pose and the relative velocity are obtained using a geometric pose estimation technique and a nonlinear velocity estimation strategy, respectively, and a Lyapunov analysis indicates asymptotic tracking of the leader vehicle. Xinwu et al. 19 proposed a time-invariant leader-follower formation tracking control scheme in the image space for nonholonomic mobile robots with an on-board perspective camera. Measurements of the position and the translational velocity of the leader robot are not needed; an adaptive observer is used to estimate the linear velocity, and the stability proof shows that the system is stable. However, the relative orientation angle between mobile robots is obtained using a compass sensor. In Hasan et al., 12 a time-invariant, state-feedback control law that allows one differential-drive robot to follow another at a constant relative distance is presented. The proposed control law does not require measurement or estimation of the leader robot's velocity and has tunable parameters that allow one to prioritize the error bounds of either the relative polar angle or the relative orientation. In Dimitra and Vijay 20 and Xiaomei et al., 21 visibility constraints are defined to address the cooperative motion coordination of leader-follower formations of nonholonomic mobile robots in known polygonal obstacle environments. In Jie et al., 13 an adaptive image-based visual servoing control strategy is proposed following the prescribed performance control methodology: the leader-follower visual kinematics in the image plane and an error transformation with predefined performance specifications are presented. However, the off-tracking effect is not addressed.
In Qun et al., 22 a distributed leader-follower formation for nonholonomic mobile robots, using only local interactions among the robots, is proposed to solve the trajectory tracking problem. A distributed estimation strategy is presented for each follower robot to estimate the leader's states; since the dynamic model is used, the leader velocity is also considered as a state. In Fabio et al., 23 a method for estimating the relative distance between the leader and the followers by a reduced-order nonlinear observer is introduced. The leader robot's velocities are not estimated, since the velocities of both the leader and the follower robots are considered as the control inputs of the system; also, the off-tracking is not addressed. In Sida et al., 24 a formation controller based on a model predictive control scheme is proposed to assure the desired formation and the position consistency of the followers. Afterward, a dynamic controller based on an adaptive terminal sliding mode control scheme is developed for the leader to observe and then compensate the external disturbance as soon as possible. However, no vision strategy is addressed. González-Sierra and Aranda-Bricaire 4 proposed the emulation of the so-called kingpin mechanism to reduce the off-tracking effects exhibited by the standard and generalized n-trailer systems; both systems are emulated by a group of differential-drive mobile robots using the leader-follower scheme. It is assumed that the robots' absolute positions and the leader robot's velocity are known, and visibility constraints are not addressed.

Related works
The main advantage of this work is that the absolute posture of the leader robot, its velocities, and the knowledge of its path are not needed. Only a monocular camera is used to estimate, on board the follower robot, both the relative position and the relative orientation; thus, no additional sensor, such as an IMU or a compass, is necessary. In addition, an extended state observer is implemented to estimate both the translational and the rotational velocity of the leader robot. Therefore, the proposed time-varying leader-follower formation scheme is decentralized, computationally inexpensive, and simple to implement on board. The main difference of this work with respect to related works on vision-based leader-follower formation control is that the off-tracking effect is also significantly reduced for a time-varying formation. Furthermore, communication constraints are considered, since a velocity estimator of the leader robot is implemented.
This article presents the following contributions. First, a new method to eliminate the off-tracking effect for a time-invariant formation, considering a circular leader path, is proposed. With this method, the off-tracking is significantly reduced for a time-varying formation or for time-varying leader velocities. To guarantee that the time-varying formation control problem can be solved, the minimum curvature radius of the leader robot's path is defined. The method also considers both the visibility and the communication constraints. Second, a control strategy is proposed to solve the time-varying formation control problem with reduction of the off-tracking effect, using only the relative measurements and the leader velocity estimates. Based on Lyapunov theory, the proposed control law guarantees that the formation errors converge asymptotically to the origin for the time-varying formation with a circular leader path. To our knowledge, the reduction of the off-tracking for a time-varying leader-follower formation of nonholonomic mobile robots under communication and visibility constraints, using relative position/orientation measurements only, had not been studied.
The article is organized as follows: In the second section, the vision-based algorithm is described. The third section presents the formation kinematics, the visibility constraints, the minimum curvature radius condition, and the strategy to reduce the off-tracking. In the fourth section, the observer to estimate the leader robot's velocities and the controller design are described. Simulation results are shown in the fifth section. Finally, some conclusions are mentioned in the sixth section.

Vision-based relative position/orientation reconstruction
In this section, the vision-based relative posture reconstruction algorithm is briefly described. For simplicity of notation, consider a single robot with an ideal, distortion-free perspective camera with reference frame {F_c} = [X_c, Y_c, Z_c]^T, and one black-and-white rectangular pattern. It is assumed that the width and height of the pattern (W, H) are known. The pattern's vertices are its four feature points, expressed as M_k = [X_k, Y_k, Z_k]^T, with k = 1, ..., 4, with respect to the camera frame. Based on the perspective projection model of the pinhole camera, the 3D points M_k are projected onto 2D points m_k through the optical center. Thus, all feature points in image-plane coordinates are expressed as m_k = [u_k, v_k]^T. All sides of the rectangular pattern are projected into the image plane, and from m_k, with k = 1, ..., 4, the projections of the two vertical lines in the image plane are obtained (see Figure 1). The lengths of the left and right vertical line projections, in pixels, are denoted Δh_l and Δh_r, respectively. As mentioned, the length of both vertical lines in the configuration space is H. Omitting the Y_c axis, consider the points P_jA and P_jB, which define the width of the pattern on the left and right sides, respectively (see Figure 2). Similar to Hasan et al., 12 from Thales' theorem, the distance of each vertical line of the pattern along the camera's optical axis (Z_c) is given by d_zz = H f k_z / Δh_z, with z = l, r for the left and right lines, respectively, where k_z is a scaling factor giving the number of pixels per unit distance in image coordinates [pixels/m], and f is the focal length [m]. The distances of the vertical line projections (d_zl, d_zr) correspond to the points P_jA and P_jB, respectively.
Also from Thales' theorem, the lateral offset of the points P_jA and P_jB in the configuration space, along the camera's X_c axis, is given by H_z = Δw_z d_zz / (f k_x), with z = l, r for the left and right points, respectively, where Δw_l and Δw_r are the distances, in pixels, of each vertical line along the horizontal axis (u_c) of the image plane, and k_x is the corresponding scaling factor [pixels/m]. Thus, the relative position of the pattern's centroid with respect to the camera frame is given by the midpoint of P_jA and P_jB, that is, d_cz = (d_zl + d_zr)/2 and d_cx = (H_l + H_r)/2. The relative orientation between the mobile robots is obtained from the knowledge of W, d_zl, and d_zr, that is, γ = asin(ΔZ/W), with ΔZ = d_zr − d_zl. Therefore, no additional sensor, such as an IMU or a compass, is required to estimate the relative orientation; in this work, only a perspective camera is used.
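As an illustration, the reconstruction above can be condensed into a short routine. The snippet below is a sketch under the pinhole model; the function and argument names are illustrative, and the pattern's centroid is assumed to be the midpoint of the two edge points:

```python
import math

def relative_pose(dh_l, dh_r, dw_l, dw_r, H, W, f, k_x, k_z):
    """Reconstruct relative depth, lateral offset, and orientation from
    pixel measurements of the pattern's two vertical edges.

    dh_l, dh_r: heights [pixels] of the left/right vertical edges.
    dw_l, dw_r: horizontal offsets [pixels] of each edge in the image.
    H, W: pattern height and width [m]; f: focal length [m];
    k_x, k_z: scale factors [pixels/m].
    """
    # Depth of each vertical edge along the optical axis Z_c (Thales' theorem).
    d_zl = H * f * k_z / dh_l
    d_zr = H * f * k_z / dh_r
    # Lateral offset of each edge along the camera's X_c axis.
    H_l = dw_l * d_zl / (f * k_x)
    H_r = dw_r * d_zr / (f * k_x)
    # Relative orientation from the depth difference across the pattern width.
    gamma = math.asin((d_zr - d_zl) / W)
    # Centroid taken as the midpoint of the two edge points (assumption).
    d_cz = 0.5 * (d_zl + d_zr)
    d_cx = 0.5 * (H_l + H_r)
    return d_cz, d_cx, gamma
```

When both edges project to the same pixel height, the pattern is fronto-parallel and the routine returns γ = 0, as expected from the asin formula.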
Similar to Gans et al., 9 the presented method is also distinguished by the fact that the knowledge of two geometric lengths is much less restrictive than complete geometric knowledge of all lengths. Furthermore, no initialization is required, and errors due to large motions are not propagated, because the measurements are computed from the current frame only.

Preliminaries and strategy to reduce the off-tracking
In this section, the leader-follower formation kinematics and the visibility constraints are described. Also, the measurement of the off-tracking, the minimum curvature radius condition, and the proposed strategy to reduce the off-tracking effect are presented.

Kinematics of the leader-follower formation
Let us consider a set of n unicycle mobile robots moving in a plane, r_1, ..., r_n, where r_i represents the ith nonholonomic unicycle mobile robot. The kinematics of the ith nonholonomic mobile robot is described by the following differential equations

ẋ_i = v_i cos(θ_i),  ẏ_i = v_i sin(θ_i),  θ̇_i = ω_i    (1)

The coordinates x_i(t) and y_i(t) describe the position of the rear-axle midpoint of the ith mobile robot, P_i = (x_i, y_i), with respect to the global coordinate frame {G} = [X_G, Y_G, Z_G]^T (see Figure 3). The orientation θ_i is the angle between the heading of the ith robot and the X_G axis of the fixed coordinate frame {G}. The translational and rotational velocities are given by v_i(t) and ω_i(t), respectively; these are the control inputs of the ith mobile robot. r_i (i = 1, ..., n − 1) is the follower robot of its local leader robot r_j (j = i + 1). r_n is the global leader robot, which has the knowledge of the desired trajectory or is assigned the task of exploring the environment. It is assumed that the camera's optical center is located at the midpoint of the local follower (P_i). The rectangular pattern is mounted on the leader robot such that the pattern's centroid coincides with the local leader robot's midpoint (P_j) (see Figure 3).
The representation of separation-bearing parameters is often used by many formation control approaches. In addition, in many formation control approaches for mobile robots, the relative position between the mobile robots is defined with respect to the leader robot's frame. As done in Xinwu et al. 19 and Consolini et al., 25 the position of the leader robot with respect to the follower robot's frame {F_i} = [X_i, Y_i, Z_i]^T is chosen to describe the relative position between a pair of leader-follower robots. Then, the formation states that describe a pair of leader-follower robots are given as χ_i = [ρ_i, ψ_i, γ_i]^T, with i = 1, ..., n − 1, where ρ_i is the relative distance between the midpoints P_i and P_j, ψ_i is the relative bearing angle, and γ_i is the relative orientation between the mobile robots. From Figure 3, one can see that the separation between the mobile robots and the bearing angle are given, respectively, with respect to the follower robot's frame, by

ρ_i = sqrt((d^c_iz)² + (d^c_ix)²)    (2)

ψ_i = atan2(d^c_ix, d^c_iz)    (3)

where atan2 is a two-argument computational function that calculates a four-quadrant arctangent with range [−π, π]. The quantities d^c_iz, d^c_ix, and γ_i are measured by the on-board camera of mobile robot i using the vision-based algorithm described in the second section. The formation states with respect to the global frame are given by

ρ_i = sqrt((x_j − x_i)² + (y_j − y_i)²)    (4)

ψ_i = atan2(y_j − y_i, x_j − x_i) − θ_i    (5)

Therefore, from Figure 3, it is implied that

γ_i = θ_j − θ_i    (6)

By differentiating (4) and (5) with respect to time, taking into account (1) and (6), and using the trigonometric identities cos(a ± b) = cos(a)cos(b) ∓ sin(a)sin(b) and sin(a ± b) = sin(a)cos(b) ± cos(a)sin(b), the kinematics of the leader-follower formation is given by

ρ̇_i = v_j cos(γ_i − ψ_i) − v_i cos(ψ_i)    (7)

ψ̇_i = [v_j sin(γ_i − ψ_i) + v_i sin(ψ_i)]/ρ_i − ω_i    (8)

γ̇_i = ω_j − ω_i    (9)

where i = 1, ..., n − 1 for the local follower robot, and j = i + 1 for the local leader robot. It is important to note that the leader-follower kinematics (7)-(9) requires only the relative position, the relative bearing angle, and the relative orientation angle; thus, neither the absolute positions nor the absolute orientations of the mobile robots are needed.
Furthermore, only the instantaneous velocities of the local leader robot are used, and the local leader robot's path is not required. Additionally, no velocity information is sent from the local leader robot to the local follower robot; to estimate the translational and rotational velocities of the local leader robot, an extended state observer is implemented. 26,27 Therefore, the proposed vision-based time-varying leader-follower formation scheme is decentralized. Both the measurement of the off-tracking effect and the proposed strategy to reduce it are addressed next.
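To make the relative model concrete, the following snippet evaluates one plausible form of the leader-follower kinematics (7)-(9). The sign conventions depend on how the frames and the bearing angle are defined, so this is an assumed reconstruction rather than the paper's verbatim model; names are illustrative:

```python
import math

def formation_rates(rho, psi, gamma, v_i, w_i, v_j, w_j):
    """Relative kinematics of a leader-follower pair (a sketch of (7)-(9)).

    rho: separation, psi: bearing, gamma: relative orientation;
    (v_i, w_i): follower velocities, (v_j, w_j): leader velocities.
    """
    rho_dot = v_j * math.cos(gamma - psi) - v_i * math.cos(psi)
    psi_dot = (v_j * math.sin(gamma - psi) + v_i * math.sin(psi)) / rho - w_i
    gamma_dot = w_j - w_i
    return rho_dot, psi_dot, gamma_dot
```

As a sanity check on the assumed conventions: when both robots travel the same circle of radius R with equal velocities, separated by the chord ρ = 2R sin(γ/2) with bearing ψ = γ/2, all three rates vanish.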

Measurement of the off-tracking
For a unicycle mobile robot, the instantaneous curvature radius of its path is given by

R(t) = v(t)/ω(t)

Remark 1. Notice that R(t) → ∞ when ω(t) → 0, which implies that the path is a straight line.

Off-tracking is defined as the deviation of the path of each vehicle from the path of the preceding vehicle. It has been pointed out that when the leader robot (r_j) travels along a circular path with constant radius R_j, the follower robot (r_i) travels along another circular path with radius R_i and the same center, where R_j > R_i, 28 considering a constant separation ρ_id. Complementarily, Jing et al. 5 proved that if the leader robot tracks a circular path with a constant curvature radius R_j, the motion path of the follower robot will converge to a concentric circle with a radius given by

R_iD = sqrt(R_j² − ρ_id²)    (10)

Note that R_iD is the minimum curvature radius of the follower robot when no action to reduce the off-tracking is implemented. Thus, similar to Jing et al., 5 Deligiannis et al., 6 and Bushnell et al., 28 the measure of the off-tracking of a pair of leader-follower robots is given by the difference between the curvature radii of the two mobile robots, that is,

d_iT = |R_j| − |R_i|    (11)

From (11), the maximum deviation, when no reduction action is implemented, is given by Δ_iT = |R_j| − |R_iD|. Therefore, the off-tracking is eliminated or reduced, respectively, when the following conditions are satisfied

d_iT = 0    (12)

|d_iT| < Δ_iT    (13)
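The off-tracking measure can be sketched numerically. The closed form R_iD = sqrt(R_j² − ρ_id²) used below is a common result for pursuit at constant separation and is assumed here; the function name is illustrative:

```python
import math

def off_tracking_measure(v_j, w_j, rho_id):
    """Off-tracking d_iT between a leader on radius R_j = v_j/w_j and a
    follower that converges to a concentric circle at separation rho_id.

    Assumes R_iD = sqrt(R_j^2 - rho_id^2), valid for |R_j| > rho_id.
    """
    if w_j == 0.0:
        return 0.0  # straight-line motion: no off-tracking
    R_j = v_j / w_j
    R_iD = math.sqrt(R_j ** 2 - rho_id ** 2)
    return abs(R_j) - abs(R_iD)
```

For example, a leader turning with v_j = 1 m/s and ω_j = 0.5 rad/s (R_j = 2 m) and a separation of 1.2 m yields a follower radius of 1.6 m, that is, 0.4 m of off-tracking when no reduction action is taken.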

Modeling of the visibility constraints
In Dimitra and Vijay 20 and Xiaomei et al., 21 the visibility constraints are introduced. It is assumed that each follower robot is equipped with a fixed on-board camera of limited angle of view 2α < π. Furthermore, it is assumed that the local follower robot can reliably detect objects that lie within a limited region with respect to the forward-looking direction. The limited sensing region is modeled as a triangle of view for the follower, which is essentially an isosceles triangle in obstacle-free environments. Consequently, the follower's camera has a visual detection range such that the leader robot can be detected if and only if it is within the triangle of view, that is, if the following visibility constraints are satisfied

|ψ_i| ≤ α,  0 < ρ_i ≤ L_max    (14)

where L_max is the length of the equal sides of the triangle of view (see Figure 4). It is important to note that, in the original work, neither the relative orientation (γ_i) nor the minimum separation between the mobile robots is taken into account. The main problem with this is that the visibility constraints in (14) may be satisfied even when the relative orientation is ill-defined, for example, when |γ_i| = π/2 and ψ_i = 0, since the pinhole camera model is used (see Figure 4). Thus, the γ_i state adds an extra degree of freedom to the visibility constraints.
Therefore, in this work, visibility constraints are modified such as, furthermore, the size of the mobile robots and the maximum relative orientation are taken into account. Notice that the modifications in the visibility constraints are made not only for the proposed vision-based algorithm but also for any vision-based strategy that uses a perspective camera.
Assume that, for all mobile robots, the camera's FOV is given by 2α < π, L_max is the length of the equal sides of the triangle of view, and W is the width of the leader's pattern. Since an ideal pinhole camera model is used, notice that a singularity of the γ_i state occurs when γ_i = π/2 with ψ_i = 0, which implies that the three points (P_jA, P_j, P_jB) are aligned. Hence, to avoid this singularity, the maximum relative orientation must be chosen as γ_iM < π/2.
The minimum separation between a pair of mobile robots takes into account both the minimum separation under visibility constraints and the size of the mobile robots. Thus, the minimum separation between a pair of leader-follower robots, ρ_im, is given by equation (15), where L_min is the length of another, smaller triangle of view and S_k > W/2 is the radius of the security zone of the mobile robots. The distance L_min cos(α) > W is the minimum distance along the X_i axis that guarantees that all three points (P_jA, P_j, P_jB) are within the limited sensing region for all γ_i ∈ [0, γ_iM]. The maximum separation between a pair of mobile robots, ρ_iM, is given, similarly to (14), by equation (16). Therefore, in this work, the leader robot's pattern can be detected if and only if the following visibility constraints are satisfied

|ψ_i| ≤ α,  |γ_i| ≤ γ_iM,  ρ_im ≤ ρ_i ≤ ρ_iM    (17)

where ρ_im and ρ_iM are given by (15) and (16), respectively. Hence, the three states of the system, χ_i = [ρ_i, ψ_i, γ_i]^T, are well defined in the domain

D_i = {χ_i | |ψ_i| ≤ α, |γ_i| ≤ γ_iM, ρ_im ≤ ρ_i ≤ ρ_iM}    (18)
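A predicate implementing the modified visibility constraints (17) is straightforward; this sketch assumes the bounds are passed in as precomputed values (the function name is illustrative):

```python
import math

def pattern_visible(rho, psi, gamma, alpha, gamma_max, rho_min, rho_max):
    """Check the modified visibility constraints (17): bearing within the
    half field-of-view, relative orientation below its singularity bound,
    and separation inside [rho_min, rho_max]."""
    return (abs(psi) <= alpha
            and abs(gamma) <= gamma_max
            and rho_min <= rho <= rho_max)
```

A controller or supervisor can evaluate this predicate at each sampling instant and declare the pattern lost as soon as any of the three conditions fails.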

Minimum curvature radius condition
Let us now consider the steady state of a time-invariant pair of leader-follower robots: both the leader robot and the follower robot move along the same circular path, defined by the constant curvature radius R_j, with a desired constant separation, denoted ρ_id. This steady state is depicted in Figure 5. From the law of cosines, the desired relative orientation is given by

γ_id = acos(1 − ρ_id²/(2R_j²))    (19)

According to the acos function, omitting the case ρ_id = 0, which is impossible in a real application, and without considering visibility constraints yet, the variable γ_id is well defined in the domain D = {ρ_id ∈ ℝ⁺ | 0 < ρ_id ≤ 2R_j}, and its range is given by I = {γ_id ∈ S¹ | 0 < γ_id ≤ π}. Since ρ_id is a given value, both the curvature radius of the local leader robot's path and the desired relative orientation are restricted by the desired separation between the mobile robots.
Due to the visibility constraints, to guarantee that the formation control problem can be achieved, the leader robot's path is restricted by the minimum curvature radius condition. To define this condition taking into account the visibility constraints, substitute γ_id and R_j in Figure 5 by γ_iM and R_jmin, respectively. From the law of cosines, ρ_id² = 2R²_jmin − 2R²_jmin cos(γ_iM), and using the trigonometric identity sin²(a/2) = (1/2)[1 − cos(a)], the minimum curvature radius of the leader robot's path is given by

R_jmin = ρ_id / (2 sin(γ_iM/2))    (20)

To solve the formation control problem considering the reduction of the off-tracking effect, the instantaneous curvature radius of the leader robot's path (R_j) must satisfy the following condition

|R_j(t)| ≥ R_jmin    (21)

Remark 2. In this work, to guarantee that the time-varying formation control problem, with reduction of the off-tracking effect, can be solved, the minimum curvature radius of the local leader robot's path is defined. Visibility constraints only guarantee that the local leader's pattern can be detected by the local follower's camera; the minimum curvature radius condition guarantees that the pair of mobile robots can fit on the leader's circular path. Otherwise, the off-tracking effect cannot be eliminated for a time-invariant formation in the steady state, even using absolute positions.
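The minimum radius condition reduces to one line of arithmetic. The sketch below also verifies, via the law of cosines, that at exactly the minimum radius the steady-state relative orientation equals the bound γ_iM (the function name is illustrative):

```python
import math

def min_leader_radius(rho_id, gamma_max):
    """Minimum curvature radius of the leader path so that a pair at
    separation rho_id fits on the circle with |gamma_id| <= gamma_max.

    From rho_id^2 = 2 R^2 (1 - cos(gamma_max)) = 4 R^2 sin^2(gamma_max / 2).
    """
    return rho_id / (2.0 * math.sin(gamma_max / 2.0))
```

With ρ_id = 1 m and γ_iM = π/2, this gives R_jmin = 1/√2 ≈ 0.707 m; substituting back into the law-of-cosines expression for γ_id recovers π/2.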

Proposed strategy to reduce the off-tracking
For simplicity of notation, consider the auxiliary variable q_id = γ_id − ψ_id, as shown in Figure 5. In the steady state, the translational velocity of each mobile robot is tangent to the circular path; thus, from the geometry of Figure 5,

ψ_id = γ_id/2    (22)

Remark 3. Notice that there are two problems if equations (22) and (19) are used to compute ψ_id: (1) the desired curvature radius of the local leader robot (R_jd) is required, and thus the local leader robot's path is needed, either parameterized by time or in the configuration space; (2) otherwise, using the instantaneous curvature radius of the local leader robot (R_j(t)) in (19), the desired bearing angle (ψ_id) can be obtained, but computing the corresponding time derivative (ψ̇_id) requires the time derivatives of the local leader robot's velocities (v̇_j, ω̇_j), which cannot be estimated by the observer. Then, to avoid the use of both the knowledge of the desired local leader robot's path and the time derivatives (v̇_j, ω̇_j), in this work it is proposed not to use the desired relative orientation of the steady-state case (19). Instead, the desired relative bearing angle (ψ_id) is chosen as a function of the instantaneous relative orientation between the mobile robots (γ_i).
From (22), the desired bearing angle and its time derivative are chosen as

ψ_id = γ_i/2,  ψ̇_id = γ̇_i/2    (23)

The formation errors of a pair of mobile robots are given by

e_ρi = ρ_i − ρ_id,  e_ψi = ψ_i − ψ_id    (24)

where ψ_id is the corresponding desired value of ψ_i, e_ρi is the separation error, and e_ψi is the bearing angle error. Notice that, since the desired values of the states, ρ_id and ψ_id, are time-varying, the formation control problem is translated into a pure tracking control problem. The considered input vector of a pair of mobile robots is u_i = [v_i, ω_i]^T, with output vector y_i = [ρ_i, ψ_i]^T. Furthermore, the nonholonomic nature of the system makes it impossible to control all three states continuously. 12 The system (7)-(9) has a vector relative degree [r_1, r_2]^T = [1, 1]^T. Since r = r_1 + r_2 = 2, the system has a first-order internal dynamics associated with the relative orientation, γ_i. Thus, the proposed control law guarantees that the remaining state (γ_i) stays bounded. Additionally, if the global leader robot follows a desired trajectory, then the trajectory tracking problem of the time-varying formation is solved as well.
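The error computation can be sketched as follows. Taking the bearing reference as half the instantaneous relative orientation is an assumed form of the steady-state choice described above, and the error sign convention (actual minus desired) is likewise an assumption:

```python
def formation_errors(rho, psi, gamma, rho_id):
    """Tracking errors with the bearing reference taken as half the
    instantaneous relative orientation (an assumed form of the
    steady-state choice)."""
    psi_id = gamma / 2.0       # desired bearing from current gamma
    e_rho = rho - rho_id       # separation error
    e_psi = psi - psi_id       # bearing error
    return e_rho, e_psi
```

On the steady-state circle, where the measured bearing already equals half the relative orientation, the bearing error vanishes and only the separation error remains to be regulated.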
In this work, the following assumptions are considered:

Assumption 1. The desired separation between a pair of mobile robots, ρ_id(t), is C¹ and bounded; thus, its time derivative is also continuous and bounded, that is, |ρ̇_id(t)| ≤ ρ̇_iM, with i = 1, ..., n − 1, where ρ̇_iM ∈ ℝ⁺.
Assumption 3. The global leader robot is able to move in an obstacle-free environment. Furthermore, its velocities (v_n, ω_n) are constant, with v_n strictly positive for all t ≥ 0, and are chosen arbitrarily such that the minimum curvature radius condition (21) is satisfied.
Assumption 5. Each local follower can reliably detect its local leader, which lies within a limited region with respect to the forward-looking direction.

Problem statement. Given instantaneous global leader robot velocities (v_n, ω_n) that satisfy (21), design a proper observer to estimate the local leader robot's velocities (v_j, ω_j, with j = 2, ..., n) and a controller (u_i = [v_i, ω_i]^T, with i = 1, ..., n − 1), using only feedback from the camera, such that the formation errors (24) asymptotically converge to the origin, considering the time-varying formation with constant local leader velocities. Furthermore, the observer and the control law must guarantee that the visibility constraints (17) are satisfied for all t ≥ 0, which implies that the relative orientation dynamics (9) also remains bounded for all t ≥ 0. Additionally, the off-tracking must be eliminated for a time-invariant formation considering a circular leader path.

Remark 4.
By using only the instantaneous local leader robot velocities, in the transient state of a time-invariant formation, that is, for a time-varying curvature radius of the leader robot, the local follower robot will try to converge to the instantaneous curvature radius of the leader, neglecting the path already traveled by the leader robot, since the proposed method to eliminate the off-tracking is based on the steady state and the local leader robot's path is omitted. Notice that γ_i is assumed to be smooth, since it is given by the mobile robots' orientations. Therefore, the measure of the off-tracking (|d_iT|) depends on the rate of change of the curvature radius of the local leader robot, the leader robot's accelerations, the desired separation between robots, and its time derivative. With this method, for a time-invariant formation, the off-tracking effect is eliminated in the steady state; in the transient state, and for a time-varying formation, the off-tracking effect is significantly reduced.

Observer and controller design
In this section, the extended state observer (ESO), implemented to estimate the local leader robot's velocities, is described. Also, the proposed control law to solve the time-varying formation control problem with reduction of the off-tracking is presented.

Extended state observer
The ESO was first proposed in the context of active disturbance rejection control (ADRC). 26,27 The main ability of the ESO is to estimate both the internal dynamics and the external disturbances of the plant. Thus, in this work, an ESO is implemented to estimate the translational and rotational velocities of the local leader robot. Now, consider the 2(n − 1) differential equations given by the SISO first-order dynamics (7) and (9), rewritten in perturbed form as (25) and (26), where x_i(t) = v_j(t)cos(γ_i − α_i) is the external unknown disturbance of (25) and z_i(t) = ω_j(t) is the unknown disturbance of (26). Hence, v_i and ω_i are the inputs of each subsystem, respectively, with r_i and γ_i as the outputs, respectively. Notice that, since the order of each subsystem is 1, for each subsystem the state and the output are the same variable. Since a state observer is implemented, let r̂_i and γ̂_i be the estimations of the states r_i and γ_i, respectively. Furthermore, let ê_ri = r_i − r̂_i and ê_γi = γ_i − γ̂_i be the state estimation errors (27). Any state observer will estimate the state and the external disturbance, since the latter is now a state in the extended state model. Such an observer is known as an ESO. A particular ESO of (25) and (26) is given by (28)-(29) and (30), respectively, where x̂_i and ẑ_i, with i = 1, ..., n − 1, are the estimations of the corresponding disturbances, and l_0i, l_1i > 0 and k_0i, k_1i > 0 are the observer gains to be chosen. From the estimation of the disturbances, both the translational and the rotational velocity of the local leader robot can be obtained, that is

v̂_j = x̂_i / cos(γ_i − α_i),  ω̂_j = ẑ_i (31)

Then, the velocities estimation errors are given by Δv_j = v_j − v̂_j and Δω_j = ω_j − ω̂_j (32). By differentiating the state estimation error of r_i and substituting (7) and (28), yields

dê_ri/dt = x_i − x̂_i − l_1i ê_ri (33)

By differentiating (33) once, and using (29), the second-order differential equation

d²ê_ri/dt² + l_1i dê_ri/dt + l_0i ê_ri = dx_i/dt (34)

is obtained. The other state estimation error (ê_γi) is treated in a similar manner.
In the frequency domain, the relationships between the disturbances and the state estimation errors are given by

ê_ri(s) = [s / (s² + l_1i s + l_0i)] x_i(s),  ê_γi(s) = [s / (s² + k_1i s + k_0i)] z_i(s) (35)

where the gains l_0i, l_1i and k_0i, k_1i, with i = 1, ..., n − 1, are selected such that the corresponding characteristic polynomial is Hurwitz. Then, the estimation errors are bounded by the time derivatives of the disturbances.

Controller design
Similar to Ricardo et al. 11 and Hasan et al., 29 the proposed control law is based on a control strategy which linearizes the dynamics of the variables (r_i, α_i). Furthermore, the control law also takes into account their time derivatives (ṙ_id, α̇_id) and the proposed method to reduce the off-tracking, described in section "Proposed strategy to reduce the off-tracking." Thus, the translational and rotational velocities of the local follower robot are given by (36) and (37), respectively, with f_ri(e_ri) = K_vi tanh(K_pi e_ri), where K_vi, K_pi, K_ωi, K_αi, with i = 1, ..., n − 1, are positive gains and the pair (v̂_j, ω̂_j) is given by the ESO. Notice that, unlike related works, since the reduction of the off-tracking is addressed, the proposed control law (36)-(37) requires not only the estimation of the local leader's translational velocity v̂_j but also the estimation of the local leader's rotational velocity ω̂_j.
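The only piece of (36) reproduced in the text is the saturated proportional term f_ri(e_ri) = K_vi tanh(K_pi e_ri). A short sketch (gain values hypothetical) illustrates its key property: the commanded correction is always bounded by K_vi, while behaving like a linear gain K_vi K_pi for small errors.

```python
import math

def f_r(e_r, K_v=0.5, K_p=2.0):
    # Saturated proportional term of the translational control law:
    # |f_r| < K_v for any error, and f_r ~ K_v*K_p*e_r for small errors.
    return K_v * math.tanh(K_p * e_r)

print(abs(f_r(0.01) - 0.01) < 1e-3)   # near-linear region
print(abs(f_r(100.0)) <= 0.5)         # saturated at K_v
```

This bound helps keep the follower's velocity commands within actuator limits and supports satisfying the visibility constraints during transients.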
It is important to note that the proposed controller requires both the states of the system, x_i = [r_i, α_i, γ_i]^T, and the local leader robot's velocities, (v_j, ω_j). In this work, a vision-based algorithm is implemented to compute the states of the system; thus, any other method to obtain the relative distance between the mobile robots, the relative bearing angle, and the relative orientation can be used with the proposed control law. Furthermore, an ESO is implemented to estimate the local leader robot's velocities; any other method to obtain the leader robot's velocities can be used. Also, for simplicity, explicit communication may be used, that is, the leader robot may send its velocity information to its follower robot. Notice also that another control law could be implemented, for example, the one in Dongyu et al., 30 where all the elements of the agent states reach the formation at the same time.
By differentiating the formation errors (24) once, and taking into account the proposed assignment (23), the error dynamics (39) and (40) are obtained. By substituting the proposed control signals (36)-(37) into (39) and (40), the time derivative of each formation error, in closed loop, is given by (41) and (42). The necessary and sufficient conditions to solve the stated problem, using the proposed control law with the ESO, are summarized in the following lemma.

Lemma 1. Consider n unicycle mobile robots (2), which compose n − 1 pairs of leader-follower formations (7)-(9), take into account Assumptions 1-5, and assume that a time-varying formation is required. Then, the ESO (28)-(30) with the control law (36)-(37), with l_0i, l_1i, k_0i, k_1i, K_vi, K_pi, K_ωi, K_αi > 0, where i = 1, ..., n − 1, solves the problem statement, that is, lim_{t→∞} e_ri → 0 and lim_{t→∞} e_αi → 0.
Proof. Consider the Lyapunov function candidate V(e_ri, e_αi) = (1/2)e_ri² + (1/2)e_αi² = V_r + V_α, with i = 1, ..., n − 1. Substituting (41) and (42), its time derivative is obtained, with j = i + 1. By using the ESO, under Assumption 3 (constant leader robot's velocities), and according to (35), the velocities estimation errors (Δv_j, Δω_j) converge to zero after a transient state. Therefore, both V̇_r and V̇_α are negative definite. Hence, when a circular leader robot's path is required, even for a time-varying formation, the formation errors (e_ri, e_αi), with i = 1, ..., n − 1, asymptotically converge to the origin. Therefore, the control law ensures that the states of the system (r_i, α_i) converge to their desired values (r_id, α_id), either time-varying or time-invariant, which are defined by (17). Thus, the proof is concluded. ∎

Remark 5. Since a time-varying formation is addressed (r_im ≤ r_id(t) ≤ r_iM), from (17), the γ_i state remains bounded (|γ_i(t)| ≤ γ_iM). Hence, the equilibrium point of the relative orientation (γ_i) cannot be obtained. However, the convergence of the control inputs of each local follower, when e_ri → 0 and e_αi → 0, can be obtained by replacing v̂_j = v_j and α_i = α_id = γ_i/2, according to (23), in the proposed controller (36). Thus, the translational control input v_i(t) is given by (45). Notice that, when following a straight line (γ_i = 0), the local follower's translational velocity is given by the sum of the local leader's translational velocity (v_j) and the time derivative of the desired separation (ṙ_id), as expected. Now, according to the proposed controller (37), replacing r_i = r_id, ω̂_j = ω_j, and using (45), the rotational control input ω_i(t) is given by (46). Each particular case is discussed next. Consider the time-invariant formation problem in the steady state, that is, ṙ_id = 0. Thus, lim_{t→∞} e_ri = 0 and lim_{t→∞} e_αi = 0, for all i = 1, ..., n − 1.
In this case, the relative orientation's equilibrium (γ̄_i) can be obtained. Since ṙ_id = 0, from (45), v_i → v_j, independently of the curvature radius of the local leader robot. From (9), since γ̇_i = 0, ω_i → ω_j. Furthermore, from (46), the control input ω_i converges to (47). By substituting (47) into (9), (48) is obtained. Notice that (48) can be rewritten in a manner similar to (20), that is, R_j = r_id/[2 sin(γ̄_i/2)]. Hence, the relative orientation's equilibrium is given by (49). The measure of the off-tracking of a time-invariant leader-follower formation is given by (12). Since v_i → v_j and ω_i → ω_j, it follows that d_iT = |R_j| − |R_i| = 0. Therefore, the off-tracking is eliminated for this particular case.
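The steady-state relation R_j = r_id/[2 sin(γ̄_i/2)] can be inverted to compute the equilibrium relative orientation directly. The separation and curvature radius below are illustrative assumptions, not values taken from the article:

```python
import math

def gamma_eq(r_id, R_j):
    # Invert R_j = r_id / (2*sin(gamma/2)) for the equilibrium orientation (49).
    return 2.0 * math.asin(r_id / (2.0 * abs(R_j)))

# Illustrative values: separation 0.35 m, leader curvature radius 1.0 m
print(round(gamma_eq(0.35, 1.0), 4))  # 0.3518
```

With these (assumed) values the formula happens to reproduce γ ≈ 0.3518 rad, the equilibrium reported for γ_2 in Simulation 1; the article's actual separation and radius may differ.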
Finally, from (10) and (49), the convergence to zero of all three variables (e_ri, e_αi, γ_i) is only possible if the local leader robot's path is a straight line, that is, ω_j → 0. Figure 6 shows the information flow of the proposed scheme.
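As a numerical illustration of the argument in the proof of Lemma 1, consider the scalar error dynamics that results once the velocity estimation errors have vanished, sketched here (with hypothetical gains) as ė = −K_v tanh(K_p e): the Lyapunov function V = e²/2 decreases monotonically and the error converges to the origin.

```python
import math

# Scalar sketch of the closed-loop formation error dynamics after the
# ESO transient: e' = -K_v * tanh(K_p * e). Gains are hypothetical.
K_v, K_p, dt = 0.5, 2.0, 1e-3
e, V_prev = 1.0, None
monotone = True
for _ in range(20000):               # 20 s, forward Euler
    V = 0.5 * e * e                  # Lyapunov function V = e^2 / 2
    if V_prev is not None and V > V_prev + 1e-12:
        monotone = False             # V must never increase
    V_prev = V
    e += dt * (-K_v * math.tanh(K_p * e))

print(abs(e) < 1e-3, monotone)       # error converged, V monotonically decreasing
```

Near the origin the dynamics behaves like ė ≈ −K_v K_p e, so the convergence is locally exponential, which matches the asymptotic convergence claimed in the lemma.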

Simulation results
In this section, to validate the proposed time-varying leader-follower formation control scheme, simulation results are shown.
The simulation platform for the formation control of mobile robots is developed using the Gazebo simulator on the Robot Operating System (ROS), implemented in C++. 31 To study the performance of the control strategy presented in the fourth section, as well as the vision-based pose estimation algorithm introduced in the second section, a simulation interface is designed as shown in Figure 7, with two subwindows illustrating the camera view of each of the two followers and a main window showing the motion of the three mobile robots in the environment. Thus, two pairs of leader-follower mobile robots are used: the red mobile robot (robot 1) is the local follower of the blue robot (robot 2), and the blue mobile robot (robot 2) is the follower of the global leader robot, in gray (robot 3), which has knowledge of the desired trajectory.
Simulations were performed with three TurtleBot3 mobile robots, 32 using a Raspberry Pi Camera Board. Two simulation experiments were developed. For both simulation experiments, the desired separation between robots 1 and 2 was chosen as a time-varying function.

Simulation 1
Similar to related works which address the vision-based formation control problem, the first experiment was carried out considering a circular path. 21,29,33 In open loop, the global leader robot's velocities (v_3, ω_3) were set to constant values. Figure 8 illustrates the trajectory of each mobile robot. As can be seen, the off-tracking phenomenon of the pair composed of robot 2 and robot 3 is eliminated after a transient, since both a time-invariant formation and a circular leader robot's path are required. In Remark 5, it is demonstrated that, for this particular case, v_2 → v_3 and ω_2 → ω_3, which implies that the measure of the off-tracking is given by d_2T = |v_3/ω_3| − |v_2/ω_2| = 0, according to (12) and (13). In a similar manner, for the pair composed of robot 1 and robot 2, the measure of the off-tracking can be computed using (11)-(13) and (45)-(46). From Figure 8, the off-tracking is significantly reduced, since a time-varying formation is required.
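The off-tracking measure used above, d_2T = |v_3/ω_3| − |v_2/ω_2|, is simple enough to verify directly; the velocity values below are hypothetical:

```python
def off_tracking(v_lead, w_lead, v_fol, w_fol):
    # d_T = |R_lead| - |R_fol|, with R = v/w the curvature radius on a circle.
    return abs(v_lead / w_lead) - abs(v_fol / w_fol)

# Steady state of the time-invariant pair: v_fol -> v_lead, w_fol -> w_lead
print(off_tracking(0.2, 0.25, 0.2, 0.25))   # 0.0 (off-tracking eliminated)
# During a transient, the follower runs on a different curvature radius:
print(off_tracking(0.2, 0.25, 0.18, 0.25))  # ~0.08 m of off-tracking
```
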
To show the efficiency of the proposed scheme, Figure 9 depicts the trajectories of all mobile robots without reduction of the off-tracking. To avoid the reduction of the off-tracking, it is necessary to consider α_id = 0 in (23).
In Figure 10, the formation errors and the relative orientation of each pair of robots are shown. As demonstrated in Lemma 1, the formation errors asymptotically converge to zero, regardless of whether a time-varying formation is required. Also, according to the visibility constraints, the relative orientations are kept bounded by the maximum relative orientation, |γ_i| ≤ γ_M, with i = 1, 2. Furthermore, for the time-invariant leader-follower formation, the relative orientation (γ_2) converges to its equilibrium point, given by (49), that is, γ_2 ≈ 0.3518 rad. For the pair composed of robots 1 and 2, the relative orientation is bounded in accordance with (17). Figure 11 shows the translational velocity of each mobile robot for the first simulation. Since the desired separation between robots 2 and 3 is constant, v_2 converges to the global leader robot's velocity (v_3). Since the desired separation between robots 1 and 2 is time-varying, the translational velocity of follower robot 1 (v_1) oscillates around its local leader robot's velocity (v_2), according to (45).
In Figure 12, the rotational velocity of each mobile robot, for Simulation 1, is shown. In a similar manner to the translational velocities, the rotational velocity of follower robot 2 (ω_2) converges to the global leader robot's velocity (ω_3). Since a time-varying separation between robots 1 and 2 is required, the rotational velocity ω_1 oscillates around its local leader robot's velocity (ω_2), according to (46).

Simulation 2
Unlike related works which address the vision-based formation control problem, the efficiency of the proposed scheme was tested on a difficult scenario, that is, one not limited to straight lines, 12 circular/parabolic segments, 10-12 and circular paths with a constant curvature radius. 21,23,29,33 Similar to González-Sierra and Aranda-Bricaire, 4 where no camera is used and only the reduction of the off-tracking is addressed, a lemniscate trajectory is considered, to show that the time-varying formation control problem can be solved while significantly reducing the off-tracking effect. Additionally, to test many time-varying curvature radii, until reaching the minimum radius allowed by the visibility constraints, a spiral path for the global leader robot is chosen, designed according to the maximum relative angle (γ_M). Also, to test a sudden change between two different curvature radii, a straight-line segment is added to connect both paths. The initial posture of each mobile robot is the same as the one used in Simulation 1. Figure 13 illustrates the trajectory of each mobile robot. As can be seen, although the off-tracking effects are significantly reduced throughout the test, the largest deviation between the paths occurs when the global leader robot suddenly changes its path from the straight line to the circular path (at 80 s). As was mentioned before, this situation cannot be eliminated by the proposed scheme, since the proposed control law only requires the instantaneous velocities of the leader robot; the path traveled by the leader robot in the configuration space is unknown to the local follower robot. Furthermore, the proposed strategy to reduce the off-tracking is based on the steady state of a time-invariant leader-follower formation. In Figure 14, both the formation errors and the relative orientation of the pairs of robots are shown. The formation errors converge to a small band within 20 s.
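A spiral reference of the kind described above can be generated, for example, by holding the global leader's translational velocity constant and shrinking the curvature radius down to a floor R_min dictated by the minimum curvature radius condition (21). All numeric parameters below are hypothetical, not the article's:

```python
def spiral_omega(t, v=0.2, R0=2.0, R_min=0.8, rate=0.02):
    # Rotational velocity w = v/R(t) for a spiral whose curvature radius
    # shrinks linearly from R0 and saturates at R_min (visibility limit).
    R = max(R_min, R0 - rate * t)
    return v / R

print(spiral_omega(0.0))    # 0.1  (R = 2.0 at the start)
print(spiral_omega(120.0))  # 0.25 (R saturated at R_min = 0.8)
```

Saturating R at R_min guarantees that the generated reference never violates the minimum curvature radius condition, so the visibility constraints can be satisfied along the whole path.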
As mentioned in section "Extended state observer," the estimation errors (27) are bounded by the time derivatives of the disturbances (35), which in this case are the global leader robot's velocities (v_3, ω_3). Therefore, the formation errors depend on the rate of change of the curvature radius of the desired trajectory, since the global leader robot's velocities are not constant. However, the relative orientations are kept bounded by the maximum relative orientation, |γ_i| ≤ γ_M. Notice that at 80 s, when the global leader robot suddenly changes its curvature radius, the formation errors e_1 and e_2 increase. This sudden change in the curvature radius (R_3) can be seen as an external perturbation that is compensated by the control law after a certain time. Figure 15 shows the translational velocity of each mobile robot for Simulation 2. In Figure 16, the rotational velocity of each mobile robot is shown. Since the global leader robot's velocities are not constant, from (35), the velocities estimation errors (Δv_3, Δω_3) are nonzero. Therefore, the velocities of follower robot 2 (v_2, ω_2) do not converge to the global leader robot's velocities (v_3, ω_3), even for the time-invariant formation. The same holds for the time-varying formation composed of robot 1 and robot 2: the velocities estimation errors depend on the rate of change of the global leader robot's velocities. Note that, after 80 s, the rotational velocity of the global leader robot (ω_3) was set to a constant; thus, the rotational velocity of follower robot 2 (ω_2) converges to ω_3.
Due to the reduced camera FOV, and taking into account the visibility constraints, to guarantee that the time-varying formation problem can be solved, it is necessary to start either with a small relative orientation between the mobile robots and small formation errors, or on a straight line.
From the simulation results, it can be concluded that the performance obtained by this time-varying leader-follower formation scheme with reduction of the off-tracking, under communication and visibility constraints, is similar to the one that can be obtained by using both the absolute positions and the absolute orientations of the mobile robots with explicit communication. However, due to the reduced camera FOV, to guarantee that the time-varying formation control problem can be achieved, the desired trajectory of the global leader robot must satisfy the minimum curvature radius condition (21), which implies that the desired trajectory must be large enough compared to the desired separation between the mobile robots.

Conclusion
In this article, a time-varying leader-follower formation control scheme for unicycle mobile robots that reduces the off-tracking effect, using only a perspective camera, is presented. The off-tracking effect is eliminated in the steady state for a time-invariant leader-follower formation, using only feedback from the monocular camera, and it is significantly reduced for a time-varying formation. Based on Lyapunov theory, it is demonstrated that the formation errors converge to zero even for a time-varying formation, considering a circular leader robot's path. Also, the relative orientation dynamics is stable. To guarantee that the formation control problem can be achieved, both the visibility constraints and the minimum curvature radius condition are defined. Simulation results, tested on a difficult path with a time-varying curvature radius, show the efficiency of the proposed time-varying formation control with reduction of the off-tracking effect under communication and visibility constraints.

Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was partially supported by CONACyT, Mexico, through scholarship holder No. 553972, and also by the Project CB-2015-01, 254329.