Design of a new Robot Operating System-MATLAB-based autonomous robot system and trajectory tracking experiment

The trajectory tracking performance of an actual robot is often used as an important benchmark for evaluating a nonlinear control algorithm. The aim of this article is to design a new experimental robot system based on the Robot Operating System (ROS)-MATLAB framework. To validate the system, a nonlinear tracking controller with the following features is designed. Firstly, the surface robot model is formulated with the rotation group SO(3), which preserves the potential for cooperative control with space vehicles. Secondly, the nonlinear controller is robust under unknown disturbances. Thirdly, the control algorithm targets a practical robot, the Turtlebot2. The experimental results demonstrate the effectiveness of the proposed controller for trajectory tracking under unknown disturbances. Potential applications of the controller include autonomous robots for indoor inspection, rescue under extreme conditions, and cooperative autonomous driving between unmanned aerial vehicles and unmanned surface vehicles.


Introduction
Researchers have worked on the motion control of mobile robots for many years, and finding a suitable experimental system for validating a control algorithm is very important. However, many institutions cannot afford the necessary equipment because of price, capability, or space limitations, which prevents many researchers from taking part in related research work. It is therefore necessary to find a widely accessible robot experiment system.
Generally, an autonomous robot experiment system includes two main modules: a location estimator and a motion planning controller. Classified by performance, there are many kinds of location sensors, including QR codes, infrared localizers, laser radar, and vision-based cameras. In autonomous driving projects, vehicle location and navigation are usually based on laser radar. 1,2 This sensor, also called multi-line lidar, is often used in projects with high accuracy requirements. On the other hand, unmanned aerial vehicle (UAV) location equipment is often a high-speed camera system, such as VICON. 3,4 However, laser radar and VICON are not only expensive but also need dedicated computer support, and their high manufacturing cost leaves little commercial margin. Therefore, in industrial applications, mobile robots are usually located by encoders, QR codes, or infrared localizers. For example, in automated warehouses, unmanned transport robots are located by QR code labels fixed on the ground. 5,6 Since the microelectronics hardware revolution, computer image-processing ability has been upgraded by graphics processing unit (GPU) modules. Consequently, a mobile robot can generate an environment map 7 and estimate its position in the global frame based on a real-time image database. 8 This technique, called visual odometry (VO), locates a robot from real-time image signals. 9,10 In addition, generating environment maps from VO and image data is called vision-based simultaneous localization and mapping (vSLAM). The vSLAM system has been widely used in UAV motion planning projects. 11,12 Embedded GPU computers and vSLAM techniques have been widely used in mobile robot tracking and mapping missions, for example, an autonomous robot with real-time SLAM, 13 a stereo vSLAM method on an outdoor mobile robot, 14 quadruped robot mapping and locomotion, 15 an outdoor mobile platform with multi-line lidar, 16 and indoor multi-robot navigation. 17
A world-leading robot laboratory, GRASP at the University of Pennsylvania, uses a Kalman filter to optimize the quadrotor locating algorithm, named visual-inertial odometry. 18 The lab has finished short-range and long-term planning with obstacle avoidance algorithm design and implementation. 19 Recently, the GRASP lab has focused on applying mathematical graph theory to UAV motion planning systems. 20 Beyond the robot locating ability, the robot tracking control algorithm is even more significant. There are many classical controllers, for example, proportional-integral-derivative (PID), the linear quadratic regulator (LQR), and Lyapunov theory-based backstepping control (BKSP). At present, researchers prefer multi-robot autopilot algorithm optimization and implementation, for instance, mobile robots, 21,22 UAV motion planning, 23 unmanned surface vehicle controllers, 24 robot operating system (ROS)-based network optimization for multi-quadrotor control, 25 and search-based UAV motion planning by linear quadratic theory. 26 A new research field is search-based motion planning for UAV flight in SE(3). 27,28 The Lie group SE(3) is the mathematical foundation of the UAV motion model and controller in space. Cooperative navigation and autonomous driving between a UAV and an unmanned surface vehicle (USV) is still a tough research topic.
This article aims to design an adaptive nonlinear controller and an experimental mobile robot system with the following features. Firstly, the robot dynamic and kinematic model is structured by the rotation group SO(3) and the Lie group SE(3), the same as the dynamic model of a USV 29,30 or a hovercraft. 31 This mathematical model preserves the potential for cooperative control between UAV space motion and USV surface movement. Secondly, the experiment system includes a practical robot, the Turtlebot2, and the ROS-MATLAB framework. ROS provides libraries and tools to help software developers create robot applications, including hardware abstraction, device drivers, libraries, visualizers, message passing, and package management. Thirdly, to ensure good robustness in complex environments, the mobile robot controller is based on the backstepping method 32 and the Lyapunov direct method. 33 The main structure of this article is summarized as follows. The first section states the scope of the whole research. The second section introduces the major symbols used in this article, including the saturation function, skew-symmetric matrix, rotation matrix, and unit vectors. The mobile robot model and problem are introduced in the third section. An adaptive tracking controller is derived in the fourth section. To validate the controller, a mobile robot tracking experiment system is designed on a novel ROS-MATLAB framework in the fifth section. In the future, the experiment system could be used in related academic or industrial projects. When ROS2-Industrial is put into operation, the ROS-MATLAB experiment system could be applied to many practical cases, for instance, formation control between UAVs and USVs, autonomous indoor patrol and inspection in a warehouse, rescue robots in extreme conditions, or autonomous unmanned surface vehicles under a space-earth network.

Special variables definition
To distinguish different kinds of variables, scalars, vectors, and matrices are defined as follows. A scalar is a normal character, for instance, mass $m$, angular velocity $r$, and the gravity constant $g$. A vector is written in bold, such as linear velocity $v$, position $p$, and force $f$. A matrix is a capital letter, for example, rotation matrix $R$, damping matrix $D$, and skew-symmetric matrix $S$.
A prime $f'(x)$ represents the partial derivative of a time-varying function $f(t)$, with $f'(x) = \frac{\partial f}{\partial x}(x)$. An upper dot represents the total time derivative, $\dot f(x) = f'(x(t))\,\dot x(t)$.
A saturation function is denoted $s(x)$, for example, a saturated position error $s(e_p)$. If a saturation function $s(\cdot)$ is differentiable, it satisfies the usual boundedness prerequisites. In general, many functions can be used as saturators, for example, $\arctan$ and $s_1(s) = s/\sqrt{1+s^2}$. For an unknown variable $b$, $\hat b$ represents the estimate and $\tilde b$ represents the estimation error; the relationship between the error $\tilde b$, the true constant $b$, and the estimate $\hat b$ is $\tilde b = b - \hat b$, with time derivative $\dot{\tilde b} = -\dot{\hat b}$. There are also typical characters that represent a space vector's norm and direction: $\|f\|$ represents the force vector's norm, a scalar $f_u$; $r_1$ is the direction vector of the force; and $r_d$ is the desired direction vector. The relationship between force norm and direction is $f = f_u r_1$, where $f_u = \|f\|$ and $r_1 = f/\|f\|$.
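As a concrete illustration (our own Python sketch, not the paper's code; the rescaling of $\arctan$ to keep its output in $(-1, 1)$ is our choice), the two saturators named above can be written as:

```python
import math

def sat_sqrt(s):
    """Smooth saturator s1(s) = s / sqrt(1 + s^2); output stays in (-1, 1)."""
    return s / math.sqrt(1.0 + s * s)

def sat_atan(s):
    """arctan-based saturator, rescaled so the output also stays in (-1, 1)."""
    return (2.0 / math.pi) * math.atan(s)
```

Both functions are odd, monotone, and differentiable, which is what the prerequisite above requires of a saturator.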

Rotation matrix
A rotation matrix $R(\psi)$ is important for expressing a robot's orientation in the inertial frame $\{I\}$ from the fixed-body frame states $[x, y, z]$. A matrix $R \in SO(3)$ represents a rotation in 3D in a right-handed Cartesian coordinate system, parameterized by a $3\times 1$ attitude vector. During space vector transformation, the rotation matrices can be written as $R_x(\gamma)$, $R_y(\beta)$, $R_z(\alpha)$, describing rotations about the $x$, $y$, and $z$ axes, respectively. Because the rotation matrix $R \in SO(3)$ is special orthogonal, it has the following features: $R^T = R^{-1}$ and $RR^T = I$, where $I$ is the identity matrix.
Different from the pure rotation in SO(3), an SE(3) object represents a 3D homogeneous transformation matrix consisting of a translation and a rotation. A pose in SE(3) is usually parameterized by a $6\times 1$ vector, which includes translation and rotation components.
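To make the SO(3)/SE(3) distinction concrete, here is a minimal NumPy sketch (illustrative only; the function names are our own):

```python
import numpy as np

def Rz(a):
    """Rotation about the z axis by angle a (radians), an element of SO(3)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def se3(R, p):
    """4x4 homogeneous transform in SE(3) from rotation R and translation p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

R = Rz(0.7)
T = se3(R, np.array([1.0, 2.0, 0.0]))
```

The special orthogonality features quoted above ($R^T = R^{-1}$, $RR^T = I$, $\det R = 1$) hold for any `Rz(a)`.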

Time derivative of rotation matrix
Because a mobile robot moves on a surface, the time derivative of its rotation matrix $R(\psi)$ is $\dot R = R\,S(r)$, where $r = \dot\psi$ is the angular velocity, $R$ is the rotation matrix, and $S$ is the skew-symmetric matrix. In addition, the time derivative of the transpose has the corresponding feature $\dot R^T = -S(r)\,R^T$.

Skew-symmetric matrix
During the time derivative calculation of the rotation matrix $R$, a special variable appears called the skew-symmetric matrix, which is built from the angular velocity vector. During space vector calculations, the skew-symmetric matrix has several special properties.
Firstly, for the force direction $r_1$ and the desired direction $r_{1d}$, exchanging the vector inside the matrix with the one outside flips the sign: $r_1^T S(r_{1d}) = -r_{1d}^T S(r_1)$. This identity is needed when solving the controller function. Secondly, if the unit vector is $e_1 = [1, 0, 0]^T$, the direction vector $r_1$ can be written as $r_1 = R\,e_1$, where $R$ is the rotation matrix.
Thirdly, the cubed skew-symmetric matrix of a unit vector equals the negative of the original, $S^3(x) = -S(x)$. If the mobile robot moves on a surface, the skew-symmetric matrix $S$ can be simplified to the planar form $S(r) = \begin{bmatrix} 0 & -r \\ r & 0 \end{bmatrix}$, that is, $S = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$ for unit rate.
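These skew-symmetric identities are easy to verify numerically; the following NumPy sketch (illustrative only) checks the exchange property and the cubed-matrix property for unit vectors:

```python
import numpy as np

def S(v):
    """Skew-symmetric (cross-product) matrix of a 3-vector v."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

r1 = np.array([1.0, 2.0, 2.0]); r1 /= np.linalg.norm(r1)
r1d = np.array([0.0, 3.0, 4.0]); r1d /= np.linalg.norm(r1d)

# Property 1: r1^T S(r1d) = -r1d^T S(r1)
lhs = r1 @ S(r1d)
rhs = -(r1d @ S(r1))

# Property 3 (unit vector): S(x)^3 = -S(x)
cube = S(r1) @ S(r1) @ S(r1)
```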

Turtlebot2 robot
In this project, the trajectory tracking controller, with identification name BKSP-V-TB2, is designed for a practical robot, the Turtlebot2 (TB2). The robot has been widely used by many research institutions because its platform, KOBUKI, provides five kinds of programmable data ports for different kinds of additional equipment. For example, the robot can upgrade its computer vision ability by adding an embedded GPU computer (Nvidia Jetson TX2). To achieve the simultaneous localization and mapping function, the robot is equipped with a laser radar (Rplidar-A2), a depth camera (Kinect V1), and a powerful WiFi module for real-time image data communication. As shown in Figure 1, the mobile robot's fixed-body frame in space includes three components along the x, y, and z axes.

Mathematics model in space
In general, a mobile robot model only involves surface motion, as in the classic textbooks by Craig 34 and Fossen. 35 However, to prepare for the space-earth cooperative control task, the mobile robot here is defined in space vector format. The robot position and orientation are defined in the inertial frame $\{I\}$. The robot's linear and angular velocities are expressed in the fixed-body frame $\{B\}$. As shown in Figure 2, the robot can be simplified as a circle driven by two active wheels, fixed on brushless motors with two wheel encoders.
The robot orientation is marked at the circle center. The robot width is $w$. The driving force in the surge direction is $f_u$. As listed in Table 1, the mobile robot's fixed-body frame in space includes three components along the x, y, and z axes.
According to the general definitions, the Turtlebot2 mobile robot's kinematic model, equation (1), and dynamic model, equation (2), use the following symbols: $p = [p_x, p_y, 0]^T$ is the mobile robot's actual position vector in space; $\psi$ is the robot orientation; $R(\psi)$ is a rotation matrix with attitude angle about the z axis; $v$ and $\omega$ are the mobile robot's linear and angular velocities in the fixed-body frame $\{B\}$, respectively; $m$ is a scalar representing the mobile robot mass; $S(\omega)$ is the skew-symmetric matrix of the angular velocity $\omega$ used during the rotation transformation calculation; $D$ is the matrix of resistance parameters; $I_{zz}$ is the rotational inertia coefficient; $f$ and $\tau$ are the force and torque applied to the robot, respectively; and $b$ is an unknown parameter.
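Since the numbered equations are not reproduced in this text, the following minimal sketch (our own illustration, not the paper's code) shows the standard unicycle kinematics that the symbol list describes, with $\dot p = R(\psi)\,v$, $v = [u, 0]^T$, and $\dot\psi = \omega$:

```python
import numpy as np

def step(p, psi, u, omega, dt):
    """One Euler step of a planar unicycle: pdot = R(psi) v with v = [u, 0]."""
    p = p + dt * u * np.array([np.cos(psi), np.sin(psi)])
    psi = psi + dt * omega
    return p, psi

# Drive straight along +x for 1 s at 0.2 m/s (a Turtlebot2-like cruise speed)
p, psi = np.zeros(2), 0.0
for _ in range(100):
    p, psi = step(p, psi, u=0.2, omega=0.0, dt=0.01)
```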

Problem formulation
Robot motion model in space. The first challenge of this article is establishing a space model for the practical robot, Turtlebot2. To expand the autonomous system to airplane-vehicle cooperative location and navigation, the robot model is defined with space vectors. The robot dynamic model, equation (2), and kinematic function, equation (1), were introduced in the "Mathematics model in space" section. From the mathematical aspect, a three-dimensional formulation is convenient for rotation matrix $R$ calculation and transformation between space frames. However, the 3D model's calculation is much more complex than that of surface models.

Controller design and stability analysis
The Lyapunov direct method is a fundamental theory for studying nonlinear systems, and most practical robots are not ideal linear models. The practical robot, Turtlebot2, is a nonlinear control system, and its algorithm here is based on the Lyapunov direct method and a backstepping control law. The robot is driven by left and right wheels with encoders. Therefore, the control variables are related to the linear velocity $u_d$ and the angular velocity $u_r$, as a vector $v^* = [f(u_d), f(u_r)]^T$. Generally speaking, the robot backstepping tracking problem is to calculate a solution for the control variables $v^*$. The objective is to minimize the robot state errors, such as the position error $e_p$, velocity error $e_v$, and orientation error $e_\psi$.

Position error
In general, a trajectory tracking controller's basic objective is to drive the robot along a desired path. The first target is therefore canceling the error, equation (3), between the robot's actual and desired positions in the inertial frame $\{I\}$ (or global frame), where the actual position is $p = [p_x, p_y]^T$ and $p_d$ is the desired position. According to the mobile robot kinematic equation (1), the time derivative of the position error follows, where $b$ is an added unknown disturbance.

First Lyapunov function
Based on the position error $e_p$ and a disturbance estimation error $\tilde b_1$, the first Lyapunov function is defined, where the estimation error is $\tilde b_1 = b - \hat b_1$. According to the position error equation (4), the time derivative of the first Lyapunov function follows, where $W_1$ is the positive definite term $k_1 e_p^T e_p$.

Desired linear velocity
According to the robot features, the control variable is related to the velocity $v$. Therefore, the time derivative of the Lyapunov function can include a velocity error, and the desired speed function follows. Because the mobile robot is an underactuated system, which cannot be directly controlled by a full space vector, a deconstruction is necessary.

Space vector deconstruction
In an underactuated system, the active control vector needs deconstruction before its solution can be calculated. In general, there are two ways to separate a space vector $v^*$: by components (case 1) and by norm and direction (case 2), as shown in Figure 3.
In case 1, the projections on the robot surge direction $u$ and sway direction $v$ construct the space vector $v^* = [u_d, u_v]^T$. In case 2, the vector's components are different, being based on the vector's norm $\|v^*\|$ and direction $r_d^*$. In general, the controller space vector $v^*$ represents a velocity, force, or acceleration. When the space vector $v^*$ has a physical purpose, the advantage of the second case is significant, because the norm-and-direction form is closer to the physical phenomenon. Therefore, the desired velocity space vector $v^*$ is decomposed by its norm and a direction, and an important conclusion is that the robot's desired linear velocity $u_d^*$ can be expressed in this form.
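A minimal sketch of the case-2 deconstruction (illustrative Python, assuming a nonzero vector):

```python
import numpy as np

def decompose(v):
    """Split a vector into its norm and unit direction (case 2 above)."""
    norm = np.linalg.norm(v)
    direction = v / norm  # assumes v != 0
    return norm, direction

v_star = np.array([0.3, 0.4])
u_d, r_d = decompose(v_star)   # norm 0.5, unit direction [0.6, 0.8]
```

Multiplying the norm back onto the direction recovers the original vector, which is exactly the relationship $v^* = u_d^* r_d$ used by the controller.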

First Lyapunov function time derivative
Based on the vector deconstruction definitions, equations (6) and (7), the time derivative of the first Lyapunov function follows, where $u_1$ and $r_1$ represent the actual velocity vector $v$'s norm and direction, and $u_d$ and $r_d$ represent the desired vector $v^*$'s norm and direction. $W_1$ is the positive definite term $k_1 e_p^T e_p$. There are relationship functions between $u_1$, $u_d$, $r_1$, and $r_d$.

Direction error
Based on the actual and desired velocity vectors $v$ and $v^*$, equations (6) and (7), the robot direction error $z_q$ is defined as the cross product between the actual direction vector $r_1$ and the desired direction vector $r_d$.

Second Lyapunov functions
After the direction error $z_q$ is defined, the second Lyapunov function includes the first Lyapunov function $V_1$, the direction error $z_q$, and the second unknown parameter $\tilde b_2$. The time derivative of the second Lyapunov function follows, where $W_2$ is a positive definite function $k_1 e_p^T e_p + k_3 \sin^2 z_q$, and $F\{b_1\}$ and $F\{b_2\}$ represent the first and second unknown disturbance estimator functions, respectively.

Desired angular velocity
Based on the time derivative of the second Lyapunov function, $\dot V_2$, the equation can be transformed into the format $\dot V_2 = A\omega + B$, where $A$ is the coefficient matrix of the angular velocity $\omega$ and $B$ is a constant vector. To obtain this expression, it is necessary to extract the common angular velocity factor from $\dot V_2$ in equation (11). A fundamental result of the Lyapunov direct method is that if the time derivative of the Lyapunov function is negative definite as $t \to \infty$, the equilibrium point of the state space is asymptotically stable. The following desired angular velocity equation therefore guarantees the controller's effectiveness, with the remaining terms being independent of the angular velocity $\omega$.
In addition, after replacing the unknown disturbance $b$ with the second estimate $\hat b_2$, the time derivative of the direction vector, $\dot r_d$, is obtained. A new unknown disturbance part $\tilde b_2$ is generated after the time differentiation. Finally, the desired angular velocity $\omega^*$ can be determined from the preceding results, equations (12) to (16).

Unknown parameter estimator
In general, the unknown parameter estimator $\dot{\hat b}$ has an integral form. As in equation (11), there are two unknown disturbance estimator functions, $F\{b_1\}$ and $F\{b_2\}$. After replacing the unknown part $b$ with the estimate $\hat b$, the first and second estimator functions follow. According to the Lyapunov direct method requirement, the time derivative of the Lyapunov function must be negative semidefinite, meaning all components of the $\dot V_2$ function are negative definite or zero. Therefore, the first estimator $F\{b_1\}$ and the second estimator $F\{b_2\}$ must satisfy the corresponding conditions. After substituting equation (17) into (19), the integral-form estimator functions $\hat b_1$ and $\hat b_2$ are established.
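The role of an integral-form disturbance estimator can be illustrated with a scalar toy system (our own sketch; the gains, the error dynamics, and the update law $\dot{\hat b} = \gamma e$ are illustrative assumptions, not the paper's equations):

```python
# Scalar error system e_dot = -k*e + (b - b_hat) with the integral
# estimator b_hat_dot = gamma * e. The estimate converges to the true
# constant disturbance while the error converges to zero.
k, gamma, b_true = 2.0, 5.0, 0.4
e, b_hat, dt = 1.0, 0.0, 0.001
for _ in range(20000):            # 20 s of Euler simulation
    e_dot = -k * e + (b_true - b_hat)
    b_hat += dt * gamma * e       # integral estimator update
    e += dt * e_dot
```

The characteristic polynomial of this loop is $s^2 + k s + \gamma$, which is Hurwitz for positive gains, so both the error and the estimation error decay.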

Controller stability analysis
If the kinematic parameters in equation (1) are exactly known, a perfect control law exists, where $u_d$ and $r_d$ are the controller variables related to the linear and angular velocities. Detailed proof of the stability of the backstepping controller can be found in Krstic et al. 32
Proposition. The control law $u_d$ (20), $r_d$ (21) can achieve global trajectory tracking by guaranteeing that all robot state errors $e_p$, $z_q$ converge to zero for any initial condition.
Proof. Based on the control law $u_d$ (20), $r_d$ (21) and the disturbance integral estimation laws, equations (18) and (19), the final system's Lyapunov function is $V$, and its time derivative $\dot V = -k_1 e_p^T e_p - k_3 \sin^2 z_q$ is a negative semidefinite function. Because $\dot V \le 0$, the control law achieves trajectory tracking by guaranteeing that the errors $(e_p, z_q)$ converge to the uniformly asymptotically stable equilibrium $(0, 0)$ or $(0, \pi)$ as $t \to \infty$. On the other hand, the controller performance needs to be verified on a practical robot. Therefore, a mobile robot tracking experiment system is designed with the ROS-MATLAB framework in the next section.
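As an independent sanity check, the norm-and-direction control structure can be exercised in a simplified planar simulation (our own Python sketch: the gains, the alignment factor, and the law $\omega = k_2 z_q$ are illustrative choices, not the paper's exact equations (20) and (21)):

```python
import numpy as np

k1, k2 = 1.0, 4.0
dt, T = 0.01, 30.0
p, psi = np.array([-0.1, -0.1]), 0.0     # initial pose, off the path

def p_des(t):                            # 1 m-radius circle, 0.2 m/s cruise
    w = 0.2
    return np.array([np.cos(w * t), np.sin(w * t)])

def pd_dot(t):
    w = 0.2
    return w * np.array([-np.sin(w * t), np.cos(w * t)])

e0 = np.linalg.norm(p - p_des(0.0))
for i in range(int(T / dt)):
    t = i * dt
    e_p = p - p_des(t)
    v_star = pd_dot(t) - k1 * e_p        # desired velocity vector
    u_d = np.linalg.norm(v_star)         # its norm (assumed nonzero)
    r_d = v_star / u_d                   # its direction
    r_1 = np.array([np.cos(psi), np.sin(psi)])
    z_q = r_1[0] * r_d[1] - r_1[1] * r_d[0]   # planar cross product
    u = u_d * max(r_1 @ r_d, 0.0)        # only drive when roughly aligned
    omega = k2 * z_q
    p = p + dt * u * r_1                 # unicycle kinematics
    psi = psi + dt * omega
e_final = np.linalg.norm(p - p_des(T))
```

With these gains, the position error shrinks from its initial value of about 1.1 m to a small residual, mirroring the convergence claimed in the proposition.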

Experiment system design
In general, a robot tracking control system includes three parts: locating, controlling, and additional function modules. However, most robot locating equipment is very expensive, such as multi-line radar, stereo vision cameras, or high-speed industrial vision sensors. To validate the controller performance without expensive equipment, a robot tracking test system was designed based on the ROS-MATLAB framework. The implementation robot, Turtlebot2, comprises a KINECT camera, a JETSON TX2 embedded GPU, a KOBUKI platform, and an Rplidar-A2. The original target is tracking control of the mobile robot TB2, which has been widely used in academic research, especially in ROS-based simulation systems. As shown in Figure 4, the TB2 robot components include a KOBUKI mobile base with two active wheels and two speed encoders, a Kinect-V1 depth camera, an RPlidar-A2 laser radar, and an Nvidia TX2 GPU-embedded computer.
The experiment system is introduced below from two aspects: hardware and networks.

Embedded computer
According to Nvidia's official datasheet, the Jetson TX2 is a full-featured development platform for vision-based computing. Its powerful graphics unit, a 256-core NVIDIA Pascal GPU, makes it ideal equipment for applications requiring high computational performance in a low-power envelope. The module comes pre-flashed with a Linux environment, includes many common APIs, and is supported by NVIDIA's official development tools and apps. In this project, the computer is installed with ROS and the related TB2 robot packages.

Depth camera
The Kinect V1 camera is a vision-based sensor originally developed for human body tracking. With its depth camera, the system can calculate a target's states in 3D space. In this project, the camera is used for the mobile robot's self-position and orientation estimation task.

Laser radar
The Rplidar-A2 is an accurate radar whose longest reflection distance is up to 20 m. In this project, the radar provides an environment map by scanning surrounding obstacles. Based on the inertial-frame map and the robot's self-odometry data, the system can calculate its position and orientation in the global frame.

Simulation software
As shown in Figure 5, the ROS-MATLAB framework is based on a calculation computer running MATLAB software, a WiFi-based Ethernet, and several location sensors, such as the vision camera, laser radar, and motor encoders.
When running the trajectory tracking experiment, the mobile robot sends odometry data to the embedded computer, the Jetson TX2. In addition, the Rplidar generates a local map by scanning the surrounding environment. The vision data captured by the Kinect camera are also sent to the TX2 computer. The robot publishes all data in the ROS-MATLAB system and receives commands over Ethernet. The tracking controller runs in MATLAB software on the host personal computer (PC) and sends its commands to the practical robot through the ROS system, as shown in Figure 6.

Parameters setting
The robot tracking controller is validated on the ROS-MATLAB experiment system. In this project, the simulation assumed that the robot runs in an ideal environment, ignoring many additional disturbances such as ground friction, damping force, motor voltage delay, and the mobile robot's speed limitation. However, practical pre-testing experiments showed that it is impossible to cancel the mismatch between the practical and ideal robot models. Therefore, to minimize the errors, an initialization process is necessary before the practical experiments. One of the most important tasks is setting the control algorithm parameters. The experiment requirements, all controller gains, and robot parameters are defined in Table 2. Before the practical experiment, communication must be established between the host PC and the mobile robot, Turtlebot2. Generally, the preparation work includes setting up the network between the ROS host PC and the master computer, programming the Turtlebot bring-up, testing the connection between the host PC and the robot microcomputer, and compiling the code that publishes the mobile robot odometry and radar scanning data. The preparation work is finished once the mobile robot can be controlled manually from MATLAB on the PC side.

Trajectory tracking performance
During the practical experiment, the mobile robot's desired path is a circle with a radius of about 1 m, whose center is located at the initial position $p_{d0} = [0.2, 0.4]^T$. The desired velocity is $v_d = [u_d, 0]^T$, where the cruise speed is $u_d = 0.2$ m/s. The simulated robot's initial position is $p_{m0} = [-0.1, -0.1]^T$ m. The practical robot's position differs in each experiment; in this article, the actual initial position is $p_{a0} = [-0.092915, -0.33452]^T$. Figure 7 shows the desired, simulated, and actual mobile robots' tracking paths. The experiment assumes a disturbance in the inertial frame, $b = [0.002, 0.001]^T$. The simulated and practical robots both follow the desired path smoothly despite the initial position errors.
The practical robot's tracking path is much more complex than the simulation model's. In the beginning, the mobile robot TB2 turns left suddenly, differing from the simulated mobile robot. The reason is that the simulated robot model faces the target direction at the start, whereas the practical robot's initial orientation cannot be predefined, which is one of the ROS-MATLAB limitations. Therefore, the mobile robot TB2 has to turn suddenly at the beginning until the controller has calculated the desired target direction and orientation.
Position errors. As shown in Figure 9, the position errors between the reference and the model, $e_{mx}, e_{my}$, and between the desired path and the actual robot, $e_{ax}, e_{ay}$, are presented in two subfigures. The errors become stable and converge to zero within 5 to 8 s.

Position signals and errors
As time passes, all of the errors converge to zero without sudden shocks, meaning the controller drives the robot toward the destination normally. In addition, the system works effectively under the unknown disturbance $b$.
Simulation and experiment comparison. The practical robot's working environment is much more complex than the simulation. The tracking model is built under the assumption that the robot works in ideal conditions. As seen in Figure 7, the practical robot's initial position differs from the simulated one. Even though there are many unknown parameters and disturbances, the BKSP-TB2 robot controller still works normally. From the mathematical aspect, the control algorithm not only decreases the error between the assumed robot and the desired path but also works on a practical robot. The controller's robustness under nonlinear models and unknown disturbances is much better than that of classic linear controllers.

Lyapunov function performance
As the most important variables, the Lyapunov function and its time derivative are presented in Figure 11. The Lyapunov function represents the "energy" of all errors, which always converges to zero if the system is effective. This means the Lyapunov function should be decreasing (the first and second subfigures) and its time derivative must be negative (the third subfigure). When the Lyapunov function is positive and its time derivative is negative semidefinite, the control law achieves trajectory tracking by guaranteeing that all robot state errors $(e_p, z_q)$ converge to the uniformly asymptotically stable equilibrium.

Controller features comparison
To validate the robustness of the BKSP-TB2 control law, it is necessary to contrast the backstepping controller with a widely used control algorithm, PID. Firstly, to quantify the performance difference between the backstepping and PID controllers in this case, a simulation system was built in the Matlab-Simulink toolbox. Secondly, the mobile robot dynamic model needs linearization before the PID controller simulation. Given the same linearized mobile robot model, both controllers work on the same circle-like trajectory. Finally, both controllers' performances are presented through their state errors, as shown in Figure 12.
The orientation error signals under both controllers show that backstepping makes the error approach equilibrium faster and with less vibration. To quantify the difference between backstepping and PID, the controllers' state errors are calculated as ISE (integral of square error), ITSE (integral of timed square error), and IAE (integral of absolute error), as listed in Table 3. The simulation signal results are shown in Figure 13.
As Table 3 demonstrates, the backstepping controller's errors in ISE, ITSE, and IAE are much smaller than PID's, which shows that the backstepping controller has better performance and robustness.
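The three metrics are straightforward to compute from a sampled error signal; the following sketch (our own, not the paper's evaluation code) shows the discrete-time definitions and confirms that a faster-decaying error scores lower on all three:

```python
import numpy as np

def error_metrics(e, dt):
    """Discrete-time ISE, ITSE, and IAE for an error signal e sampled at dt."""
    t = np.arange(len(e)) * dt
    ise = np.sum(e**2) * dt           # integral of square error
    itse = np.sum(t * e**2) * dt      # integral of time-weighted square error
    iae = np.sum(np.abs(e)) * dt      # integral of absolute error
    return ise, itse, iae

dt = 0.01
t = np.arange(0, 10, dt)
fast = np.exp(-2.0 * t)               # quickly converging error signal
slow = np.exp(-0.5 * t)               # slowly converging error signal
ise_f, itse_f, iae_f = error_metrics(fast, dt)
ise_s, itse_s, iae_s = error_metrics(slow, dt)
```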

In conclusion, the backstepping controller is much more suitable for the nonlinear system and shows good adaptive performance, as summarized in Table 4. The PID controller is simple and widely used; however, a single PID loop gives poor performance when the loop gains must be reduced, has difficulties in the presence of nonlinearities, and lags in response to large disturbances. The LQR algorithm is just an automated way of finding an appropriate state-feedback controller, and it is often difficult to tune a regulating controller by using a mathematical algorithm that minimizes a cost function with weighting factors.

Conclusion
In this article, a novel experimental mobile robot system based on the ROS-MATLAB framework has been introduced. The aim of the system is to validate a surface trajectory tracking controller designed with the rotation group SO(3) to enable potential cooperative navigation with UAVs. In addition, an integral estimator has been designed for handling unknown disturbances. The results show that the proposed control system performs well, even under a large initial position error and unknown disturbances. The experimental system and controller have the potential to be used by other institutions in similar robot projects, such as autonomous industrial vehicles, outdoor automatic logistics, and human rescue in extreme conditions. Furthermore, the proposed technique can be extended to unmanned robots for space-ground cooperative navigation, tracking control, and landing assignments between UAVs and USVs. In conclusion, this work can be applied to the development of more robust and versatile mobile robot systems.
Position signals. As shown in Figure 8, the reference $p_d$, model $p_m$, and actual $p_a$ mobile robot position signals are presented. Because the desired path is a circle, the position signals resemble the trigonometric functions $\sin(t)$ and $\cos(t)$. The signal amplitude range $[0, 0.8]$ corresponds to the tracking circle's diameter of 0.8 m.

Figure 11. The Lyapunov function and its time derivative.

Figure 13. Controller effects in ISE, ITSE, and IAE. ISE: integral of square error; ITSE: integral of timed square error; IAE: integral of absolute error.

Table 1. Kinematics and dynamics model symbols.

Tracking controller. The tracking controller design is based on the Lyapunov direct method. In addition, according to the practical features of the mobile robot TB2, the motion control variables are the linear velocity $u_d$ and the angular velocity $\omega$.
Experiment system. The fourth problem is designing a suitable experiment system to test the mobile robot control algorithm. Traditional robot motion and location experiment equipment is based on high-speed cameras (indoor) or multiple location sensors (outdoor), such as GPS, multi-line radar, and vision-based sensors. All of the above equipment shares the common features of an expensive price and difficult maintenance. In this article, the experiment system is based on a low-cost robot, the Turtlebot2, and the ROS-MATLAB framework. The TB2 robot is lightweight, modular, and extensible equipment, and the ROS system is good at distributed computation and communication, which is why ROS has become one of the most popular programming platforms in many institutions and industrial factories.

Signal transformation. The last problem comes from the ROS-MATLAB framework in the practical experiment. The control algorithm in MATLAB m-files needs to be transformed into signals recognizable by the practical mobile robot and the embedded microcomputer. There are many hidden problems during the ROS-MATLAB setup, such as communication signal transformation, time-delay cancellation, and singularity avoidance for time-varying variables.