Improved singular robust inverse solutions of redundant serial manipulators

To address the Jacobian matrix approximation error that commonly arises in the iterative solving process of the classic singular robust inverse method, a correction coefficient α is introduced, yielding the improved singular robust inverse method. On this basis, the constant improved singular robust inverse method and the intelligent improved singular robust inverse method are proposed. In addition, a new scheme combining particle swarm optimization and artificial neural network training is applied to obtain real-time parameters. The stability of the proposed methods is verified according to the Lyapunov stability criteria, and their effectiveness is verified in application examples of spatial linear and curved trajectories with a seven-axis manipulator. The simulation results show that the improved singular robust inverse method has better optimization performance and stability: within the allowable range, its terminal error is the smallest, with no lasting oscillation or large amplitude; its least singular value is the largest; and its joint angular velocity is the smallest, exactly as expected. The derivative of the Lyapunov function is negative definite. Comparing the two extended methods, the constant improved singular robust inverse method performs better in terms of joint angular velocity and least singular value optimization, whereas the intelligent improved singular robust inverse method achieves a smaller terminal error; there is little difference between their overall optimization effects. However, the adaptability of its real-time parameters makes the intelligent improved singular robust inverse method the first choice for kinematic control of redundant serial manipulators.


Introduction
Kinematics is the basis of manipulator control technology, and its focus is the inverse kinematics solution. 1-5 However, due to the high nonlinearity of redundant manipulators, it is quite difficult to obtain a suitable solution. The classic approach is to obtain pseudoinverse solutions of various target functions, 6-8 and many related research papers have been published. 1,2,6,9-11 The singular value decomposition generalized inverse method (SGIM) is a classical and common scheme for obtaining inverse kinematics solutions of redundant manipulators; it provides a well-defined generalized inverse of the Jacobian matrix by singular value decomposition. 5,9,12 Although the least-norm solution under the least-squares condition can easily be obtained by the SGIM, 9,12 the practicality of the solution suffers because the norm of the joint angular velocity tends to infinity as the least singular value approaches zero. Therefore, another method, the singular robust inverse method (SRIM), has been proposed. 6-8,10,11,13-17 The SRIM introduces a damping coefficient λ into the SGIM to resolve the singularity problem caused by a small least singular value, and it reconstructs a new optimization target function. However, the optimization effect depends largely on the value of the damping coefficient λ. Scholars have proposed various definitions of λ. Wampler 10 suggested that the damping coefficient λ be a constant. Kelmar and Khosla, 13 Chan and Lawrence, 14 Nakamura and Hanafusa, 7 and others 6,15 have proposed piecewise-function definitions of λ. In addition, intelligent algorithms 16,18 have been used in recent years to find better damping coefficients. All these schemes treat λ as a function of the singular values of the Jacobian matrix, which is obviously nonlinear.
Artificial neural networks (ANNs) and particle swarm optimization (PSO) are widely used in solving the inverse kinematics of manipulators. 18-25 ANNs adopt a parallel computing method, which is efficient, fast, and suitable for dealing with the nonlinear relationships in large amounts of data. 20 However, the disadvantage is also obvious: an original data set must be prepared for training. The data are usually acquired by recording motion history, 19 collecting data with trial-and-error methods, 18 or deriving formulas under restricted conditions. 21 Data obtained by these methods cannot guarantee sufficient optimization.
On the other hand, there are also problems in solving the inverse kinematics of a manipulator with PSO. Iterating over a wide search range with a large number of random seed particles gives a high probability of obtaining globally optimal data, but it consumes considerable computing resources and is therefore unsuitable for online computation. To address this, Dereli and Köker, 23 Shen et al., 24 and other scholars 25 limit an articular-angle independent variable or narrow the overall initial search range to improve the speed of calculation. Clearly, these restrictions prevent PSO from obtaining the optimal solutions.
Combining the advantages of the two intelligent algorithms above, a feasible scheme is to use PSO off-line to obtain an optimal data set from a wide search range with a large number of random seed particles, and then to train the data with an ANN. The obtained training model not only realizes the optimization as far as possible but also meets the real-time prediction requirement for the damping factors.
Compared with the SGIM, the SRIM achieves better results in solving the inverse kinematics problem of redundant manipulators; however, it also has shortcomings, as follows: (i) In the iterative solution of inverse kinematics, to obtain the joint angle variation Δq from t_k to t_{k+1}, the Jacobian matrix is usually considered unchanged over the interval, and its value at t_k is used as an approximate substitute:

J(q_t) ≈ J(q_{t_k}), t ∈ [t_k, t_{k+1}]

Once the manipulator passes a singular point between t_k and t_{k+1}, the singular values of the Jacobian matrix change significantly, and this approximation produces a larger terminal error. Employing a smaller sampling interval can reduce the error, but more computing resources are consumed. Neither the SRIM nor the SGIM considers the effect of this approximation.
(ii) Many factors affect the value of the damping coefficient, such as operability, the least singular value, and the matrix condition number. In the final analysis, however, the damping coefficient can be regarded as a mapping function of the Jacobian matrix singular values that is time-varying and nonlinear. Classical constant coefficients and traditional damping functions cannot reflect this highly nonlinear relationship and struggle to provide real-time optimal damping parameters.
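Shortcoming (i) can be reproduced numerically: holding the Jacobian at its t_k value over a whole interval leaves a second-order terminal error, while re-evaluating it over many substeps almost removes that error. The planar 2R arm, link lengths, and step length below are illustrative assumptions, not the manipulator studied later:

```python
import numpy as np

def fk(q, lengths=(1.0, 1.0)):
    """Planar 2R forward kinematics: joint angles -> end-effector (x, y)."""
    a = np.cumsum(q)
    return np.array([np.sum(np.array(lengths) * np.cos(a)),
                     np.sum(np.array(lengths) * np.sin(a))])

def jac(q, h=1e-6):
    """2 x 2 Jacobian of fk by central differences."""
    J = np.zeros((2, 2))
    for i in range(2):
        d = np.zeros(2); d[i] = h
        J[:, i] = (fk(q + d) - fk(q - d)) / (2 * h)
    return J

# Drive the arm at a constant task velocity over one interval [t_k, t_k+1],
# once with the Jacobian frozen at t_k (one big step) and once re-evaluated
# over many substeps; then compare terminal positions with the exact target.
q0 = np.array([0.4, 0.6])
xdot = np.array([0.2, -0.1])
T = 0.5

q_frozen = q0 + np.linalg.pinv(jac(q0)) @ xdot * T    # J held at t_k

q_fine = q0.copy()
n = 500
for _ in range(n):                                    # J updated each substep
    q_fine = q_fine + np.linalg.pinv(jac(q_fine)) @ xdot * (T / n)

target = fk(q0) + xdot * T
err_frozen = np.linalg.norm(fk(q_frozen) - target)
err_fine = np.linalg.norm(fk(q_fine) - target)
# err_fine is far smaller: freezing J over the interval leaves a terminal error
```

The smaller sampling interval removes most of the error, but at the cost of 500 Jacobian evaluations instead of one, which is exactly the trade-off described above.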
To overcome these defects of the SRIM, a pre-factor α is employed to eliminate the error caused by the approximation of the Jacobian matrix, forming an improved singular robust inverse method (ISRIM) with double coefficients α-λ. The influences of introducing α on the terminal error and joint angular velocity are analyzed, and a Lyapunov function is introduced to analyze the stability of the process. On this basis, the data set relating the damping factors to the singular values of the Jacobian matrix is obtained by PSO and then trained by an ANN to obtain a Simulink fitting model; the obtained model is used to predict the α-λ factors, which are essential for the real-time optimization of inverse kinematics. In addition, simulations of the examples confirm the effectiveness of the proposed methods.
The double coefficients introduced in the ISRIM provide more adjustment measures for obtaining inverse kinematics solutions. The real-time prediction of the coefficients by PSO-ANN supports an innovative scheme for the online solution of inverse kinematics. A numerical method is used in the solving process, so the approach is easily extended to multi-axis redundant manipulators. The proposed method is an exploratory study of the inverse kinematics solution of redundant manipulators, laying a theoretical foundation for the precise control and wider use of serial redundant manipulators.
The present work employs a pre-factor α to overcome the mentioned shortcomings of the SRIM and introduces intelligent PSO-ANN algorithms for real-time inverse kinematics solutions. The basic preparation and the SGIM and SRIM models are presented in the second section. The new methodology, the ISRIM, is illustrated in the third section. The modified methods are applied to a seven-axis manipulator for validation and comparison in the fourth section. The article ends with the conclusions in the final section.

Preliminaries
The mapping between the joint space and the operation space is the main subject of kinematics research. 3 For redundant manipulators, this mapping is highly nonlinear; therefore, it must be linearized to meet motion control requirements.
The classical linearization method is to build the mapping equation with the Jacobian matrix. 26 Differentiating with respect to time gives the corresponding linear velocity relationship

ẋ = J(q_t) q̇ (2)

where J(q_t) is an m × n partial-derivative Jacobian matrix. The differential form of equation (2) can be expressed as

dx = J(q) dq

Then, we have

q̇ = J(q)* ẋ

where J(q)* (abbreviated J*) is the inverse of the corresponding Jacobian matrix J(q) (abbreviated J).
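As a concrete illustration of the linearized mapping ẋ = J(q_t)q̇, the Jacobian of a hypothetical planar 3R arm (redundant for a 2-D task) can be formed by finite differences and applied to a joint velocity. The arm geometry and the velocities below are illustrative assumptions:

```python
import numpy as np

def fk(q, lengths=(1.0, 0.8, 0.5)):
    """Planar 3R forward kinematics: joint angles -> end-effector (x, y)."""
    angles = np.cumsum(q)
    return np.array([np.sum(np.array(lengths) * np.cos(angles)),
                     np.sum(np.array(lengths) * np.sin(angles))])

def jacobian(q, h=1e-6):
    """Numeric m x n Jacobian of fk by central differences (m = 2, n = 3)."""
    n = len(q)
    J = np.zeros((2, n))
    for i in range(n):
        dq = np.zeros(n); dq[i] = h
        J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * h)
    return J

q = np.array([0.3, 0.4, 0.2])
qdot = np.array([0.1, -0.05, 0.2])
J = jacobian(q)
xdot = J @ qdot   # linearized terminal velocity, per equation (2)
```

Because m < n, mapping back from ẋ to q̇ requires the generalized inverse J* discussed next.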

Singular value decomposition generalized inverse method
J is a rectangular matrix, whose inversion differs from that of a general square matrix. In 1955, based on the research of Moore, 27 the British mathematician Penrose 28 defined the generalized matrix inverse known as the M-P generalized inverse (or "+" inverse). The M-P inverse provides the inversion of any matrix, including a rectangular matrix, and it is unique; it can be obtained by singular value decomposition. 9 After singular value decomposition, the matrix J can be expressed as

J = U Σ V^T

Furthermore, the corresponding inverse matrix J* can be written as

J* = V Σ^+ U^T

where U is an m × m orthogonal matrix and V is an orthogonal matrix of order n × m; m is the row number of the Jacobian matrix and n is the column number. Σ = diag(σ_1, σ_2, ..., σ_r) and Σ^+ = diag(σ_1^-1, σ_2^-1, ..., σ_r^-1) are both diagonal matrices. σ_1, σ_2, ..., σ_r are the singular values of the matrix J, in decreasing order. r is the rank of J, where r ≤ m.
For a row full-rank matrix, the optimization of formula (2) with the SGIM is the least-norm solution under the least-squares condition, which can be expressed as

min ‖q̇‖₂ subject to min ‖δ‖₂ (7)

The least-norm solution norm and the least terminal error norm corresponding to equation (7) are

‖q̇‖₂² = ∑_{i=1}^{r} (u_i^T ẋ / σ_i)², ‖δ‖₂² = ∑_{i=r+1}^{m} (u_i^T ẋ)² (8)

where u_i is the ith column of U. The other parameters are defined as above.
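A short sketch of the SGIM solution via singular value decomposition; the diagonal Jacobian is a contrived example chosen to expose the 1/σ growth of ‖q̇‖₂ in equation (8):

```python
import numpy as np

def sgim_solution(J, xdot):
    """SGIM: least-norm, least-squares qdot = J^+ xdot via singular value decomposition."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    tol = max(J.shape) * np.finfo(float).eps * s[0]
    s_inv = np.where(s > tol, 1.0 / s, 0.0)   # invert only the nonzero singular values
    return Vt.T @ (s_inv * (U.T @ xdot))

# As the least singular value shrinks, ||qdot|| grows like 1/sigma_min:
xdot = np.array([0.0, 1.0])
norms = []
for s_min in (1.0, 1e-2, 1e-4):
    J = np.array([[2.0, 0.0, 0.0],
                  [0.0, s_min, 0.0]])
    norms.append(np.linalg.norm(sgim_solution(J, xdot)))
# norms ≈ [1.0, 100.0, 10000.0]: the solution becomes impractical near a singularity
```

This is exactly the practicality problem that motivates the damped schemes in the next subsection.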

Singular robust inverse method
The SRIM, also known as the damped least-squares method, 11 changes the relationship between the least-norm and least-squares terms of the optimization target in equation (7) so that they are treated in parallel, and it employs a damping coefficient λ to balance them. Therefore, a new optimization target function G(q̇)_λ is proposed:

G(q̇)_λ = ‖ẋ − Jq̇‖₂² + λ²‖q̇‖₂²
Then, the optimized solution of formula (2) can be expressed as

q̇_λ = J^T (JJ^T + λ²I_m)^-1 ẋ = ∑_{i=1}^{r} (σ_i / (σ_i² + λ²)) (u_i^T ẋ) v_i

The solution norm ‖q̇_λ‖₂ and the terminal error norm ‖δ_λ‖₂ with damping λ can be written as

‖q̇_λ‖₂² = ∑_{i=1}^{r} (σ_i (u_i^T ẋ) / (σ_i² + λ²))², ‖δ_λ‖₂² = ∑_{i=1}^{r} (λ² (u_i^T ẋ) / (σ_i² + λ²))² + ∑_{i=r+1}^{m} (u_i^T ẋ)²

where all parameters are as above.
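The damped least-squares solution can be sketched in the same contrived setting; each component is attenuated by σ_i/(σ_i² + λ²), which is bounded by 1/(2λ), so q̇_λ stays finite at the singularity (the λ value and the Jacobian entries are illustrative):

```python
import numpy as np

def srim_solution(J, xdot, lam):
    """SRIM (damped least squares): qdot = J^T (J J^T + lam^2 I)^-1 xdot."""
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(m), xdot)

# Each component is scaled by sigma_i/(sigma_i^2 + lam^2) <= 1/(2*lam),
# so the joint velocity stays bounded even as sigma_min -> 0:
xdot = np.array([0.0, 1.0])
lam = 0.1
for s_min in (1.0, 1e-2, 1e-4):
    J = np.array([[2.0, 0.0, 0.0],
                  [0.0, s_min, 0.0]])
    qdot = srim_solution(J, xdot, lam)
    assert np.linalg.norm(qdot) <= np.linalg.norm(xdot) / (2 * lam) + 1e-12
```

The price of this bound is the λ-dependent terminal error term in ‖δ_λ‖₂ above, which is why the choice of λ matters so much.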

New singularity-avoidance schemes
In the present work, based on the SRIM, we propose a new scheme, the ISRIM. The optimization of equation (2) in the ISRIM can be expressed as

min_{q̇} G(q̇)_αλ (14)

where a pre-coefficient α is employed in the target function; its purpose is to eliminate the terminal error caused by the approximation of the Jacobian matrix when the manipulator passes singular points. The optimization target function is reconstructed as

G(q̇)_αλ = ‖ẋ − αJq̇‖₂² + λ²‖q̇‖₂² (15)

Since α²J^TJ + λ²I_n is positive definite, G(q̇)_αλ is a positive-definite quadratic form of q̇ and attains its minimum only at q̇ = q̇_αλ. Then, the solution of formula (2) for the optimization (14) is

q̇_αλ = (α²J^TJ + λ²I_n)^-1 αJ^T ẋ (17)

The improved singular robust inverse of the Jacobian matrix corresponding to equation (17) is

J*_αλ = (α²J^TJ + λ²I_n)^-1 αJ^T

Furthermore, J*_αλ can be simplified 7 to

J*_αλ = αJ^T (α²JJ^T + λ²I_m)^-1

Then, for the full-rank Jacobian matrix J, the least-norm solution norm ‖q̇_αλ‖₂ and the least terminal error norm ‖δ_αλ‖₂ are

‖q̇_αλ‖₂² = ∑_{i=1}^{r} (ασ_i (u_i^T ẋ) / (α²σ_i² + λ²))², ‖δ_αλ‖₂² = ∑_{i=1}^{r} (λ² (u_i^T ẋ) / (α²σ_i² + λ²))²

During the trajectory-tracking process, q̇ can ultimately be expressed as

q̇ = J*_αλ (ẋ_d + K_p δ_αλ + K_i ∫ δ_αλ dt), δ̇_αλ = ẋ_d − ẋ

Figure 1. Flowchart of the PSO algorithm. PSO: particle swarm optimization.
where ẋ_d is the desired terminal velocity of the manipulator, ẋ is the actual terminal velocity, and δ_αλ is the terminal position error.
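A minimal numerical sketch of one closed-loop ISRIM iteration, assuming the simplified inverse J*_αλ = αJ^T(α²JJ^T + λ²I_m)^-1 from this section and a PI-style feedback on the terminal error with gains K_p and K_i. The exact feedback form and the toy arguments are assumptions for illustration, not the YL101 model:

```python
import numpy as np

def isrim_inverse(J, alpha, lam):
    """Improved singular robust inverse: J*_al = alpha J^T (alpha^2 J J^T + lam^2 I_m)^-1."""
    m = J.shape[0]
    return alpha * J.T @ np.linalg.inv(alpha**2 * (J @ J.T) + lam**2 * np.eye(m))

def tracking_step(q, J, x, x_d, xdot_d, err_int, dt, alpha, lam, Kp, Ki):
    """One iteration of qdot = J*_al (xdot_d + Kp*delta + Ki*integral(delta))."""
    delta = x_d - x                    # terminal position error
    err_int = err_int + delta * dt     # accumulated integral of the error
    qdot = isrim_inverse(J, alpha, lam) @ (xdot_d + Kp * delta + Ki * err_int)
    return q + qdot * dt, err_int
```

With α = 1 the inverse reduces to the SRIM damped inverse, and with λ = 0 it reduces to the plain pseudoinverse scaled by 1/α, which is how the two coefficients interact.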
According to the stability criterion, 29-34 the stability of the proposed method was analyzed, and the Lyapunov function 29 was defined as

V = (1/2) K_p δ_αλ^T δ_αλ + (1/2) K_i (∫δ_αλ dt)^T (∫δ_αλ dt) (23)

Then, we have

V̇ = K_p δ_αλ^T δ̇_αλ + K_i (∫δ_αλ dt)^T δ_αλ

where K_i and K_p are positive constant coefficients determined by the actual speed. From formula (23), V is always positive definite, so whether V̇ is negative determines the stability of the proposed method. Clearly, the stability requirement that V̇ be negative definite can be satisfied by appropriately selecting the values of the parameters α and λ.
Taking α and λ as simple constants, the obtained solutions maintain good coherence; however, selecting α and λ requires multiple attempts. This method is called the constant improved singular robust inverse method (CISRIM). To obtain better real-time parameters, intelligent algorithms can be used; it is convenient to refer to this method as the intelligent improved singular robust inverse method (IISRIM). According to the previous literature, 6-8,10,13,14 the values of the parameters α and λ that satisfy the objective function formula (14) are functions of the singular values of the Jacobian matrix. This function is nonlinear and multivariable, however, and is therefore suitable for the PSO algorithm. 23-25,35

The core of PSO is the updating of the velocity v and position p_x. Assuming that the total number of particles is N and that each particle is a D-dimensional variable, the classical update functions 35 for the velocity and position of the jth component of the ith particle can be expressed as

v_ij(t+1) = w v_ij(t) + c_1 r_1 (p_best,ij − p_x,ij(t)) + c_2 r_2 (g_best,j − p_x,ij(t))
p_x,ij(t+1) = p_x,ij(t) + v_ij(t+1)

where v is the velocity of the particle, p_x is the position (displacement) of the particle, p_best and g_best are the individual and global best positions, w is the inertia weight, c_1 and c_2 are acceleration constants, and r_1 and r_2 are acceleration weight coefficients.
The objective function represents the optimization purpose of PSO. According to the previous solution and stability analysis, the goal of the α and λ optimization can be set as

min_{α,λ} (‖δ_αλ‖₂ + ‖q̇_αλ‖₂) (27)

According to the literature 25,32 and actual needs, the parameters of the PSO algorithm are set as follows: the total number of particles N is 50; the dimension of each particle is 2; the acceleration constants c_1 and c_2 are 1.0 and 2.0, respectively; and the acceleration weight coefficients r_1 and r_2 are random numbers between 0.0 and 1.0. Based on experience, both parameters α and λ are restricted to the range 0-10.
The flowchart for obtaining the Jacobian matrix singular values and the corresponding α and λ values satisfying equation (27) is shown in Figure 1.
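The PSO search of Figure 1 can be sketched as follows, using the parameter settings above (N = 50, D = 2, c_1 = 1.0, c_2 = 2.0, random r_1 and r_2, search range 0-10). The inertia weight w = 0.7, the iteration count, the toy Jacobian, and the velocity-level stand-in for the objective (27) are illustrative assumptions:

```python
import numpy as np

def pso(objective, dim=2, n_particles=50, bounds=(0.0, 10.0),
        w=0.7, c1=1.0, c2=2.0, iters=100, seed=0):
    """Classical global-best PSO with the velocity/position updates above."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))        # fresh r1, r2 each step
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                 # keep particles in range
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Objective in the spirit of (27): for a fixed toy Jacobian and desired velocity,
# trade a velocity-level error term against the joint-velocity norm over (alpha, lam).
J = np.array([[2.0, 0.0, 0.0],
              [0.0, 1e-3, 0.0]])
xdot = np.array([1.0, 1.0])

def objective(p):
    alpha, lam = p
    m = J.shape[0]
    J_al = alpha * J.T @ np.linalg.inv(alpha**2 * (J @ J.T) + lam**2 * np.eye(m))
    qdot = J_al @ xdot
    delta = xdot - J @ qdot            # velocity residual as error stand-in
    return float(np.linalg.norm(delta) + np.linalg.norm(qdot))

# Lower bound kept slightly above zero so the damped matrix stays invertible.
(best_alpha, best_lam), best_f = pso(objective, bounds=(1e-3, 10.0))
```

In the actual scheme this search is repeated off-line over many Jacobian samples, and each (singular values, α, λ) pair is stored in the data set for ANN training.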
The data set consists of the singular values of the Jacobian matrix and the corresponding damping coefficients, acquired by the PSO method, and it is fitted and trained by the ANN. According to the literature 18-22 and simulation experience, the design of the ANN is as follows: (1) To avoid "over-adaptation" (overfitting) during training, the obtained Jacobian matrix singular-value data set is normalized with the mapminmax function. (2) A back-propagation (BP) neural network is adopted. The input is the six singular values of the Jacobian matrix, so the input layer has six neurons; the output is the corresponding α and λ values, so the output layer has two neurons. A single hidden layer is used; testing showed that 50 neurons give better optimization results. (3) The Levenberg-Marquardt training algorithm is adopted. The other main parameters are set as follows: the number of epochs is 1000, min_grad is 1e−07, mu is 0.001, and max_fail is 6.
The established fitting neural network structure is shown in Figure 2.
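The fitting step can be sketched with a plain 6-50-2 back-propagation network. For a self-contained example, Levenberg-Marquardt is replaced by simple gradient descent, the mapminmax-style normalization is reimplemented directly, and the data set and target mapping are synthetic stand-ins (the real data come from the PSO stage):

```python
import numpy as np

rng = np.random.default_rng(1)

def mapminmax(X, lo=-1.0, hi=1.0):
    """Normalize each column to [lo, hi], like MATLAB's mapminmax."""
    xmin, xmax = X.min(axis=0), X.max(axis=0)
    return lo + (hi - lo) * (X - xmin) / (xmax - xmin + 1e-12)

# Toy stand-in data set: rows of 6 singular values -> (alpha, lam) targets.
X = mapminmax(rng.uniform(0.0, 5.0, (200, 6)))
Y = np.column_stack([X[:, :3].sum(axis=1), X[:, 3:].sum(axis=1)])  # hypothetical mapping

# 6-50-2 network (tanh hidden layer) trained by full-batch gradient descent.
W1 = rng.normal(0, 0.3, (6, 50)); b1 = np.zeros(50)
W2 = rng.normal(0, 0.3, (50, 2)); b2 = np.zeros(2)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    P = H @ W2 + b2                     # predicted (alpha, lam)
    E = P - Y
    gP = 2 * E / E.size                 # d(mse)/dP
    gW2 = H.T @ gP; gb2 = gP.sum(axis=0)
    gH = gP @ W2.T * (1 - H**2)         # back-propagate through tanh
    gW1 = X.T @ gH; gb1 = gH.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

H = np.tanh(X @ W1 + b1)
mse = ((H @ W2 + b2 - Y) ** 2).mean()   # final training error
```

Once trained, the forward pass (two matrix products and a tanh) is cheap enough for real-time prediction of α and λ, which is the property the Simulink model exploits.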
After training (Figure 3), the iterative process completes after 13 iterations, and the obtained mean squared error is 1.3456e−11, which satisfies the requirements.
The training of the neural network and the simulation of the "Numerical simulation and analysis of results" section are implemented with the Neural Network Toolbox in MATLAB R2018b on a Lenovo T460 laptop computer with an Intel(R) Core(TM) i7-6500U CPU @ 2.50 GHz (4 CPUs), approximately 2.6 GHz.
The result of the ANN training is exported to a Simulink (MATLAB R2019) model and added to the overall Simulink model of inverse kinematics. Then, the parameter values of α and λ and the real-time inverse kinematics solutions can be obtained. The whole Simulink solving model is shown in Figure 4.

Numerical simulation and analysis of results
Taking the seven-axis manipulator "YL101" designed by our laboratory as the research object, the optimization performance of the CISRIM and IISRIM was studied and compared with that of the SGIM and SRIM, which are well-known classic methods for obtaining inverse kinematics solutions.
The structure of the manipulator "YL101" is shown in Figure 5, and its Denavit-Hartenberg (DH) 36 structural parameters are listed in Table 1.
Due to the error accuracy requirements, the step size is set to 0.05 mm. No parameters need to be set for the SGIM. The damping coefficient λ of the SRIM is set as λ = λ_0 ‖δ‖₂; according to the recommendation of Chan and Lawrence, 14 the best results of the SRIM are achieved when λ_0 = 1.8. In the proposed CISRIM, after many attempts, better data are obtained with α = 2.1 and λ = 0.35. In contrast to the CISRIM, the other proposed method, the IISRIM, is an intelligent algorithm that does not require parameter settings. In addition, the time of a single step is set to 0.002 s, and K_i and K_p are 1.0 and 0.008, respectively. The simulation results of the four methods are shown in Figures 6 to 11.
The terminal errors of the four methods are compared in Figure 6. When approaching the first singularity point, the SGIM results begin to oscillate and diverge. Owing to the damping factor λ, the SRIM finds the solution at the first singular point, but the terminal error is large and oscillates in subsequent iterations; when approaching the second singularity, the tolerance is again exceeded. The CISRIM and IISRIM, which are based on the SRIM, perform much better, and both successfully obtain the inverse kinematics solutions. The IISRIM's solution converges quickly, with smaller amplitudes and smaller terminal errors, although it oscillates slightly. The solutions of the CISRIM have better coherence, with no fluctuation after stabilization.
The prerequisite for obtaining correct solutions is solving stability. Figure 7 shows the derivative V̇ of the Lyapunov function for the different optimization methods; V̇ must be negative for stability. During the solving process, the V̇ functions of the SGIM and SRIM each produce positive values at times. In contrast, the proposed CISRIM and IISRIM always have a negative V̇ and satisfy the stability requirement.

In addition to the terminal error, the norm of the joint angular velocity is also an important indicator for manipulators, and it should be small enough to be practical. As seen from Figure 8, the joint angular velocity of the SGIM increases and rapidly diverges, exceeding the allowable value. Although the SRIM obtains a finite velocity at the first singularity, it continues to oscillate and diverges at the second singularity. In comparison, the angular velocity curves of the CISRIM and IISRIM almost coincide and maintain low values. The IISRIM's angular velocity shows a small fluctuation, but its amplitude is not large enough to affect the solving process.

Equation (8) indicates that smaller singular values lead to a larger joint angular velocity and less flexibility of the manipulator. Therefore, to obtain stable solutions, the least singular value σ_m of the Jacobian matrix should be as large as possible. The σ_m of the SGIM is the smallest, and the SRIM's σ_m continues to oscillate and diverge; only the proposed CISRIM and IISRIM maintain comparatively large and stable σ_m values.

In terms of the terminal error, joint angular velocity, and least singular value, the two new methods, the CISRIM and IISRIM, have their own advantages and disadvantages. However, the parameters of the CISRIM are obtained by trial and error, whereas the IISRIM requires no parameter adjustment and is therefore more suitable for engineering practice.
Figures 10 and 11 show the track of the manipulator and the trend of the joint angles in tracking the spatial line L_S with the IISRIM. There are no drastic changes, and the IISRIM meets the actual operational requirements.
In tracking the spatial linear trajectory L_S, the ISRIM has a better optimization effect than the traditional SGIM and SRIM, with a more stable solution, smaller terminal error, smaller joint angular velocity, and larger least singular value. The CISRIM and IISRIM, extensions of the ISRIM, have their own advantages: the CISRIM performs better in optimizing the least singular value σ_m, the IISRIM performs better in reducing the terminal error, and there is little difference in joint velocity. Moreover, the IISRIM does not need parameter adjustment: once training is completed, it can easily track the spatial trajectory, which the CISRIM cannot.
For the curve example, the arc-length step size is also set to 0.05 mm. The parameter λ_0 of the SRIM is set to 0.1. The parameters of the CISRIM are α = 3.2 and λ = 3.0, and the other parameters are the same as in the first example. The simulation results are shown in Figures 12 to 17.
In addition to straight lines, tracking a curved trajectory is also a common basic task. As shown in Figure 12, as with the straight line, only the proposed CISRIM and IISRIM correctly solve the trajectory of the entire curve L_C. Comparing the two methods, the IISRIM obviously obtains solutions with a smaller terminal error. The V̇ values of the two methods are shown in Figure 13; neither is positive, so both satisfy the stability requirement.
In addition, the changes in the joint angular velocity and the least singular value of the different optimization methods are compared in Figures 14 and 15. The traditional SGIM and SRIM have larger joint angular velocities, smaller least singular values, and oscillating divergence, which is unsatisfactory. The proposed CISRIM and IISRIM both perform better; furthermore, the CISRIM achieves smaller joint speeds and larger least singular values, as expected.
In tracking the trajectory of the curve L_C, the classic SGIM still performs the worst: it diverges early and has a large terminal error, large joint angular velocity, and small least singular values. Owing to the introduced damping factor λ, the other traditional method, the SRIM, shows some improvement, but the improvement is limited. The proposed CISRIM and IISRIM again show better optimization results, and both steadily find the inverse solution of the spatial trajectory L_C. The CISRIM performs better in terms of joint angular velocity and least singular value optimization, the IISRIM achieves smaller terminal errors, and the overall optimization effects of the two methods differ little. To avoid the parameter adjustment required by the CISRIM, the IISRIM remains the first choice for actual manipulator control.

Figures 16 and 17 show the track of the manipulator and the trend of the joint angles in tracking the spatial trajectory L_C with the IISRIM. There are no drastic changes, and the method meets the actual operational requirements.

Conclusions
The conclusions of this study are as follows: (1) To compensate for the error caused by the Jacobian matrix approximation, which usually exists in an iterative control solving process, the correction coefficient α is employed, and the ISRIM is proposed. On this basis, according to different selections of the parameters, the CISRIM and the IISRIM are proposed. (2) The parameters of the CISRIM are simple constants, which require multiple attempts to obtain. In the IISRIM, PSO is used to obtain the relational data set between the damping coefficients and the singular values of the Jacobian matrix; the data set is then fitted by an ANN, and a Simulink parameter prediction model is obtained, achieving real-time acquisition of the parameters. (3) Compared with the traditional SGIM and SRIM, the simulation results show that the CISRIM and IISRIM obtain a smaller terminal error, lower joint angular velocity, and larger least singular value, which are better indicators for manipulator operation. In addition, the two proposed methods satisfy the Lyapunov stability requirement throughout the solving process. Although the CISRIM and IISRIM have their own advantages in different indicators, the overall difference is not large. To avoid the adjustment of the CISRIM parameters, the IISRIM is the first choice for solving the kinematics of redundant serial manipulators, and it meets the control requirements. (4) Although the IISRIM has better adaptive parameters and obtains a smaller terminal error, some oscillation remains during the solving process, and it is not as good as the CISRIM in joint angular velocity and least singular value optimization. These aspects need further improvement.
Inverse kinematics is the first problem to be faced in the control of serial redundant manipulators. In this article, an improved singular robust inverse kinematics method with double damping factors is given as a special solution to this problem. The composite intelligent algorithm of PSO and a neural network is used to achieve real-time prediction of the damping factors, which provides an innovative scheme for the online solution of inverse kinematics. It lays a foundation for theoretical research in inverse kinematics and for the practical application of serial redundant manipulators.

Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.