Hand–eye calibration and grasping pose calculation with motion error compensation and vertical-component correction for 4-R(2-SS) parallel robot

Due to the motion constraint of the 4-R(2-SS) parallel robot, it is difficult to accurately calculate the translation component of hand–eye calibration based on the existing model solving method. Additionally, the camera calibration error, robot motion error, and invalid calibration motion poses make it difficult to achieve fast and accurate online hand–eye calibration. Therefore, we propose a hand–eye calibration method with motion error compensation and vertical-component correction for the 4-R(2-SS) parallel robot by improving the existing eye-to-hand model and solving method. Firstly, the eye-to-hand model of a single camera is improved and the robot motion error in the improved model is compensated to reduce the influence of camera calibration error and robot motion error on model accuracy. Secondly, the vertical component of hand–eye calibration is corrected based on the vertical constraint between the calibration plate and the end effector of the parallel robot to accurately calculate the pose and motion error in the calibration of the 4-R(2-SS) parallel robot. Thirdly, the nontrivial solution constraint of the eye-to-hand model is constructed and adopted to remove invalid calibration motion poses and plan the calibration motion. Finally, the proposed method was verified by experiments with a fruit sorting system based on the 4-R(2-SS) parallel robot. Compared with random motion and the existing model and solving method, the average time of online calibration based on planned motion decreases by 29.773 s and the average error of calibration based on the improved model and solving method decreases by 151.293. The proposed method can effectively improve the accuracy and efficiency of hand–eye calibration of the 4-R(2-SS) parallel robot and further realize accurate and fast grasping.


Introduction
The automatic sorting of fruits based on robot is of great significance to the automated, large-scale, and accurate development of agricultural production and agricultural product processing. [1][2][3] During the automatic sorting of fruits, the accurate and reliable calculation of grasping pose is the precondition to realize the accurate, fast, and nondestructive grasping control of the robot. 4 Currently, the main methods of grasping pose calculation include infrared image analysis, spectrum analysis, and machine vision. Compared with other methods, machine vision has the advantages of noncontact measurement, good adaptability, cost-effectiveness, and so on. It is more suitable for grasping pose calculation of fruits. 5 However, there is a challenging problem in grasping pose calculation based on machine vision, which is online hand-eye calibration with high accuracy and efficiency. [6][7][8] In addition, the parallel robot, which features strong rigidity, stable structure, and high precision, places greater demands on the accuracy and efficiency of online hand-eye calibration. 9,10 Currently, the main hand-eye systems include eye-to-hand and eye-in-hand according to the pose relationship between the camera and the end effector of the robot. The camera of the eye-to-hand system is fixed outside the robot. 11 The camera pose in the world coordinate system remains unchanged. The pose relationship between camera and end effector changes with the robot motion. 12 The camera of the eye-in-hand system is fixed on the end effector and moves with the robot. 13,14 The pose relationship between camera and end effector remains unchanged. The eye-in-hand system can move the camera close to the object to acquire a clear image, but it is difficult to ensure that the object appears in the field of view of the camera. In addition, the shake of the camera while moving and the image smear caused by acceleration while acquiring images would affect the accuracy of calibration.
The eye-to-hand system with a stationary camera has the advantages of high detecting accuracy and good stability, which is more suitable for the fruit sorting system of a parallel robot with limited work space. 15 The main methods of hand-eye calibration include the measuring method, the active motion calibration method, the self-calibration method, and the calibration method based on a calibration object. Because the measuring method needs the aid of measuring equipment with high accuracy, such as an optical coordinate measurement machine, 16 the cost is high and the process is complicated. The active motion calibration method has strict requirements for the motion accuracy of the vision system. It is not suitable for the eye-to-hand system with a stationary camera. 17 In addition, it is difficult to realize active motion calibration because the robot motion control and trajectory calculation are difficult to achieve accurately and efficiently. 18,19 The self-calibration method only uses the camera parameters and the constraint of the robot to realize hand-eye calibration, which makes the calculation complex and the accuracy difficult to improve. 20 The calibration method based on a calibration object establishes the relationship between camera and robot through the calibration object to reduce the difficulty of pose calculation. It can improve the accuracy and speed of hand-eye calibration effectively. 21,22 So this article uses the calibration method based on a calibration object to calibrate the eye-to-hand system.
Currently, the solving methods for the hand-eye model mainly include the homogeneous transformation method, 23,24 the matrix rearrangement method, 25 the method based on matrix vectorization and Kronecker product, 26 the dual quaternion method, 27 and the Lie group method. 28 In the homogeneous transformation method, the rotation matrix is expressed as a rotation by an angle around an axis, which is obtained by calculating the unit vector and rotation angle of the rotation axis. In the matrix rearrangement method, the rotation matrix is obtained by rearranging matrix elements to transform the implicit equation of rotation into an explicit equation in the hand-eye model. However, the obtained rotation matrix is not an orthogonal matrix. Solving the translation vector based on a nonorthogonal rotation matrix would lead to a greater translation error. The dual quaternion method uses the dual quaternion to represent motion in space. The Lie group method adopts Lie group theory to transform the minimization problem into a least squares fitting problem when solving equations based on multiple sets of observations, so as to obtain a simple and clear solution.
The method based on matrix vectorization and Kronecker product transforms the hand-eye model equations into explicit equations that are easy to solve. Compared with other methods, it can calculate the rotation matrix and translation vector in the hand-eye model simultaneously, which can reduce the transfer error. It has the advantages of good robustness, high accuracy, and high calculation speed. It is more suitable for the online hand-eye calibration and grasping pose calculation of the 4-R(2-SS) parallel robot based on machine vision.
The existing solving methods for the hand-eye model require the robot to perform multidirectional rotation and large-scale translation during calibration. For robots with degree-of-freedom (DOF) constraints, it is difficult to meet this requirement. Therefore, some scholars proposed a calibration method for robots with DOF constraints by limiting the rotation angle of the camera to nonzero values during calibration. However, for robots with rotational constraints, it is difficult to calculate the Z-direction translation component accurately. 29 Some scholars used three orthogonal translations of the robot to linearize the rotation component to reach the solution conditions, but the translation error would also affect the accuracy of model solving. 30 Some scholars constructed constraint matrices to achieve hand-eye calibration of 4-DOF and 6-DOF robots based on rotational and translational constraint information, but the Z-direction translation component has not yet been taken into account. 31 Then some scholars researched robots with rotational constraints, such as the Cartesian robot and the Selective Compliance Assembly Robot Arm (SCARA) robot. The hand-eye calibration model of the robot is solved by setting the camera in an orthogonal projection pose and reducing the number of unknown parameters of the hand-eye calibration matrix based on a two-dimensional hand-eye calibration model. However, the calculation of the Z-direction translation component needs to be corrected based on an additional reference object. The accuracy would be affected by the thickness and machining error of the reference object. 32 Therefore, for the 4-R(2-SS) parallel robot with a 4-DOF rotational constraint, it is difficult to calculate the eye-to-hand model parameters accurately based on the existing hand-eye calibration methods.
Additionally, the motion error of robot, calibration error of camera, and invalid calibration motion poses of robot make it difficult to achieve fast and accurate online hand-eye calibration and grasping pose calculation of 4-R(2-SS) parallel robot. 33 In this article, an online hand-eye calibration method with motion error compensation and vertical-component correction for 4-R(2-SS) parallel robot is proposed and applied to grasping pose calculation. An improved eye-to-hand model of stereovision with motion error compensation is proposed to reduce the influence of camera calibration error and robot motion error on the accuracy of eye-to-hand model. The existing model solving method based on matrix vectorization and Kronecker product is improved to calculate the pose and motion error in hand-eye calibration of 4-R(2-SS) parallel robot with rotational constraint accurately. A calibration motion planning method based on nontrivial solution constraint of eye-to-hand model is proposed for improving the accuracy and efficiency of online hand-eye calibration and further realizing accurate and fast grasping of parallel robot.

Materials and methods
Due to the motion constraint of the 4-R(2-SS) parallel robot, it is difficult to accurately calculate the translation component of hand-eye calibration based on the existing model solving method. Additionally, the camera calibration error, robot motion error, and invalid calibration motion poses make it difficult to achieve fast and accurate online hand-eye calibration. Therefore, we propose a hand-eye calibration method with motion error compensation and vertical-component correction for the 4-R(2-SS) parallel robot by improving the existing eye-to-hand model and solving method.

Model solving method based on matrix vectorization and Kronecker product and limitation analysis
The model solving method based on matrix vectorization and Kronecker product transforms the hand-eye model equations into explicit equations. To calculate the rotation matrix and translation vector in the hand-eye model, the robot needs to perform motions that meet specific requirements.
Model solving method based on matrix vectorization and Kronecker product. To reduce the influence of the measuring error of the calibration plate pose gHb in the end-effector coordinate system on the solving accuracy of the hand-eye calibration model, the hand-eye calibration model AX = XB is constructed. The A represents the pose matrix of the calibration plate in the basic coordinate system of the camera. The wHg represents the pose matrix of the end effector in the basic coordinate system of the robot. The dHw represents the pose matrix of the basic coordinate system of the robot in the basic coordinate system of the camera. Therefore, the matrices A, B, and X can be decoupled into a rotation matrix and a translation vector. The sizes of the rotation matrices R, RA, and RB are 3 × 3 and the sizes of the translation vectors tA, tB, and t are 3 × 1. Taking X as an example, the rotation matrix and the translation vector are, respectively, the R and t in

X = [R t; 0 1]

Therefore, the hand-eye model AX = XB can be expressed by its rotation and translation parts as equation (4):

RA·R = R·RB,  RA·t + tA = R·tB + t

The rotation and translation parts of equation (4) are transformed respectively. The rotation matrices R, RA, and RB are unit orthogonal matrices, so:

(RB ⊗ RA − I9)·vec(R) = 09,  (RA − I3)·t = R·tB − tA

The I9 and I3 are the ninth-order and third-order unit matrices, respectively. The 09 is a 9 × 1 zero matrix. Based on the two motions of the end effector of the parallel robot during hand-eye calibration, we can obtain:

[RB1 ⊗ RA1 − I9; RB2 ⊗ RA2 − I9]·vec(R) = 018    (7)

[RA1 − I3; RA2 − I3]·t = [R·tB1 − tA1; R·tB2 − tA2]    (8)

The RA1, RB1, RA2, and RB2 are the rotation matrices of the two motions, respectively. The tA1, tA2, tB1, and tB2 are the translation vectors of the two motions, respectively. The 018 is an 18 × 1 zero matrix. Equation (7) is transformed into Q·vec(R) = 0 and solved by singular value decomposition. Then the results are introduced into equation (8) to calculate the rotation matrix R and the translation vector t based on the least squares method. Finally, based on equation (1), the pose matrix X of the hand-eye model is solved.
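The solving pipeline above can be sketched numerically. The following routine is an illustrative NumPy implementation (not the authors' code; the function name and tolerances are assumptions): it stacks the Kronecker-product equations, extracts vec(R) from the null space of Q by singular value decomposition, projects the result onto the nearest rotation, and then solves the stacked translation system by least squares.

```python
import numpy as np

def solve_ax_xb(motions):
    """Sketch of the Kronecker-product solution of AX = XB.

    motions: list of (A, B) pairs of 4x4 homogeneous pose matrices,
    one pair per calibration motion. Returns X = [R t; 0 1].
    """
    I3, I9 = np.eye(3), np.eye(9)
    # Rotation part: R_A R = R R_B  =>  (R_B (kron) R_A - I9) vec(R) = 0
    Q = np.vstack([np.kron(B[:3, :3], A[:3, :3]) - I9 for A, B in motions])
    _, _, Vt = np.linalg.svd(Q)
    r = Vt[-1].reshape(3, 3, order='F')   # column-major vec convention
    if np.linalg.det(r) < 0:              # null vector is defined up to sign
        r = -r
    U, _, Wt = np.linalg.svd(r)           # project onto the nearest rotation
    R = U @ Wt
    # Translation part: (R_A - I3) t = R t_B - t_A, stacked over all motions
    M = np.vstack([A[:3, :3] - I3 for A, _ in motions])
    v = np.concatenate([R @ B[:3, 3] - A[:3, 3] for A, B in motions])
    t, *_ = np.linalg.lstsq(M, v, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R, t
    return X
```

Note that the stacked matrix M is only full rank when the calibration rotations are about more than one axis, which is exactly the condition the 4-R(2-SS) robot cannot satisfy.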
Limitation analysis of model solving method based on matrix vectorization and Kronecker product for 4-R(2-SS) parallel robot. If the end effector does not rotate, in which RA = RB = I, the translation part of the decoupled eye-to-hand model equation (4) may be transformed into:

tA = R·tB

Then, by matrix vectorization:

(tB^T ⊗ I3)·vec(R) = tA

Therefore, three equations can be obtained at each motion moment of the end effector. For the nine unknowns of the rotation matrix R, three translations without rotation of the end effector, which are not in the same plane, are necessary. Then the rotation matrix R of the eye-to-hand model can be obtained. The calibration plate is fixed on the top of the end effector. The Z-axes of the calibration plate coordinate system and the end-effector coordinate system are collinear. The end effector of the 4-R(2-SS) parallel robot with 4-DOF can only rotate around the Z-axis, so the calibration plate can only rotate around the Z-axis. Based on the motion constraint, the rotation matrix RA in the pose transformation matrix A is expressed as follows:

RA = RotA(Z, θA) = [cosθA  −sinθA  0; sinθA  cosθA  0; 0  0  1]    (11)

The θA is the angle of rotation around the Z-axis. Equation (11) and the obtained translation vectors tA and tB are introduced into equation (4):

(RotA(Z, θA) − I)·t = R·tB − tA    (12)

The Rot(k, θ) represents a rotation around an arbitrary axis k at angle θ. An arbitrary rotation matrix can be expressed by Rot(k, θ) as follows:

Rot(k, θ) = [kx²versθ + cosθ,  kx·ky·versθ − kz·sinθ,  kx·kz·versθ + ky·sinθ; kx·ky·versθ + kz·sinθ,  ky²versθ + cosθ,  ky·kz·versθ − kx·sinθ; kx·kz·versθ − ky·sinθ,  ky·kz·versθ + kx·sinθ,  kz²versθ + cosθ]    (13)

where versθ = (1 − cosθ) and k = (kx, ky, kz). Equation (12) is transformed based on equation (13):

[cosθA − 1  −sinθA  0; sinθA  cosθA − 1  0; 0  0  0]·t = R·tB − tA    (14)

To make equation (14) have a nontrivial solution, the matrix RotA(Z, θA) − I must not be a zero matrix, so θA ≠ 2kπ. From equation (14), we can see that it is difficult to calculate the Z-direction translation component tz based on the rotation around the Z-axis only, because the third row of the coefficient matrix is zero. So to obtain all translation components, it is necessary for the end effector to rotate at least twice around two nonparallel axes. However, the 4-R(2-SS) parallel robot with 4-DOF cannot meet this motion requirement.
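This limitation can be checked numerically: stacking RotA(Z, θA) − I for several Z-only rotations always leaves the third translation component unconstrained. A minimal check (illustrative, not from the paper):

```python
import numpy as np

def rotz(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Coefficient matrix of the translation system (Rot_A(Z, theta) - I) t = ...
# stacked for three different Z-only rotations: its third column is always zero.
M = np.vstack([rotz(th) - np.eye(3) for th in (0.3, 1.1, 2.0)])
print(np.linalg.matrix_rank(M))   # 2: t_x and t_y are determined, t_z is not
```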
Therefore, in this article, the existing model solving method based on matrix vectorization and Kronecker product is improved to propose an improved solving method with vertical-component correction for the eye-to-hand model. Additionally, an improved eye-to-hand model of stereovision with motion error compensation is proposed. The calibration motion of the parallel robot is planned based on the nontrivial solution constraint of the eye-to-hand model to improve the accuracy and efficiency of hand-eye calibration of the 4-R(2-SS) parallel robot.

Improved eye-to-hand model and model solving method
The eye-to-hand model of a single camera is improved and the robot motion error in the improved model is compensated to reduce the influence of camera calibration error and robot motion error on model accuracy. The vertical component of hand-eye calibration is corrected based on the vertical constraint between the calibration plate and the end effector of the parallel robot to accurately calculate the pose and the motion error in the calibration of the 4-R(2-SS) parallel robot.
Improved eye-to-hand model of stereovision with motion error compensation.
(1) Eye-to-hand model of stereovision. The existing eye-to-hand model is usually based on a single camera in the basic coordinate system of the camera, so it is difficult to improve the calibration accuracy further.
Therefore, this article constructs the hand-eye models of all cameras according to the pose relationship between the cameras in stereovision. Based on the hand-eye model, the two cameras are modeled, respectively, as equation (15), where dHb and cHb are the pose transformation matrices of the calibration plate in the two camera coordinate systems, respectively. The Xd = dHw and Xc = cHw are the pose matrices of the basic coordinate system of the robot in the two camera coordinate systems, respectively. The eye-to-hand model of stereovision can be obtained by transforming equation (15) based on the pose relationship between the cameras, as shown in equation (16).
(2) Eye-to-hand model with motion error compensation. The hand-eye calibration error mainly comes from the pose error of the end effector in the basic coordinate system of the robot and the pose error of the calibration plate in the basic coordinate system of the camera, both of which derive from the motion error of the robot. The motion error of the robot can be considered to be caused by the differential transformation of the robot coordinate system, which can be calculated by the differential motion model of the robot. The differential motion of the robot can be considered to consist of the differential rotation R(δx, δy, δz) and the differential translation T(dx, dy, dz). Taking into account the differential motion dH in the motion of the end effector, the new pose matrix H + dH can be obtained, where dH can be calculated based on equation (17).
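Equation (17) is not reproduced in the text, but the standard differential-motion form it refers to, dH = Δ·H with Δ built from the differential rotation and translation, can be sketched as follows (the function name is an illustrative assumption):

```python
import numpy as np

def apply_differential_motion(H, drot, dtrans):
    """Pose after a small motion error: H + dH, with dH = delta @ H.

    drot   = (dx_r, dy_r, dz_r): differential rotation R(dx_r, dy_r, dz_r)
    dtrans = (dx, dy, dz):       differential translation T(dx, dy, dz)
    """
    dx_r, dy_r, dz_r = drot
    dx, dy, dz = dtrans
    # First-order (skew-symmetric) differential transformation matrix
    delta = np.array([
        [0.0,   -dz_r,  dy_r, dx],
        [dz_r,   0.0,  -dx_r, dy],
        [-dy_r,  dx_r,  0.0,  dz],
        [0.0,    0.0,   0.0,  0.0],
    ])
    return H + delta @ H
```

For small rotations this agrees with the exact transform to first order, which is the approximation differential-motion compensation relies on.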
Although the calibration plate is fixed on the top of the end effector and the motion of the calibration plate is consistent with that of the end effector in the eye-to-hand system of the parallel robot, the pose matrix and the differential motion matrix of the calibration plate and the end effector are different in the different coordinate systems. Therefore, the hand-eye model AX = XB can be improved by motion error compensation based on differential motion, as shown in equation (18). The ΔA is the differential motion of the calibration plate in the camera coordinate system. The ΔB is the differential motion of the end effector in the basic coordinate system of the robot. Due to the structural stability and motion constraint of the 4-R(2-SS) parallel robot with 4-DOF, the Z-axes of the end-effector coordinate system Og-XgYgZg and the basic coordinate system of the parallel robot Ow-XwYwZw can be kept parallel, as shown in Figure 1. Because the calibration plate is fixed on the top of the end effector, the Z-axes of the Og-XgYgZg and the calibration plate coordinate system Ob-XbYbZb can be kept collinear. The Z-direction translation component tz of the pose matrix X can be calculated through the transformation relationships between the other coordinate systems. The pose matrix dHw of the Ow-XwYwZw in the basic coordinate system of the camera Od-XdYdZd obtained by the existing model solving method based on matrix vectorization and Kronecker product is:

dHw = [R t; 0 1]    (19)

According to the orthogonality of the rotation matrix in the pose matrix dHw, the inverse matrix wHd of dHw can be obtained as follows:

wHd = [R^T  −R^T·t; 0 1]    (20)

The Z-coordinate wzd of Od in the Ow-XwYwZw can be calculated according to equation (21).
The vertical component wzg of the end-effector pose in the Ow-XwYwZw can be calculated according to the pose matrix wHg. Then, according to equations (20) and (21), the Z-direction translation component tz of the pose matrix X can be obtained as shown in equation (22). Thus, the vertical component of the pose matrix X obtained by the existing model solving method based on matrix vectorization and Kronecker product is corrected by the calculated tz to realize the accurate solving of the eye-to-hand model for the 4-R(2-SS) parallel robot with 4-DOF.
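The inverse in equation (20) exploits the orthogonality of the rotation block, which avoids a general 4 × 4 matrix inversion. A sketch of this step (the function name is an illustrative assumption):

```python
import numpy as np

def invert_pose(H):
    """inv([R t; 0 1]) = [R.T  -R.T t; 0 1], using R.T == inv(R)."""
    R, t = H[:3, :3], H[:3, 3]
    Hi = np.eye(4)
    Hi[:3, :3] = R.T
    Hi[:3, 3] = -R.T @ t
    return Hi

# From wHd = invert_pose(dHw), the Z-coordinate wz_d of the camera origin in
# the robot base frame is wHd[2, 3]; together with the end-effector height
# wz_g from forward kinematics it yields the corrected t_z of equation (22).
```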
Hand-eye calibration and grasping pose calculation based on improved eye-to-hand model and model solving method for 4-R(2-SS) parallel robot

The nontrivial solution constraint of the eye-to-hand model is constructed and adopted to remove invalid calibration motion poses and plan the calibration motion. The pose transformation matrix of the end effector from the current pose to the optimal grasping pose in the Ow-XwYwZw is calculated. The improved eye-to-hand model and model solving method for the 4-R(2-SS) parallel robot are used for hand-eye calibration and grasping pose calculation.
Fruit sorting system based on 4-R(2-SS) parallel robot. The 4-R(2-SS) parallel robot with 4-DOF for automatic sorting of fruits is shown in Figure 2. The parallel robot body includes a parallel mechanism consisting of 4-R(2-SS) (R represents the revolute joint and S represents the spherical joint) side chains with the same kinematic structure and an end effector, which is a clamping mechanism. It can realize three-dimensional translation and one-dimensional rotation around the Z-axis. According to the requirement of fruit sorting, the eye-to-hand system is adopted, in which the camera is fixed outside the robot and does not move with the end effector. The end effector of the robot moves in the field of view of the camera. To obtain the three-dimensional pose of the fruit, the Kinect camera based on the time-of-flight measuring method is used for acquiring images. 34 The Kinect camera is composed of a color camera and an infrared camera. To achieve online hand-eye calibration, the calibration plate is fixed on the top of the end effector. The parallel robot grasps the fruit based on the fruit pose obtained from the image during fruit sorting. Therefore, it is necessary to calibrate the cameras and the parallel robot to obtain the relationship between the basic coordinate system of the camera and the basic coordinate system of the parallel robot before sorting, so as to realize the transformation from the pose of the fruit in the camera coordinate system to the grasping pose of the end effector in the parallel robot coordinate system.
Motion planning and hand-eye calibration of 4-R(2-SS) parallel robot based on nontrivial solution constraint of eye-to-hand model.
(1) Constructing nontrivial solution constraint of eye-to-hand model. To calculate the pose of the basic coordinate system of the parallel robot in the basic coordinate system of the camera, the model equations constructed from the calibration data obtained from the calibration motions of the parallel robot need to have nontrivial solutions. Therefore, the nontrivial solution constraint of the eye-to-hand model is constructed. It is adopted to remove the invalid calibration motion poses and plan the hand-eye calibration motion of the end effector of the parallel robot for improving the accuracy and efficiency of online hand-eye calibration. The specific constraints are as follows: (a) The pose transformation matrix B between two calibration motions of the end effector needs to satisfy B ≠ I. (b) The rotation angle θA needs to satisfy θA ≠ 2kπ. (c) The matrices A and B need to have common characteristic roots. For the constraint (a), if B = I, there is no change between two calibration motions of the end effector and the model cannot be solved. For the constraint (b), if θA = 2kπ, the translation component of the pose matrix X cannot be calculated based on equation (14). For the constraint (c), the necessary and sufficient condition for AX = XB to have a nontrivial solution is that A and B have common characteristic roots, which is proved as follows: If J and L are the Jordan normal forms of A and B, respectively, and A = TJT⁻¹ and B = HLH⁻¹, then AX = XB ⇔ JZ = ZL, where Z = T⁻¹XH. If the characteristic roots of J and L are λi (i = 1, 2, ..., k) and ηj (j = 1, 2, ..., k), respectively, then the elements on the principal diagonal of the k²-th order square matrix obtained from JZ = ZL based on the Jordan normal form are λi − ηj (i = 1, 2, ..., k; j = 1, 2, ..., k). A nonzero Z exists only if some λi − ηj = 0, that is, only if A and B have common characteristic roots. Then the end effector does small-scale rotation and large-scale translation randomly near the ideal hand-eye calibration positions. The random motion poses of the end effector are chosen according to the three constraints proposed in this article.
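The three constraints can be screened automatically before solving. The following filter is an illustrative sketch (the function name and tolerances are assumptions), where constraint (c) is tested through the eigenvalues of A and B:

```python
import numpy as np

def valid_motion_pair(A, B, tol=1e-6):
    """Nontrivial-solution screening of one calibration motion pair.

    (a) B != I: the end effector actually moved between the two poses.
    (b) theta_A != 2*k*pi: the Z-rotation of A is nonzero.
    (c) A and B share characteristic roots (necessary for AX = XB).
    """
    if np.allclose(B, np.eye(4), atol=tol):               # (a)
        return False
    RA = A[:3, :3]
    theta_A = np.arctan2(RA[1, 0], RA[0, 0])              # Z-rotation angle of A
    if abs(theta_A) < tol:                                # (b): wraps of 2*k*pi
        return False
    eigA = np.sort_complex(np.linalg.eigvals(A))          # (c)
    eigB = np.sort_complex(np.linalg.eigvals(B))
    return bool(np.allclose(eigA, eigB, atol=1e-4))
```

Poses failing any test are discarded before the model equations are assembled, which is how the invalid calibration motion poses are removed in advance.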
As shown in Figure 3, the motion poses of end effector, which satisfy the nontrivial solution constraint, are adopted to construct the improved eye-to-hand model of stereovision with motion error compensation. The improved solving method with vertical-component correction is adopted to solve the eye-to-hand model. Then the accurate and fast hand-eye calibration for 4-R(2-SS) parallel robot with 4-DOF is realized based on the motion planning.
Grasping pose calculation of 4-R(2-SS) parallel robot with 4-DOF. To achieve accurate and stable grasping of fruit, the end effector of the parallel robot needs to move to the position of the fruit and grasp it with an optimal grasping pose in the fruit sorting system based on the 4-R(2-SS) parallel robot with 4-DOF. Assuming that the optimal grasping pose is Hp, to make the end effector change from the current pose Hg to the optimal grasping pose Hp accurately, the Hp needs to be transformed and represented as wHp in the Ow-XwYwZw, as shown in equation (25).
The gHp and dHp are the optimal grasping poses in the Og-XgYgZg and the Od-XdYdZd, respectively. The wHg is the current pose of the end effector in the Ow-XwYwZw. The grasping model equation (25) of the 4-R(2-SS) parallel robot is also improved by the motion error obtained from hand-eye calibration based on differential motion, as shown in equation (26). The current pose wHg of the end effector in the Ow-XwYwZw can be obtained by the forward kinematics of the 4-R(2-SS) parallel robot. The kinematics equation of the 4-R(2-SS) parallel robot with 4-DOF in this article is as follows:

x² + y² − 2(e + l1·cosθi)(x·cosγi + y·sinγi) + (e + l1·cosθi)² + (z + ei·s + l1·sinθi)² − l2² = 0    (27)

where i = 1, 2, 3, 4. As shown in Figure 4, the (x, y, z) is the coordinate of point P2 in the Ow-XwYwZw. The norm of the vector ei = e(cosγi, sinγi, 0)^T, which is the vector from O to Ai, represents the incircle radius difference between the moving and stationary platforms. The γi = (i − 1)π/2 represents the structural angle of the stationary platform. The l1 and l2 represent the lengths of the master arm and the slave arm in side chain i, respectively. The θi represents the angle of the master arm i. The s is the Z-direction translation of the auxiliary platform relative to the moving platform, with s = p(θ/2π), where p is the lead screw pitch and θ is the lead screw angle. Then, according to equation (27), the pose (xg, yg, zg, θg) of the end effector can be calculated, where the c is the distance between P1 and P and the g is the distance between P and Og. Based on equations (1), (2), and (13), the obtained pose parameters (xg, yg, zg, θg) can be transformed into the pose matrix wHg. Then, the pose matrix X obtained from hand-eye calibration is inverted to calculate the pose matrix wHd of the infrared camera in the Ow-XwYwZw.
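The loop-closure equation (27) can be evaluated numerically as a per-chain residual. The sketch below is illustrative only: the coefficient of s, printed as ei in the text, is treated here as a scalar eps_i, which is an assumption about the original notation.

```python
import numpy as np

def chain_residual(p, theta_i, gamma_i, e, l1, l2, s, eps_i=1.0):
    """Residual of the kinematics equation of side chain i.

    p = (x, y, z): coordinate of P2 in Ow-XwYwZw. The first three terms
    equal the squared horizontal distance from (x, y) to the elbow point at
    radius a = e + l1*cos(theta_i) along direction gamma_i; eps_i stands in
    for the printed per-chain coefficient of the screw translation s.
    """
    x, y, z = p
    a = e + l1 * np.cos(theta_i)
    return (x * x + y * y
            - 2.0 * a * (x * np.cos(gamma_i) + y * np.sin(gamma_i))
            + a * a
            + (z + eps_i * s + l1 * np.sin(theta_i)) ** 2
            - l2 * l2)
```

A configuration is kinematically consistent when the residual vanishes for all four chains, which is what the forward-kinematics solution for (xg, yg, zg, θg) enforces.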
The cameras are calibrated by the camera calibration method based on a single-plane checkerboard to calculate the optimal grasping pose dHp. Finally, according to the grasping model of the 4-R(2-SS) parallel robot in equation (26), the pose relationship matrix gHp between the current pose and the optimal grasping pose of the end effector can be calculated to realize the accurate and stable grasping of fruit.
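Combining the calibrated wHd with the forward-kinematics pose wHg, the pose change from the current pose to the grasp can be sketched as below, without the differential-motion compensation of equation (26); the function name is an illustrative assumption.

```python
import numpy as np

def grasp_pose_change(wHg, wHd, dHp):
    """Sketch of the grasping model of equation (25): map the optimal grasp
    dHp from the camera frame to the robot base frame (wHp = wHd @ dHp),
    then express it relative to the current end-effector pose."""
    wHp = wHd @ dHp
    return np.linalg.inv(wHg) @ wHp   # gHp
```

Driving the end effector through the returned gHp moves it from its current pose to the optimal grasping pose.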

Results and discussion
The proposed method was verified by experiments with a self-developed fruit sorting system based on the 4-R(2-SS) parallel robot with 4-DOF. The hardware of the experiment included a calibration plate with 7 × 7 circles with a diameter of 3.5 mm and a spacing of 7 mm, and an Intel Core i5-6600 CPU with 8 GB memory.
Comparison of online hand-eye calibration according to random motion and planned motion based on nontrivial solution constraint of eye-to-hand model

The end effector fixed with the calibration plate does random motion and planned motion based on the nontrivial solution constraint of the eye-to-hand model in the field of view of the Kinect camera, respectively. At each calibration position, the calibration plate images are acquired by the color camera and the infrared camera in the Kinect camera, respectively. According to equation (27), the pose of the end effector at each calibration position is calculated and transformed into the pose matrix wHg. The online hand-eye calibration process based on random motion is as follows:
a. The end effector randomly moves to 15 calibration positions for calibration data acquisition.
b. The hand-eye model equations are constructed to solve the pose matrix X based on the 15 calibration motion poses.
c. If there is no solution to the model equations, the end effector moves again and processes (a) and (b) are repeated. Otherwise, the online hand-eye calibration is completed.
The planned motion based on nontrivial solution constraint of eye-to-hand model is shown in Figure 5. The orange line represents the motion path of end effector. The blue dots represent the ideal calibration positions. The boxes represent the actual calibration positions after the random motion of end effector near the ideal calibration positions. The cylinder represents the work space for fruit sorting in 4-R(2-SS) parallel robot.
The improved eye-to-hand model of stereovision with motion error compensation and the improved solving method with vertical-component correction for the eye-to-hand model are applied to the experiments of online hand-eye calibration based on random motion and planned motion. The average error Err, average calibration time t, and orthogonality of the rotation matrix R in multigroup experiments are presented in Table 1. The average calibration time t includes the motion time of the end effector, the time of calibration data acquisition, and the time of model solving. The average error Err can be calculated based on equations (30) and (31), where the Errq is the error in the qth experiment, i = 1, 2, ..., 15, q = 1, 2, ..., Q, and Q is the number of experiments.
The obtained rotation matrix R is expressed by R = [R1, R2, R3]. Then the orthogonality of the rotation matrix R can be analyzed based on equation (32). The OR1, OR2, and OR3 are the average dot products of the column vectors of R.
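The orthogonality metric of equation (32) reduces, for a single calibration result, to the pairwise dot products of the columns of R; a minimal sketch (the function name is an assumption):

```python
import numpy as np

def column_orthogonality(R):
    """Dot products of the column pairs of R = [R1, R2, R3]; an exact
    rotation matrix gives zero for all three (OR1, OR2, OR3 average these
    values over experiments)."""
    R1, R2, R3 = R[:, 0], R[:, 1], R[:, 2]
    return np.array([R1 @ R2, R1 @ R3, R2 @ R3])
```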
From Table 1, we can see that compared with the random motion, the average error and average time of online hand-eye calibration based on planned motion decrease by 0.096 and 29.773 s, respectively. The planned motion based on the nontrivial solution constraint of the eye-to-hand model has the advantages that the motion distribution in the work space is uniform and the invalid calibration motion poses can be removed in advance. Therefore, the planned motion can improve the calibration accuracy and reduce the calibration time, so as to lay the foundation for accurate and fast grasping of the parallel robot.

Comparison of hand-eye calibration based on existing and improved methods
The end effector fixed with the calibration plate does planned motion based on the nontrivial solution constraint of the eye-to-hand model in the field of view of the Kinect camera. At each calibration position, the calibration plate images are acquired by the color camera and the infrared camera in the Kinect camera, respectively, as shown in Figure 6. According to equation (27), the pose of the end effector at each calibration position is calculated and transformed into the pose matrix wHg. The matrices wHg of some valid calibration motion poses after choosing are presented in Table 2, where the unit of rotation is degree (°) and the unit of translation is mm. At each ideal calibration position, multigroup calibration data corresponding to different poses of the end effector are acquired and saved for testing the hand-eye models and model solving methods.
Based on the acquired multigroup calibration data, the existing eye-to-hand model, the improved eye-to-hand model of stereovision with motion error compensation, the model solving method based on matrix vectorization and Kronecker product, and the improved solving method with vertical-component correction are applied to the hand-eye calibration experiments of the 4-R(2-SS) parallel robot with 4-DOF, respectively. The average error Err, average time t of model solving, and orthogonality of the rotation matrix R are presented in Table 3, which can be calculated according to equations (30) to (32).
From Table 3, we can see that, compared with the existing eye-to-hand model, the average errors of hand-eye calibration according to the improved eye-to-hand model combined with the existing and improved solving methods decrease by 0.002 and 0.108, respectively, and the orthogonality of the obtained rotation matrix R is also better. Compared with the model solving method based on matrix vectorization and the Kronecker product, the improved solving method with vertical-component correction can calculate the Z-direction translation component accurately, and the average error decreases by 151.293. It improves both the orthogonality of the obtained rotation matrix R and the accuracy of hand-eye calibration.
Comparison of grasping pose calculation according to existing and improved hand-eye calibration methods

To verify the advantage of the hand-eye calibration method with motion error compensation and vertical-component correction for the 4-R(2-SS) parallel robot when it is applied to grasping pose calculation, the existing and improved hand-eye calibration methods are applied to experiments of grasping cluster fruit, respectively. The images of the cluster fruit are first acquired by the Kinect camera, as shown in Figure 7, and the optimal grasping pose d H p is calculated. Then, the hand-eye calibration results according to the existing eye-to-hand model, the improved eye-to-hand model of stereovision with motion error compensation, the model solving method based on matrix vectorization and the Kronecker product, and the improved solving method with vertical-component correction are applied to grasping pose calculation, respectively. According to the obtained w H g and w H d, the g H p is calculated based on the grasping model of the 4-R(2-SS) parallel robot in equation (26). Finally, the pose of the end effector is transformed from the current pose to the optimal grasping pose based on g H p to grasp the cluster fruit. The transformed pose of the end effector is then compared with the optimal grasping pose measured by the Leica absolute tracker (Leica AT402) of Leica Geosystems AG and the digital compass (HMR3100) of Honeywell. As shown in Figure 8, the errors of the pose parameters (x, y, z, α, β, γ) are analyzed, respectively. From Figure 8, we can see that, compared with the existing eye-to-hand model and model solving method, the average errors of x, y, z, α, β, and γ according to the improved eye-to-hand model and model solving method decrease by 0.626 mm, 0.712 mm, 152.182 mm, 0.724°, 0.656°, and 0.689°, respectively. The average translation error of the grasping pose decreases by 51.173 mm, and the average rotation error decreases by 0.690°.
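The chaining of the calibrated transforms into the grasping pose g H p can be sketched as a composition of homogeneous matrices. Equation (26) is not reproduced in this section, so the composition g H p = (w H g)^(-1) · w H d · d H p used here is an assumed form consistent with the frames named in the text.

```python
import numpy as np

def grasping_pose(wHg, wHd, dHp):
    """Express the optimal grasping pose in the end-effector frame.

    Sketch of the grasping model: the exact form of equation (26) is
    not shown in this section; gHp = inv(wHg) @ wHd @ dHp is an
    assumed composition of the world-to-gripper, world-to-camera, and
    camera-to-grasp transforms.
    """
    return np.linalg.inv(wHg) @ wHd @ dHp
```

The end effector is then driven by the resulting g H p from its current pose to the grasping pose.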
Because the Z-direction translation component of the pose matrix X is set to zero in the model solving method based on matrix vectorization and the Kronecker product, the error of z is affected by the Z-direction distance between the Od−XdYdZd and Ow−XwYwZw coordinate systems. Therefore, the average z-error of grasping pose calculation according to the hand-eye calibration results of that solving method is larger. The improved eye-to-hand model and model solving method proposed in this article reduce the pose errors and improve the accuracy and efficiency of grasping pose calculation effectively.
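The effect described above can be illustrated numerically: forcing the Z translation of X to zero offsets every transformed point in z by exactly the Z distance between the two frames. The 150 mm offset below is an invented example value, not a measurement from the paper.

```python
import numpy as np

# Hypothetical hand-eye matrix with a 150 mm Z offset between frames
# (illustrative value only).
X_true = np.eye(4)
X_true[2, 3] = 150.0

# Existing solver behavior: the Z translation component is set to zero.
X_zeroed = X_true.copy()
X_zeroed[2, 3] = 0.0

# Any point transformed by the zeroed matrix is shifted in z by the
# full frame offset, which dominates the reported z-error.
p = np.array([10.0, 20.0, 30.0, 1.0])  # homogeneous point
z_error = (X_true @ p - X_zeroed @ p)[2]
```

Here z_error equals the assumed 150 mm offset, mirroring why the measured z-error (152.182 mm) is close to the frame distance itself.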

Conclusions
Because of the 4-DOF motion constraint of the robot, the robot motion error, the camera calibration error, and the invalid calibration motion poses, it is difficult to improve the accuracy and efficiency of online hand-eye calibration of the 4-R(2-SS) parallel robot. Therefore, an online hand-eye calibration method with motion error compensation and vertical-component correction for the 4-R(2-SS) parallel robot is proposed and applied to grasping pose calculation in this article.

In future work, we will explore how to further improve the accuracy and efficiency of online hand-eye calibration of the 4-R(2-SS) parallel robot so that it can be applied to more complex environments. We will also study how to improve the reliability of online hand-eye calibration to lay the foundation for accurate and fast grasping based on machine vision and the parallel robot.

Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.