Research on solving the heading attitude of an airdrop cargo platform based on line features

This study develops an improved line-feature method to accurately estimate the attitude of an airdrop cargo platform during landing. The geometric characteristics of line features are used to improve traditional line feature extraction and to remove locally dense line features from the image, which greatly reduces the number of line features. An improved random sample consensus is then used to remove mismatched line features, which improves the real-time performance of the algorithm and the accuracy of the attitude angle, and compensates for the difficulty of extracting point features, or their low matching accuracy, in the airdrop environment. Finally, a constraint equation is established for the successfully matched line features, and a homography is used to obtain the attitude of the airdrop cargo platform. The method meets the requirements for accurate attitude calculation of the airdrop cargo platform. Experiments show the significance and feasibility of the line-feature-based heading and attitude calculation technique for the airdrop cargo platform, which has good application prospects.


Introduction
With the development of large-scale military transport aircraft, the airdrop of heavy equipment and materials has become a basic capability of such aircraft. Airdrop equipment uses the airdrop cargo platform as its carrier. To achieve a smooth landing of the airdrop cargo platform, it is important to calculate its three-axis attitude angle in real time to avoid overturning accidents. [1][2][3][4] Attitude estimation measures the position and attitude of an aircraft through one or more sensors, such as an inertial measurement unit (IMU), global positioning system (GPS), vision, and laser. Researchers have done a great deal of work on multi-sensor data fusion for estimating aircraft attitude. [5][6][7][8][9][10][11] With the development of computer vision, vision-based attitude estimation has attracted increasing attention. During the landing of the airdrop cargo platform, haze or clouds may appear at high altitude, the clarity of the collected images is reduced, and ground point features are masked and difficult to identify. However, line features can still be identified from the local tracking direction in the images and better represent the image information. Line features are therefore more stable in complex environments, so algorithms based on line features have attracted more and more attention. [12][13][14][15][16] Xu et al. [16][17][18] independently estimated the landing attitude of an unmanned aerial vehicle (UAV) using the actual shape of a ground cooperative target and shadow line features, and verified that attitude estimation using line features has higher accuracy and reliability than point features. Li et al. 19 proposed a point-line stereo visual simultaneous localization and mapping (SLAM) system with semantic invariants.
The system improved the accuracy of line feature matching by fusing line features with semantically invariant image information. Yu et al. 18 proposed a non-iterative method to solve the perspective-n-line problem: the objective function of the nonlinear least squares was expressed through a Cayley-parameterized rotation matrix, a third-order equation was derived from the optimality conditions, and the camera attitude was then solved directly using the Gröbner basis technique. Zuo et al. 20 used Plücker coordinates to represent line features in the front end of a SLAM system and a minimal orthogonal parameterization of line features in the back-end optimization. Pumarola et al. 21 added line features to ORB-SLAM and put forward a method of system initialization using line features. Wang et al. 22 added the line angle to the construction of the line feature re-projection error. He et al. 23 put forward a point and line feature visual-inertial odometry (PL-VIO) system by adding line features to visual inertial navigation system (VINS)-Mono. Gomez-Ojeda et al. 24 proposed using geometric constraints to construct a method that minimizes the L1 norm to complete line feature matching; compared with traditional descriptor-based methods, this improved the tracking efficiency and matching quality of line features. Wang et al. 24,25 fused point and line features in SLAM and established two projection models to estimate the motion state of the camera.
For attitude acquisition of the airdrop cargo platform, only the obvious features in the scene need to be extracted as the ground reference; there is no need to extract every feature in the scene. Detecting line features with the line segment detector (LSD) 26 and describing them with the line band descriptor (LBD) 27 is a mainstream approach. LSD directly groups adjacent pixels into line segment regions according to the consistency of pixel gradient directions and obtains sub-pixel-precision detection results in linear time. Although LSD achieves good results in speed and accuracy, it has shortcomings. Because it has no screening and merging mechanism during extraction, it collects a large number of similar line features in locally dense areas of the image and is prone to over-segmentation of line features. Too many short line features greatly increase the time for line feature detection and matching and also increase the probability of mismatching, which inevitably affects the efficiency of the algorithm.

System overview
To reduce the number of line features, a gradient density filter is used to eliminate locally dense gradient regions and avoid a large number of invalid features, thus improving the efficiency of feature extraction and matching and reducing the computational complexity and mismatching rate of the algorithm. The improved LSD restricts the length of line features, which improves the quality of the extracted line features, increases the probability of the same features appearing in adjacent frames, and saves the time spent detecting many short line features. The improved LSD groups the remaining line features, roughly screening them by length, angle, and horizontal and vertical distance, and placing adjacent line features in the same group. In the line segment merging stage, angle correction, endpoint screening, and a recheck of the merged angle are introduced as merging criteria, and line feature merging is completed. The LBD then matches the line features. The same line has a corresponding geometric relationship (parallelism, approximate length, and overlap) between two adjacent frames; this relationship effectively reduces the mismatching rate and saves the time random sample consensus (RANSAC) spends eliminating mismatches. Finally, a constraint equation is established for the successfully matched line features, the homography matrix is solved, and the triaxial attitude angle of the airdrop cargo platform is obtained. The system overview is shown in Figure 1.

Eliminate local dense line features
Line features usually appear in dense gradient areas. If the proportion of high-gradient pixels in a target area is too high, the area is considered a dense line feature area and is eliminated to reduce feature mismatching.
In the airdrop environment, there may be dense grass or soil texture on the ground, and a large number of line features are detected in these densely textured areas. An excessive density of line features in the same area usually leads to mismatching. To eliminate the line features of locally dense gradient regions, the local gradient filter 28 is used to filter out such regions. According to the different airdrop environments, an adaptive pixel gradient density threshold is set as the filtering standard.
The pixel gradient density is defined as the percentage of pixels with higher gradients in a unit pixel-gradient area, and it is used to measure whether the area is feature-intensive. Let IG_ij be the gradient value of pixel (i, j); when IG_ij is greater than or equal to the preset intensity threshold IG, the value o_ij is marked as 1, otherwise 0. The gradient density of the local target area, ρ_ij, is the sum Σ_{i=1}^{M} Σ_{j=1}^{M} o_ij over the M × M area centered on pixel (i, j), normalized by the area, which represents the percentage of pixels whose gradient exceeds the set threshold. Initially, M = 5 is set to look for small high-gradient dense areas in the image, and M is then gradually expanded up to M_l = 30. To suppress pixel areas with too high a gradient density, filtering is performed to reduce the dense feature areas in the image. ρ_gd is the average gradient density of the whole image; this adaptive threshold ρ_gd is obtained per scene to measure the line density of the whole image. Areas with gradient density ρ_ij greater than ρ_gd are regarded as line-dense, invalid areas and are not processed further; only the line feature areas where ρ_ij is less than ρ_gd are retained. After the filter is applied, all line features in an eliminated area are replaced by the contour edges of that area, which avoids a large number of invalid features, reduces the calculation cost, and improves system accuracy.
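The density computation above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the M × M window sums are computed with an integral image, the gradient threshold is an illustrative parameter, and the image-wide mean density stands in for the adaptive threshold ρ_gd.

```python
import numpy as np

def gradient_density_mask(gray, grad_thresh, window=5):
    """Flag pixels in locally dense gradient regions (a sketch of the
    paper's filter; grad_thresh and window are illustrative).
    Returns a boolean mask: True = region too dense, candidate for removal."""
    # Simple central-difference gradient magnitude.
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    # o_ij = 1 where the gradient meets the intensity threshold IG.
    o = (mag >= grad_thresh).astype(float)
    # Integral image for fast M x M window sums of o_ij.
    pad = np.pad(o, ((1, 0), (1, 0)))
    ii = pad.cumsum(0).cumsum(1)
    h, w = o.shape
    r = window // 2
    # Window bounds, clipped at the image borders.
    y0 = np.clip(np.arange(h) - r, 0, h)
    y1 = np.clip(np.arange(h) + r + 1, 0, h)
    x0 = np.clip(np.arange(w) - r, 0, w)
    x1 = np.clip(np.arange(w) + r + 1, 0, w)
    Y0, X0 = np.meshgrid(y0, x0, indexing="ij")
    Y1, X1 = np.meshgrid(y1, x1, indexing="ij")
    count = ii[Y1, X1] - ii[Y0, X1] - ii[Y1, X0] + ii[Y0, X0]
    area = (Y1 - Y0) * (X1 - X0)
    density = count / area                 # rho_ij: fraction of high-gradient pixels
    return density > density.mean()        # adaptive threshold: image-wide average
```

In the paper the window is grown from M = 5 up to M_l = 30; the sketch uses a single fixed window for brevity.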

Improved line features extraction algorithm
The line features detected by LSD contain many short line features, so their length needs to be constrained. Short line features are filtered out, line features occupying more pixels are retained, the remaining line features are grouped, and adjacent effective short line features within a group are merged and connected, as shown in Figure 2. These steps improve the quality of the line features and make them evenly distributed in the image, providing stable line features.

Line features grouping
Line features grouping first filters by length, and then by angle and distance. The angle calculation between adjacent line features is relatively simple, so it is used as the first screening condition. The horizontal and vertical distance tests involve only additions and subtractions of absolute values, which makes the screening process efficient. The process is shown in Figure 3. The geometric relationship between the inner line features of group G_L1 and the longest line feature L_1 is shown in Figure 4.
Line features length screening. Long line features can be detected across multiple frames and therefore have a longer life span. Moreover, the length of a line feature is almost unchanged between two adjacent frames, so it is considered a more stable feature. The length screening criterion is constrained according to the size of the images collected by the airdrop cargo platform, the depth of the scene, and the number of extracted line features, and an appropriate threshold is set to constrain the length of the line features. The image size determines the length of effective line features: the higher the resolution, the more pixels a valid line feature should contain. Short line features are removed based on the number of pixels they contain. Let L_min be the lower limit of line feature length in the detected image, let the height and width of the image be h_i and w_i, respectively, and take the shortest side of the image as the reference for determining the effective line feature length.
h is the ratio coefficient of the long line features relative to the entire image and is set to 0.245. The number of line features extracted per frame is kept between 20 and 300, so as to support the line feature merging in the subsequent steps.
Line features angle screening. The line features retained after length filtering are sorted in descending order of pixel length to obtain L_i = {L_1, L_2, …, L_n}, where L_n represents the n-th line feature. Longer line features come from areas with large continuous gradient differences, which makes the extracted line features more accurate and stable, so grouping starts from the longest line feature L_1.
To represent the line features conveniently, they are expressed as vectors in the image. The starting point A_i(x_Ai, y_Ai) of line feature L_i points to the endpoint B_i(x_Bi, y_Bi), and its angle with the horizontal is θ_i. Likewise, the starting point A_j(x_Aj, y_Aj) of line feature L_j points to the endpoint B_j(x_Bj, y_Bj), and its angle with the horizontal is θ_j. After angle screening, the candidate line feature group L_a with close angles is obtained, where θ_s is the set angle threshold.
Line features plane distance screening. The plane distance is screened using the endpoints of different line features in the group. After horizontal-distance screening, the candidate line feature group L_hd is obtained; after vertical-distance screening, the candidate line feature group L_vd is obtained. d_s is a screening threshold that measures the closeness of the horizontal and vertical distances of the line features. We set the distance threshold d_s to 2 to obtain a line segment group G_L1 = {L_2, L_3, …, L_n} close to the long line feature L_1, so that the filtered segments are guaranteed to lie near L_1, providing high-quality line features for the subsequent merging of line features.
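As an illustration, the length, angle, and distance screening stages can be sketched as follows. The function names are our own, the endpoint-proximity test stands in for the paper's horizontal/vertical distance screening, and the angle threshold is illustrative; the paper's stated values h = 0.245 and d_s = 2 are used as defaults.

```python
import math

def seg_angle(seg):
    """Angle of a segment ((xA, yA), (xB, yB)) with the horizontal."""
    (xa, ya), (xb, yb) = seg
    return math.atan2(yb - ya, xb - xa)

def min_length(img_h, img_w, ratio=0.245):
    """L_min follows the shortest image side, scaled by the ratio coefficient h."""
    return ratio * min(img_h, img_w)

def screen_group(longest, candidates, angle_thresh=math.pi / 90, dist_thresh=2.0):
    """Collect candidates near-parallel and endpoint-adjacent to `longest`
    (a sketch; the endpoint-proximity check approximates the paper's
    horizontal and vertical distance screening with threshold d_s)."""
    a1 = seg_angle(longest)
    group = []
    for seg in candidates:
        # Angle screening, direction-agnostic.
        da = abs(seg_angle(seg) - a1)
        da = min(da, math.pi - da)
        if da > angle_thresh:
            continue
        # Distance screening: some endpoint pair within d_s horizontally AND vertically.
        close = any(abs(px - qx) <= dist_thresh and abs(py - qy) <= dist_thresh
                    for (px, py) in longest for (qx, qy) in seg)
        if close:
            group.append(seg)
    return group
```

A candidate must pass both tests, mirroring the angle-then-distance order described above.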

Line features merging
After line feature grouping, we obtain a grouping G_L1 that matches the longest line feature L_1 in the image. The line features in this grouping must be merged to reduce the number of line features in the image and improve their quality. During merging, further screening of the line features in the group is required. The screening steps include angle correction, endpoint screening, merging, and finally rechecking the angle of the newly merged line feature, as shown in Figure 5. The geometric relationship of merging L_1 and L_i into L_z is shown in Figure 6.
Angle correction. The angle deviation, distance, and length between the inner line features of group G_L1 and L_1 are important factors for the new line feature to be merged. These factors modify the angle threshold within G_L1: different weighting coefficients are added to balance the influence of each factor, and the angle correction threshold is obtained. Here 1 − u is the weighting coefficient of the angle between the two line features, with 0 < u < 1, where u = θ_z/θ_1 is the proportional relationship between the included angle θ_z of the new composite line feature and θ_1; and 1 − v is the weighting coefficient of the distance between the two line features, with 0 < v < 1, where v = d_s/l_1 is the proportional relationship between the distance between the two line features and the length of the longest line feature L_1. From the analysis of these factors, the angle correction value of the grouping G_L1 relative to the long line feature is obtained. δ is the adaptive proportional coefficient function, where equation (12) is fitted from experimental curve data. The smaller the angle between a candidate line feature and L_1, the shorter its length, and the smaller its distance to L_1, the larger the adaptive coefficient δ and the greater the mergeability. θ_Z is the angle screening threshold that measures the similarity of line feature angles; its value is set to π/90 through experimental analysis. After angle screening, the candidate line feature groupings G_{L_1−i} are obtained.
Endpoint screening. Restore L_1 into the screened line feature group G_{L_1−i} to form a new line feature group, and filter the endpoints of all line features in the group. First, select the line feature L_i closest to L_1 and calculate the average position of the endpoints of the two line features, so as to obtain the head endpoint (x_Az, y_Az) and the tail endpoint (x_Bz, y_Bz).
Calculate and compare the distance d = √(x² + y²) between each endpoint and the origin; take the endpoint with the minimum distance as the first endpoint and the endpoint with the maximum distance as the second, thereby obtaining the endpoints of the synthesized new line feature. According to experience, at least one of the endpoints of the newly synthesized line feature comes from the longest line feature.
Check the included angle of the synthetic line feature. The angle between the newly synthesized line feature L_Z and the horizontal direction is θ_Z. If the absolute difference between θ_Z and the included angle θ_1 is less than or equal to θ_s/2, the composite line feature is considered successfully merged and replaces the original line features L_i and L_1. If the absolute difference is greater than θ_s/2, the synthesized line feature L_Z deviates from the longer line feature L_1, so L_Z is discarded. Repeat the above steps until all line features in the group have been filtered.
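A minimal sketch of the endpoint rule and the final angle recheck described above (our own simplification: the `angle_check` parameter stands for the θ_s/2 tolerance, and its default value is illustrative):

```python
import math

def merge_pair(l1, li, angle_check=math.pi / 180):
    """Merge two near-collinear segments ((xA, yA), (xB, yB)) into one.
    Endpoint rule: keep the endpoints nearest to and farthest from the origin.
    Returns the merged segment, or None if it drifts off l1's direction."""
    # Sort all four endpoints by distance to the image origin.
    pts = list(l1) + list(li)
    pts.sort(key=lambda p: math.hypot(p[0], p[1]))
    merged = (pts[0], pts[-1])        # nearest endpoint, farthest endpoint

    def ang(seg):
        (xa, ya), (xb, yb) = seg
        return math.atan2(yb - ya, xb - xa)

    # Recheck: the merged angle must stay close to l1's angle (theta_s / 2).
    da = abs(ang(merged) - ang(l1))
    da = min(da, math.pi - da)
    return merged if da <= angle_check else None
```

If the recheck fails, the candidate is discarded and the next segment in the group is tried, as in the procedure above.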
To verify the performance of the algorithm, LSD and the improved algorithm are used to extract line features in two different scenes for comparison, as shown in Figure 7. For scene 1, the number of line features extracted from images at different heights is also counted, which demonstrates the effectiveness of the improved algorithm, as shown in Table 1.
It can be seen from Figure 7 that there are more long line features (roads and riversides) in scene 1 and dense weeds in scene 2. In both scenarios, this article uses the geometric characteristics of lines to filter the line features, which merges shorter lines into long line features, while the local dense filter removes the surrounding short line features (such as the boxed areas of Figure 7(e) and (f)). This effectively reduces the number of line features extracted from the image and makes the line features more stable. Some line features remain discontinuous in the image because too many dense short line features were removed in those areas, leaving similar long line features too far apart to be merged. However, the merged long line features are stable enough in the image to be matched successfully, effectively improving the accuracy of the heading attitude of the airdrop cargo platform.
According to Table 1, the number of line features extracted by the improved algorithm decreases at all heights, with a maximum decrease of 27.24% relative to the number extracted by LSD. When the airdrop cargo platform collects ground features, the same feature is displayed differently in the image at different landing heights. The higher the airdrop cargo platform is above the ground, the wider the field of view of the airdrop camera and the more features on the ground; however, some line features appear smaller in the image and are rejected as short line features. As the height decreases, the line features become clearer and more stable in the image and are no longer eliminated, so the fewest line features are eliminated at 30 m above the ground.

Improved RANSAC to remove mismatching
The accuracy of line-feature-based attitude estimation for the airdrop cargo platform depends on the accuracy of feature matching: many wrong matches introduce errors into the attitude calculation. Therefore, before the attitude calculation, RANSAC 29,30 is used to remove wrong matches in the image. RANSAC eliminates outliers through step-by-step iteration; if there are too many wrongly matched line features in the observed data, the iteration accuracy decreases. The geometric constraints between line features are therefore used to pre-screen out line features that cannot be matched successfully. The movement of the airdrop camera between two adjacent frames is small, so the constraint can be built from the geometric characteristics of the line features: for example, the position, length, and angle of the line features in two adjacent frames should be approximately the same.
We use the parallel relationship between the two line features, the length approximation and the overlap to eliminate the mismatch of line features, so as to remove the unmatched line features in advance and reduce the number of iterations of the RANSAC.
Matching line features must satisfy three geometric constraints: (1) matching line features have a parallel relationship; (2) matching line features have length similarity; (3) matching line features have length overlap. To test whether the improved algorithm can effectively delete mismatches, the line features of the images of scene 1 are extracted and matched, as shown in Figure 8.
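The three geometric constraints can be sketched as a single predicate applied to each candidate pair before RANSAC; the tolerance values below are illustrative, not the paper's exact settings.

```python
import math

def plausible_match(l_prev, l_curr, ang_tol=math.radians(5),
                    len_ratio=0.8, min_overlap=0.5):
    """Pre-screen a candidate line match between adjacent frames (a sketch).
    Rejects pairs violating parallelism, length similarity, or overlap."""
    def ang(seg):
        (xa, ya), (xb, yb) = seg
        return math.atan2(yb - ya, xb - xa)

    def length(seg):
        (xa, ya), (xb, yb) = seg
        return math.hypot(xb - xa, yb - ya)

    # 1) Parallelism: inter-frame camera motion is small.
    da = abs(ang(l_prev) - ang(l_curr))
    da = min(da, math.pi - da)
    if da > ang_tol:
        return False
    # 2) Length similarity.
    la, lb = length(l_prev), length(l_curr)
    if min(la, lb) / max(la, lb) < len_ratio:
        return False
    # 3) Overlap of the projections onto the x-axis (y-axis for steep lines).
    axis = 1 if abs(math.cos(ang(l_prev))) < 0.5 else 0
    a0, a1 = sorted(p[axis] for p in l_prev)
    b0, b1 = sorted(p[axis] for p in l_curr)
    inter = min(a1, b1) - max(a0, b0)
    return inter >= min_overlap * min(a1 - a0, b1 - b0)
```

Pairs failing any test are discarded before RANSAC runs, which is what reduces the iteration count.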
It can be seen from Figure 8 that the improved algorithm reduces the number of matched line features and effectively reduces mismatches during the matching process. When LBD directly matches the screened effective line features, the matched line features are distributed in all directions. Figure 8(b) shows the result of using RANSAC to remove mismatches: most mismatches are deleted, but a few remain with inconsistent line feature directions. The improved algorithm in this article constrains the geometric characteristics of the line features and further removes mismatches. It can be seen from Figure 8(c) that the matched line feature pairs point in the same direction, which greatly reduces the number of mismatches.
As shown in Table 2, because there are many line features in the scene, the matching accuracy without mismatch screening is 41.18%, with a large number of mismatches. The matching accuracy after RANSAC deletes mismatches is 60.43%, and the matching accuracy after the additional geometric constraint screening reaches 75%. The improved RANSAC algorithm obtains accurate matching results, which greatly improves the accuracy of the attitude calculation of the airdrop cargo platform.

Line features constraint equation
In the image sequence, each frame contains a large number of line features. It is necessary to establish the constraint equation of the line features to solve the homography matrix H and decompose H to obtain the final attitude of the airdrop cargo platform.
In the i-th image, the linear equation of the a_i-th line feature is expressed as bx + cy + 1 = 0. The line equation is converted into matrix form and expressed as a constraint equation, which can be further rewritten accordingly. According to the homography, two matched line features should satisfy the constraint λx_{i+1} = H_L x_i. Multiplying both sides of this equation on the left by the normal vector q^T and using q^T x_{i+1} = 0, formula (25) reduces to formula (26). From formulas (21) and (26), formula (27) can be deduced, and transposing both sides of formula (27) gives formula (28). To eliminate the scale factor λ, both sides of formula (28) are cross-multiplied to obtain formula (29). The homography H directly describes the transformation between the image coordinates p_1 and p_2, and it is defined in terms of the rotation matrix, the translation vector, the camera intrinsic parameters, and the plane parameters. There are five unknowns, so if more than five line features are successfully matched, each element of the homography H can be solved from the above equations, and H can be decomposed to obtain the corresponding rotation matrix R and translation vector t.
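As a hedged sketch of solving the homography from matched lines (our own direct linear transform formulation, not necessarily the paper's exact parameterization): under a point homography p_2 ∼ H p_1, a line l_1 maps as l_1 ∼ H^T l_2, so each matched pair contributes two independent linear constraints on the entries of H via a cross product, and with enough matches H is recovered up to scale by SVD.

```python
import numpy as np

def homography_from_lines(lines_a, lines_b):
    """DLT sketch: recover H (points p_b ~ H p_a) from matched line
    coefficients, using l_a ~ H^T l_b. Each line is (a, b, c) with
    a*x + b*y + c = 0; at least 5 non-degenerate matches are needed."""
    rows = []
    for la, lb in zip(lines_a, lines_b):
        t = np.asarray(la, float)   # target: l_a ~ M @ l_b, with M = H^T
        s = np.asarray(lb, float)
        # cross(t, M @ s) = 0 gives two independent rows in the entries of M.
        rows.append(np.concatenate([np.zeros(3), -t[2] * s, t[1] * s]))
        rows.append(np.concatenate([t[2] * s, np.zeros(3), -t[0] * s]))
    # Null vector of the stacked system = vec(M), via the last right singular vector.
    _, _, vt = np.linalg.svd(np.asarray(rows))
    M = vt[-1].reshape(3, 3)
    return M.T                      # H = M^T, defined only up to scale
```

The recovered H is defined only up to scale; in OpenCV, `cv2.decomposeHomographyMat` can then extract the candidate rotation R and translation t, as the text describes.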

Experiments
In this part, experiments verify the effectiveness and real-time performance of the line-feature-based method for solving the heading attitude of the airdrop cargo platform. The improved algorithm in this article and traditional line feature extraction and matching (LSD and LBD) are compared with point features, and the deviation between the estimated values output by the vision algorithm and the true values output by the IMU is calculated. Finally, the processing-time performance of the vision algorithm is discussed in the Time performance experiment section.

Materials and equipment
Since dropping the airdrop cargo platform requires military assistance and is confidential, the test data cannot be disclosed. Therefore, a DJI Mavic Air 2 UAV is used to simulate the landing process of the airdrop cargo platform after the parachute opens. The UAV is equipped with a camera (model FC6310; DJ-Innovations), a GPS/GLONASS dual-mode satellite positioning module, an IMU, and a barometric altimeter. The angular rate sensor integrated in the IMU module (model ADXRS620; Analog Devices) features vibration suppression and high impact resistance, and its full-range gyroscope of 300°/s provides accurate three-axis attitude values for the experiments. The barometric altimeter and IMU output data are provided by the DJI Assistant software. The position of the IMU relative to the camera reference system is represented by the transformation matrix T^CAM_IMU, as shown in Table 3. Since the integration of the vision algorithms with flight control is not yet fully implemented and is under further study, the following experimental procedure only captures aerial images through the UAV onboard monocular camera, followed by simulation experiments on a computer. All experiments are conducted on a computer equipped with an AMD Ryzen 7 4800H 4.20 GHz CPU (AMD, California, USA), an NVIDIA GeForce RTX 2060 GPU (Nvidia, California, USA), and 16 GB of memory. The CPU programs are executed in a single thread. The image processing is based on OpenCV 3.3.0, programmed in Microsoft Visual Studio 2015.
In the experiment, the image sequence of scene 1 is evaluated. Scene 1 contains a large number of linear features, such as riversides, roads, and house edges, which provide many stable line features. During the landing of the UAV, statistics are collected at different landing heights and the experimental results are recorded. The roll ϕ, pitch θ, and yaw ψ of the UAV calculated by the visual algorithm are used as the estimated values, and the attitude measured by the airdrop IMU is used as the true value. Finally, the root-mean-square error (RMSE) between the solved attitude and the three-axis attitude of the IMU is calculated to measure the deviation between the observed value and the true value. The RMSE is taken as the accuracy standard, and the average RMSE over five experiments is used as the final angle error result of the image sequence. Equation (39) gives the calculation formula of the RMSE, where n is the number of image frames processed, q_i represents the real attitude of the airdrop cargo platform, and q̂_i represents the three-axis attitude calculated by the visual algorithm; trans denotes the root-mean-square of the Euclidean distance.
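The evaluation described above reduces to two short helpers (a sketch: one per-axis RMSE in the spirit of equation (39), and an average of the per-run RMSE values over the five repeated experiments):

```python
import math

def rmse(estimated, truth):
    """RMSE between the vision-estimated attitude series and the IMU
    reference for one axis (following equation (39) in the text)."""
    assert len(estimated) == len(truth) and len(truth) > 0
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(estimated, truth)) / len(truth))

def final_error(per_run_rmse):
    """Average per-run RMSE over repeated experiments (five in the text)."""
    return sum(per_run_rmse) / len(per_run_rmse)
```

The same computation is applied independently to the roll, pitch, and yaw series.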

Validity experiment of image geometric transformation
During the landing of the airdrop cargo platform, the acquired images undergo affine, perspective, and scale transformations. This experiment is designed to prove the effectiveness of the attitude calculation under geometric transformation. The experimental image is taken at a height of 120 m above the ground, and the affine and perspective transformation functions in OpenCV are used to transform it. The scale transformation image is taken at a height of 160 m above the ground. The image processing results are shown in Figure 9.
It can be seen from Figure 9 that the affine transformation and perspective transformation of the image affect the distribution of line features (angle, length, and position in the images), which makes some line features unstable, resulting in a small reduction in the number of line features detected in the image. However, the longer line features are not affected by geometric transformation. In the image geometric transformation, the improved algorithm can still detect abundant line features, which is enough to support the attitude calculation of the airdrop cargo platform. It can verify the robustness of the improved line features in the scene and make the detection line features more reliable.

Attitude estimation experiment based on line features
To verify the effectiveness and accuracy of the visual algorithm for UAV attitude estimation based on line features, the UAV landed from a height of 160 m above the ground; the flight distance is long, and the vertical landing speed is about 1.6 m/s. The camera's sampling frequency is low, with a resolution of 960 × 540. The visually calculated attitude angle is compared with the real attitude output by the airborne IMU over time, and the RMSE value is calculated, as shown in Figure 10. Figure 10 illustrates the relationship between the UAV attitude error and the landing time; the red line represents the fitting curve of the real-time attitude angle during UAV landing.

Line features and point features comparison experiment
In scene 1, the UAV is used to collect the image sequence of the landing. The improved algorithm in this article, the traditional line feature extraction (LSD + LBD), and oriented FAST and rotated BRIEF (ORB) point feature extraction 31 with brute-force matching 31 are compared in terms of the resulting angle accuracy. The results of point feature extraction and matching are shown in Figure 11. The RMSE values of roll ϕ, pitch θ, and yaw ψ at different landing heights of the airdrop cargo platform are counted, as shown in Figure 12. As shown in Figure 12, compared with the algorithm that only extracts point features, the improved line feature extraction algorithm reduces the RMSE by 48.71%; compared with LSD + LBD, the RMSE is reduced by 22.85%. The accuracy of attitude calculation based on the geometric characteristics of line features is significantly improved compared with the other methods, which proves that the algorithm in this article has higher accuracy and can be used for attitude calculation during the landing of the airdrop cargo platform.
When the airdrop cargo platform reaches the initial landing height and images from high altitude, slight changes in attitude angle and terrain fluctuations produce large deviations at the imaging nadir point, which leads to poor point feature extraction and tracking failure; the attitude angle error then becomes too large to ensure an accurate landing. The airdrop scene contains much interference, such as short line features, and at high altitude there are many line features with inconspicuous gradients, which increases line feature mismatching and the amount of calculation. The improved algorithm in this article eliminates short line features in locally dense areas and uses the geometric characteristics of line features to group and merge the remaining ones, which makes the long line features more stable in the image, avoids the interference of short line features during matching, and effectively improves the matching accuracy of the line features.

Time performance experiment
To verify the real-time performance of the improved algorithm, the time performance of the vision algorithm is tested; processing that is too slow would unbalance the UAV. The traditional line feature extraction (LSD + LBD) and the improved algorithm are each timed on the landing image sequence of the UAV. The real-time performance of each algorithm is measured by the single-frame pose solving time, as shown in Table 4.
It can be seen from Table 4 that, when processing images at a resolution of 960 × 540, the line feature extraction takes an average of 301.56 ms per frame, the line feature matching takes 38.49 ms, the attitude angle resolution takes 13.7 ms, and the total single-frame attitude angle solution takes 348.69 ms. Compared with the traditional line feature extraction (LSD + LBD), the single-frame solution time is reduced by 314.54 ms, and 287 frames of images can be processed. In terms of time spent on line feature matching, this article uses the geometric characteristics of line features between adjacent frames to further eliminate mismatches; although this does not save much time, it reduces mismatches and improves the accuracy of the attitude solution. In terms of time spent on line feature extraction, more line features must be processed in high-resolution images, and extraction accounts for most of the single-frame attitude solution time. A large number of line features increases the per-frame extraction time, which prevents the airdrop system from meeting its real-time requirements. At a resolution of 960 × 540, some short line features are not resolved in the image, which naturally reduces the number of line features. Screening the line features eliminates a large number of short line features and merges others, which further reduces the number of line features to be extracted, saves attitude solution time, and improves the real-time performance of the system.
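Per-stage timings such as those in Table 4 can be gathered with a simple harness; the sketch below is a generic illustration, in which the stage function and frame source are placeholders rather than the paper's pipeline.

```python
import time

def mean_ms_per_frame(stage, frames):
    """Average wall-clock time of one pipeline stage, in ms per frame."""
    start = time.perf_counter()
    for frame in frames:
        stage(frame)          # e.g. extraction, matching, or angle solving
    return 1000.0 * (time.perf_counter() - start) / len(frames)
```

Timing each stage separately (extraction, matching, attitude resolution) and summing the averages gives an estimate of the total single-frame solution time.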

Conclusions and future work
This article proposes a vision algorithm for airdrop cargo platform attitude estimation and autonomous landing based on line features. The line features in the airdrop landing scene are extracted as visual targets, but too many invalid line features in the scene increase the computational complexity of the algorithm. Therefore, this article uses the geometric characteristics of line features to improve the traditional line feature extraction and eliminates the locally dense line features in the image, which greatly reduces the number of line features in the image. Then, the improved RANSAC algorithm is used to remove mismatched line features, which improves the accuracy of the attitude angle and compensates for the difficulty of point feature extraction and the low matching accuracy in the airdrop environment. Finally, a constraint equation is established for the successfully matched line features, and the homography matrix is solved and decomposed to obtain the real-time three-axis attitude angle of the airdrop cargo platform.
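As a minimal illustration of the last step, the sketch below recovers attitude angles from a homography under the simplifying assumption of a pure camera rotation (H = K R K⁻¹); the paper's full planar-scene decomposition also recovers translation. The intrinsic matrix K is an assumed placeholder, not a calibrated value from the paper.

```python
import numpy as np

# Assumed camera intrinsics for illustration only.
K = np.array([[800.0, 0.0, 480.0],
              [0.0, 800.0, 270.0],
              [0.0,   0.0,   1.0]])

def euler_zyx(R):
    """Roll, pitch, yaw in degrees from a Z-Y-X (yaw-pitch-roll) rotation."""
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees([roll, pitch, yaw])

def attitude_from_homography(H):
    """Recover attitude angles assuming H = K R K^-1 (pure rotation)."""
    R = np.linalg.inv(K) @ H @ K
    # Re-orthonormalize via SVD to absorb scale and numerical noise.
    U, _, Vt = np.linalg.svd(R)
    return euler_zyx(U @ Vt)
```

For the general planar-scene case (H encoding both rotation and translation over a ground plane), a full decomposition such as Faugeras' method yields up to four rotation/translation candidates that must be disambiguated by physical constraints.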
To illustrate the performance of the airdrop cargo platform attitude estimation system based on line features, outdoor experiments are conducted and analyzed. Comprehensive experimental results show that the maximum reduction in the number of line features extracted by the improved line feature algorithm compared with LSD reaches 27.24%, the matching accuracy after geometric constraint screening reaches 75%, and the line feature extraction and matching effects are significantly improved. Based on line features, the RMSE values of the roll, pitch, and yaw of the UAV are 1.715°, 1.698°, and 0.886°, respectively, and the average single-frame attitude angle resolution time is 348.69 ms, 314.54 ms less than the traditional line feature extraction (LSD + LBD). Although this is preliminary work, it demonstrates the feasibility of vision-based attitude angle estimation during the landing of airdrop cargo platforms.
Future work will address the limited real-time image processing of the airdrop cargo platform by optimizing the algorithm and splitting feature detection and pose estimation into two independent threads. A segmentation/line/transformation prediction model based on convolutional neural networks will also be studied. Combining the vision algorithm with the flight control system will further verify its reliability and accuracy during airdrop landing.
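The proposed two-thread split can be sketched as a simple producer-consumer pipeline; the stage functions below are placeholders for illustration, not the authors' implementation.

```python
import queue
import threading

def run_pipeline(frames, detect, estimate_pose):
    """Detection thread feeds a bounded queue; pose thread consumes it."""
    q = queue.Queue(maxsize=4)   # small buffer decouples the two stages
    poses = []

    def detector():
        for frame in frames:
            q.put(detect(frame))
        q.put(None)              # sentinel: no more frames

    def estimator():
        while True:
            feats = q.get()
            if feats is None:
                break
            poses.append(estimate_pose(feats))

    t1 = threading.Thread(target=detector)
    t2 = threading.Thread(target=estimator)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return poses
```

Because the queue is FIFO with a single producer and single consumer, frame order is preserved while extraction of frame n+1 overlaps pose estimation of frame n.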

Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.