Research and field experiment on a visual navigation method for a cotton spraying robot

    • Abstract: Visual navigation is the mainstream navigation method for field operation robots. To solve the difficulty of extracting crop-row paths caused by sparse seedlings, missing seedlings, and weeds in cotton fields, this study established a visual navigation method based on an improved RANSAC (random sample consensus) algorithm and carried out field path-tracking experiments. First, forward-view images of cotton fields at multiple seedling-stage growth phases were acquired; an improved excess-green grayscale transform was used to distinguish crop rows from the background, and an adaptive threshold segmentation method separated the crop rows from the background to obtain a binary image, which was then denoised by morphological filtering. Then, according to the distribution characteristics of the crop-row target regions in the image, the improved RANSAC algorithm detected and clustered feature points and removed outliers to ensure the accuracy of the extracted crop-row centerlines; finally, the navigation path was fitted by the least-squares method. Experimental results show that the conventional RANSAC method achieved a row recognition rate of 96.5%, an average error angle of 1.41°, and an average processing time of 0.087 s per image; after outlier removal, the improved RANSAC method achieved a row recognition rate of 98.9% and an average error angle of 0.53°, a substantial improvement in path-extraction performance. Field path-tracking experiments at the cotton seedling stage with the self-developed spraying robot and its visual navigation system showed that, under three different initial states and three travel speeds, the maximum lateral deviation of autonomous driving never exceeded 2.59 cm and no seedlings were run over, meeting the autonomous navigation requirements of the "one film, three ridges, six rows" planting pattern. The results can serve as a reference for developing autonomous navigation methods for other agricultural robots.
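The segmentation stage described above can be illustrated with a minimal sketch: an excess-green transform followed by Otsu thresholding as one common adaptive-threshold choice. The classic ExG form (2g − r − b on chromaticity-normalized channels) is assumed here; the paper uses an improved variant whose exact coefficients are not given in the abstract, and the test image below is synthetic.

```python
import numpy as np

def excess_green(img):
    """Classic excess-green index ExG = 2g - r - b on normalized channels.

    img: H x W x 3 float array, RGB order, values in [0, 1].
    Returns an H x W grayscale map where vegetation pixels score high.
    (Sketch only: the paper's *improved* transform is not reproduced here.)
    """
    s = img.sum(axis=2) + 1e-9                  # avoid division by zero
    r, g, b = (img[..., i] / s for i in range(3))
    return 2.0 * g - r - b

def otsu_threshold(gray, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(gray, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0
        m1 = (hist[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

# Synthetic frame: a green "crop row" stripe on brown soil.
img = np.zeros((40, 40, 3))
img[...] = (0.45, 0.30, 0.15)                   # soil background
img[:, 18:22] = (0.15, 0.60, 0.10)              # crop row
exg = excess_green(img)
mask = exg > otsu_threshold(exg)                # binary crop-row image
```

In a full pipeline the binary mask would then be cleaned with morphological opening/closing before feature-point extraction, as the abstract describes.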

       

      Abstract: Visual navigation has become the mainstream navigation method for agricultural field robots, but the real field scene is complicated by lighting changes and differences in plant growth. To solve the problem of poor crop-row path extraction and navigation accuracy caused by sparse seedlings, missing seedlings, and weeds at the cotton seedling stage, a visual navigation method based on an improved RANSAC (random sample consensus) algorithm was established, and cotton-field navigation path-tracking experiments were carried out. First, images of multiple seedling-stage growth phases were acquired by an agricultural robot with a camera. Crop rows were then distinguished from the background by an improved excess-green grayscale transform and fully separated by an adaptive threshold segmentation method, and the resulting binary images were denoised by morphological filtering. Next, according to the distribution characteristics of the crop-row target regions in the image, feature points were detected and clustered, and outliers not belonging to the crop rows were eliminated by the improved RANSAC algorithm to ensure the accuracy of the final extracted crop-row centerlines. Finally, the navigation path was obtained by least-squares fitting. The path fitted after removing outliers with the improved RANSAC matches the actual position of the crop-row centerline more closely, whereas the centerline fitted directly by least squares without RANSAC deviates noticeably. Objective indices show that the traditional RANSAC algorithm achieves a row recognition rate of 96.5%, an average error angle of 1.41°, and an average image processing time of 0.087 s. After removing outliers with the improved RANSAC method, the row recognition rate rises to 98.9% and the average error angle falls to 0.53°.
Centerline extraction performance is thus significantly improved by the modification to the original RANSAC algorithm. A comparison with the traditional Hough transform further illustrates the effectiveness of the proposed method for navigation path extraction. To validate the practical effect of the method in a complex environment, the self-developed spraying robot and its visual navigation system were used to carry out path-tracking experiments in a cotton field at the seedling stage. The navigation experiments were conducted autonomously under three different initial states and three travel speeds of 0.4, 0.5, and 0.6 m/s. OpenCV, an open-source machine-vision and image-processing library, was configured in the robot's ROS environment to provide image processing capability, and a path-tracking algorithm based on adaptive sliding-mode control was used to improve tracking accuracy. At 0.4 m/s the maximum lateral deviation of the robot is 1.53 cm; at 0.5 m/s it is 2.29 cm; and at 0.6 m/s it is 2.59 cm, which meets the precision requirements of the spraying robot for in-row operation. The maximum lateral deviation in the field path-tracking experiments thus does not exceed 2.59 cm, and no seedlings were rolled over, satisfying the autonomous navigation requirements of cotton fields under the "1 film, 3 ridges and 6 rows" planting pattern. The visual navigation method established in this paper provides theoretical support and a technical basis for the autonomous navigation and mobile operation of agricultural robots in the field.
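The core of the path-extraction step, RANSAC consensus followed by a least-squares refit on the inliers, can be sketched as follows. This is the textbook form of the algorithm in NumPy, not the paper's implementation: the improved variant described in the abstract additionally clusters feature points per crop row before fitting, and the feature points below are synthetic.

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.5, seed=0):
    """Fit a 2-D line y = k*x + b with RANSAC, then refine by least squares.

    points: (N, 2) array of feature-point coordinates.
    Returns (slope, intercept, inlier_mask). Outliers (e.g. weed pixels
    off the crop row) are excluded from the final least-squares fit.
    """
    rng = np.random.default_rng(seed)
    x, y = points[:, 0], points[:, 1]
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        # Hypothesize a line from two random points.
        i, j = rng.choice(len(points), size=2, replace=False)
        if x[i] == x[j]:
            continue                             # skip vertical pairs
        k = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - k * x[i]
        # Keep the hypothesis with the largest consensus set.
        mask = np.abs(y - (k * x + b)) < tol
        if mask.sum() > best_mask.sum():
            best_mask = mask
    # Least-squares refit on the inliers only, discarding the outliers.
    k, b = np.polyfit(x[best_mask], y[best_mask], 1)
    return k, b, best_mask

# Feature points on the line y = 0.1*x + 20, plus a few weed "outliers".
xs = np.arange(30, dtype=float)
pts = np.column_stack([xs, 0.1 * xs + 20.0])
pts = np.vstack([pts, [[5.0, 3.0], [12.0, 38.0], [25.0, 1.0]]])
k, b, inliers = ransac_line(pts)
```

A plain least-squares fit over all 33 points would be pulled toward the three outliers, which is exactly the deviation the abstract reports for fitting without RANSAC.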

       
