Visual navigation method and field experiment of a cotton spraying robot

    • Abstract: Visual navigation is the mainstream navigation method for field robots. To address the difficulty of extracting navigation paths caused by sparse cotton seedlings, missing seedlings, and weeds, this study established a visual navigation method based on an improved RANSAC (random sample consensus) algorithm and least-squares fitting, and carried out field navigation path-tracking experiments. First, forward-looking navigation images of the cotton field were acquired; an improved excess-green grayscale transform was used to increase the contrast between crop rows and background, adaptive threshold segmentation separated the crop rows from the background, and morphological filtering denoised the result to yield a binary image. Then, based on the spatial distribution of the crop-row target regions in the image, the improved RANSAC algorithm removed outlier points and clustered the optimal feature points to guarantee the accuracy of the extracted crop-row center line; finally, least-squares fitting produced the navigation path. The experimental results show that the traditional RANSAC method achieved a row recognition rate of 96.5%, an average error angle of 1.41°, and an average processing time of 0.087 s per image, whereas the improved RANSAC method, after outlier removal, achieved a row recognition rate of 98.9% and an average error angle of 0.53°, a substantial improvement in navigation-path extraction. Field path-tracking experiments were conducted at the cotton seedling stage with a self-developed spraying robot and visual navigation system. Under three initial states and three travel speeds, the robot navigated autonomously with a maximum lateral deviation of no more than 2.59 cm and no seedling crushing, meeting the autonomous navigation requirements of the "one film, three ridges, six rows" planting mode. The results can serve as a reference for developing autonomous navigation methods for other agricultural robots.
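The image-processing front end described above can be illustrated with a short sketch. This is a minimal approximation only: the abstract does not detail the paper's improved excess-green transform or its specific adaptive threshold, so the classic ExG = 2G - R - B, Otsu thresholding, and a 5x5 morphological opening are used as stand-ins, and the function name `segment_crop_rows` is illustrative.

```python
import cv2
import numpy as np

def segment_crop_rows(bgr):
    """Separate green crop rows from the soil/mulch background.

    Minimal sketch: the classic ExG = 2G - R - B replaces the paper's
    improved excess-green transform, Otsu thresholding stands in for
    the adaptive segmentation step, and morphological opening is the
    denoising filter.
    """
    b, g, r = cv2.split(bgr.astype(np.float32))
    exg = 2.0 * g - r - b                                  # excess-green grayscale
    exg = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Adaptive (Otsu) threshold: vegetation -> white, background -> black.
    _, binary = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Morphological opening removes small noise blobs (e.g. scattered weeds).
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```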

       

      Abstract: Visual navigation is widely used by agricultural field robots to extract information from their surroundings and determine subsequent actions. However, navigation accuracy is limited by the very complex field scene, particularly under light variation and changing plant growth, and the crop path is difficult to extract in the presence of sparse cotton seedlings, missing seedlings, and weeds at the cotton seedling stage. In this study, a visual navigation method was established using an improved RANSAC (random sample consensus) algorithm, and a series of navigation path-tracking experiments was carried out in a cotton field. Firstly, images were captured at multiple growth stages of cotton seedlings using an agricultural robot equipped with a camera. The crop rows were then separated from the background by adaptive threshold segmentation, and the binary images were denoised using morphological filtering. Within the region of interest, outlier points beyond the crop rows were removed by the improved RANSAC algorithm, and feature points were detected and clustered to ensure the accuracy of the extracted crop-row center line. Finally, the navigation path was obtained by least-squares fitting. The experimental results show that a better-fitting path was obtained after removing the outliers with the improved RANSAC, in agreement with the actual position of the crop-row center line. Specifically, the line recognition rate of the traditional RANSAC algorithm was 96.5%, the average error angle was 1.41°, and the average image-processing time was 0.087 s. By contrast, the line recognition rate increased to 98.9% after outlier removal by the improved RANSAC algorithm, and the average error angle was only 0.53°; center-line extraction performance was thus significantly improved over the original algorithm. In addition, a comparison with the traditional Hough transform verified the effectiveness of the improved model for navigation-path extraction. To validate the practical application of the improved model in a complex environment, a spraying robot with visual navigation was self-developed, and autonomous path-tracking experiments were conducted in a cotton seedling field under three initial states and three moving speeds (0.4, 0.5, and 0.6 m/s). Image processing was implemented in OpenCV under the Robot Operating System (ROS), with the required open-source machine-vision libraries configured. A path-tracking algorithm based on adaptive sliding-mode control was used to improve tracking accuracy. The maximum lateral deviations of the robot were 1.53, 2.29, and 2.59 cm at speeds of 0.4, 0.5, and 0.6 m/s, respectively. No seedlings were crushed, fully meeting the precision requirements of the spraying robot for row operation in cotton fields under the planting mode of "1 film, 3 ridges, and 6 rows". Visual navigation can thus provide theoretical support and a technical basis for the autonomous navigation and mobile operation of agricultural robots on the farm.
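The core of the path-extraction step is the RANSAC-then-refit idea: sample candidate lines, keep the largest consensus set, and fit the final line only to those inliers. The sketch below shows this generic pipeline under stated assumptions; the paper's specific "improved" RANSAC modifications and feature-point clustering are not reproduced, and the parameterization x = a*y + b (well-suited to near-vertical crop rows in a forward-looking image) plus the function name `fit_row_centerline` are illustrative choices.

```python
import numpy as np

def fit_row_centerline(points, n_iter=200, dist_thresh=8.0, rng=None):
    """Fit a crop-row center line x = a*y + b from feature points.

    Generic sketch of the abstract's pipeline: RANSAC selects the
    largest consensus set (rejecting outliers such as weeds and
    missing-seedling gaps), then a least-squares refit over the
    inliers yields the navigation line. `points` is an (N, 2) array
    of (x, y) pixel coordinates.
    """
    rng = rng or np.random.default_rng(0)
    pts = np.asarray(points, dtype=np.float64)
    best_inliers = None
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if abs(y2 - y1) < 1e-6:            # degenerate (horizontal) sample
            continue
        a = (x2 - x1) / (y2 - y1)
        b = x1 - a * y1
        # Perpendicular distance of every point to the line x = a*y + b.
        d = np.abs(pts[:, 0] - (a * pts[:, 1] + b)) / np.sqrt(1.0 + a * a)
        inliers = d < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Least-squares refit over the consensus set only.
    y, x = pts[best_inliers, 1], pts[best_inliers, 0]
    a, b = np.polyfit(y, x, 1)
    return a, b                             # navigation line: x = a*y + b
```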
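The abstract also mentions path tracking via adaptive sliding-mode control without giving the control law. The following is a hypothetical, non-adaptive minimal form for a kinematic robot, included only to illustrate the technique: the sliding surface, the gains `lam` and `k`, and the boundary-layer width `phi` are all assumed parameters, not the paper's controller.

```python
import numpy as np

def sliding_mode_steer(lateral_err, heading_err, v, lam=1.0, k=0.5, phi=0.05):
    """One update of a sliding-mode path-tracking steering law (sketch).

    Hypothetical minimal form: the sliding surface combines the
    lateral error e_y (m) with its kinematic rate v*sin(heading_err);
    a saturated switching term bounds chattering inside a boundary
    layer of width phi. Returns an angular-velocity command (rad/s).
    """
    s = v * np.sin(heading_err) + lam * lateral_err   # sliding surface s = e_y' + lam*e_y
    sat = np.clip(s / phi, -1.0, 1.0)                 # boundary-layer saturation of sign(s)
    return -k * sat                                   # switching term drives s toward 0
```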

       
