Abstract:
Visual navigation is widely used by agricultural robots to extract information from their surroundings and determine subsequent actions in the field. However, navigation accuracy is limited by the highly complex field scene, particularly under light variation and plant growth. It is also difficult to extract the crop path at the cotton seedling stage, where seedlings are sparse, some seedlings are missing, and weeds are present. In this study, a visual navigation method was established using an improved RANSAC (random sample consensus) algorithm, and a series of navigation path-tracking experiments was carried out in a cotton field. Firstly, images were captured at multiple growth stages of cotton seedlings using an agricultural robot equipped with a camera. The crop rows were then fully separated from the background by adaptive threshold segmentation, and the binary images were denoised using morphological filtering. Within the region of interest, outlier points beyond the crop row were removed from the image by the improved RANSAC algorithm. Feature points were detected and clustered to ensure the accuracy of the final extracted center line of the crop row. Finally, the navigation path was obtained by least-squares fitting. The experimental results show that a better-fitting path was obtained after removing the outliers with the improved RANSAC, in line with the actual position of the crop-row center line. Specifically, the line recognition rate was 96.5% using the traditional RANSAC algorithm, with an average error angle of 1.41° and an average image-processing time of 0.087 s. By contrast, the line recognition rate increased to 98.9% after removing outliers with the improved RANSAC algorithm, and the average error angle was only 0.53°. The performance of center-line extraction was thus significantly improved by the modified RANSAC algorithm compared with the original.
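The RANSAC-then-least-squares step described above can be sketched as follows. This is a minimal NumPy illustration of the general idea only (random two-point sampling, inlier counting by perpendicular distance, least-squares refinement on the inliers); the function name, thresholds, and iteration count are assumptions, and the paper's specific "improved RANSAC" modifications are not reproduced here.

```python
import numpy as np

def ransac_line(points, n_iter=200, dist_thresh=2.0, seed=None):
    """Fit a line y = slope*x + intercept to 2D points with plain RANSAC,
    then refine the inlier set by ordinary least squares.

    Illustrative sketch only; assumes the row is not vertical in these
    image coordinates (swap the axes otherwise).
    Returns (slope, intercept, inlier_mask).
    """
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_mask = None
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        d = q - p
        norm = np.hypot(d[0], d[1])
        if norm < 1e-9:
            continue  # degenerate sample: the two points coincide
        # perpendicular distance of every point to the line through p and q
        dist = np.abs(d[0] * (pts[:, 1] - p[1]) - d[1] * (pts[:, 0] - p[0])) / norm
        mask = dist < dist_thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask = mask
    # least-squares refinement on the inliers only
    x, y = pts[best_mask, 0], pts[best_mask, 1]
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept, best_mask
```

With feature points clustered per crop row, running this on each cluster yields one center line per row; discarding the outliers before the final fit is what keeps the fitted line aligned with the actual row center.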
In addition, a comparison was made between the improved algorithm and the traditional Hough transform, which verified the effectiveness of the improved model for extracting the navigation path. A spraying robot with visual navigation was self-developed to validate the practical application of the improved model in a complex environment, and path-tracking experiments were conducted autonomously in a cotton seedling field. Three initial states and three moving speeds (0.4, 0.5, and 0.6 m/s) were selected. Image processing was implemented using OpenCV with the Robot Operating System (ROS), and an open-source machine-vision library was also configured for the image processing. A path-tracking algorithm based on adaptive sliding-mode control was used to improve tracking accuracy. The maximum lateral deviations of the robot were 1.53, 2.29, and 2.59 cm at speeds of 0.4, 0.5, and 0.6 m/s, respectively. There were no rolling failures, fully meeting the precision requirements of robot row operations in the cotton field under the planting mode of "1 film, 3 ridges, and 6 rows". Visual navigation can thus provide theoretical support and a technical basis for the autonomous navigation and mobile operation of agricultural robots on the farm.
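The sliding-mode path tracking mentioned above can be illustrated with a minimal simulation. The model below (a unicycle robot at constant speed steering back onto a straight row line, with a smoothed sliding-mode steering law) is an assumption for illustration; the gains, the tanh smoothing, and the function name are all hypothetical and do not reproduce the paper's adaptive controller.

```python
import numpy as np

def track_row(y0=0.05, v=0.5, lam=1.0, k=2.0, eps=0.05, dt=0.02, steps=1000):
    """Simulate a unicycle robot converging onto the row line y = 0.

    Lateral error e = y (m); sliding surface s = e_dot + lam * e.
    Steering rate omega = -k * tanh(s / eps) replaces the discontinuous
    sign function to reduce chattering. All gains are illustrative.
    Returns the trajectory of lateral deviations.
    """
    y, theta = y0, 0.0
    trace = []
    for _ in range(steps):
        e_dot = v * np.sin(theta)        # rate of change of lateral error
        s = e_dot + lam * y              # sliding surface
        omega = -k * np.tanh(s / eps)    # smoothed switching control
        theta += omega * dt              # heading update
        y += v * np.sin(theta) * dt      # lateral position update
        trace.append(y)
    return np.array(trace)
```

Once the state reaches the surface s = 0, the error obeys e_dot = -lam * e and decays exponentially, which is why the lateral deviation stays bounded near its initial offset rather than overshooting.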