Gu Baoxing, Liu Qin, Tian Guangzhao, Wang Haiqing, Li He, Xie Shangjie. Recognizing and locating the trunk of a fruit tree using improved YOLOv3[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2022, 38(6): 122-129. DOI: 10.11975/j.issn.1002-6819.2022.06.014

    Recognizing and locating the trunk of a fruit tree using improved YOLOv3

    • Abstract: Autonomous navigation is critical to improving the quality and efficiency of orchard robot operations. However, existing fruit-tree localization methods offer limited accuracy in the depth direction, which restricts their use in orchards. In this study, an improved YOLOv3 algorithm was proposed for trunk recognition combined with binocular camera positioning. First, the SENet attention mechanism was integrated into the residual modules of Darknet53, the feature extraction network of YOLOv3. The resulting SE-Res module performs feature re-calibration, emphasizing informative features while suppressing less useful ones. Stacking SE-Res modules improved the feature extraction of the YOLOv3 model and yielded more accurate target detection. Second, K-means clustering was used to regenerate the anchor box sizes of the original YOLOv3 model, so that the predicted bounding boxes, and hence the positioning information derived from them, became more accurate. Images collected by the left and right cameras of a binocular camera were fed to the improved YOLOv3 model for trunk detection, which output the category, the center coordinates, and the width and height of each detection box. Detections of the same category in the left and right images were then matched: a pair was accepted when the difference in box area and the difference in the v-axis (row) coordinates of the box centers were both within set thresholds.
After a successful match, the disparity of the target was computed, and the trunk was then located by binocular triangulation using this disparity. Experimental results show that the improved YOLOv3 model identified and located fruit tree trunks well, with an average detection time of 0.046 s per frame. The average precision and recall were 97.54% and 91.79%, improvements of 3.01 and 3.84 percentage points over the original YOLOv3, and of 14.09 and 20.52 percentage points over the SSD model. For all three models, positioning accuracy in the longitudinal direction was higher than in the transverse direction. The average positioning errors of the improved model in the transverse and longitudinal directions were 0.039 and 0.266 m, with average error ratios of 3.84% and 2.08%, respectively. Compared with the original YOLOv3, the mean transverse and longitudinal positioning errors were reduced by 0.140 and 0.945 m, and the error ratios by 15.44 and 14.17 percentage points, respectively. Compared with the SSD model, the mean transverse and longitudinal positioning errors were reduced by 0.216 and 1.456 m, and the error ratios by 21.58 and 20.43 percentage points, respectively. The improved model can therefore support fruit-tree identification and localization for the autonomous navigation of orchard robots in ditching-fertilization, mowing, and pesticide spraying operations, laying a foundation for improving operational efficiency and quality.
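
    The SE channel re-calibration folded into the residual modules (squeeze by global average pooling, excitation through a two-layer fully connected bottleneck, then channel-wise scaling) can be sketched as follows. This is a minimal NumPy illustration of the standard SENet operation, not the authors' implementation; the weight matrices `w1` and `w2` and their shapes are assumptions.

    ```python
    import numpy as np

    def se_recalibrate(feature_map, w1, w2):
        """Squeeze-and-Excitation channel re-calibration (illustrative sketch).

        feature_map: (C, H, W) array.
        w1: (C // r, C) reduction weights; w2: (C, C // r) expansion weights,
        where r is the bottleneck reduction ratio.
        Returns the channel-reweighted feature map, same shape as the input.
        """
        # Squeeze: global average pooling over the spatial dimensions -> (C,)
        z = feature_map.mean(axis=(1, 2))
        # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid -> (C,) weights in (0, 1)
        s = np.maximum(w1 @ z, 0.0)
        s = 1.0 / (1.0 + np.exp(-(w2 @ s)))
        # Scale: reweight each channel of the input (useful channels amplified,
        # less useful ones suppressed)
        return feature_map * s[:, None, None]
    ```

    In the paper's SE-Res module this scaling sits inside each Darknet53 residual block, so the re-calibrated features are added back to the shortcut path.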
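
    The anchor-box update can be illustrated with YOLO-style K-means clustering over training-box widths and heights, using the distance d = 1 - IoU so that cluster assignment is scale-aware. The abstract does not give the clustering details, so the sketch below is a hedged, generic version.

    ```python
    import numpy as np

    def iou_wh(boxes, anchors):
        """IoU between box shapes and anchor shapes, corners aligned.
        boxes: (N, 2) widths/heights; anchors: (K, 2). Returns (N, K)."""
        inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0])
                 * np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
        area_b = boxes[:, 0] * boxes[:, 1]
        area_a = anchors[:, 0] * anchors[:, 1]
        return inter / (area_b[:, None] + area_a[None, :] - inter)

    def kmeans_anchors(boxes, k, iters=100, seed=0):
        """Cluster box shapes with distance d = 1 - IoU (YOLO-style)."""
        rng = np.random.default_rng(seed)
        anchors = boxes[rng.choice(len(boxes), k, replace=False)]
        for _ in range(iters):
            # Assign each box to the anchor with the highest IoU (lowest d)
            assign = np.argmax(iou_wh(boxes, anchors), axis=1)
            new = np.array([boxes[assign == j].mean(axis=0)
                            if np.any(assign == j) else anchors[j]
                            for j in range(k)])
            if np.allclose(new, anchors):
                break  # converged
            anchors = new
        # Return anchors sorted by area, small to large
        return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]
    ```

    The resulting k shapes replace the default anchor priors, so the regression targets better match the aspect ratios of tree trunks in the training set.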
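
    The left-right matching rule (same class, box-area difference and center-row difference within thresholds) followed by triangulation Z = f * B / d can be sketched as below. The threshold values, focal length, and baseline are illustrative assumptions, not values from the paper.

    ```python
    def match_and_locate(left_dets, right_dets, focal_px, baseline_m,
                         max_area_diff=0.2, max_v_diff=10.0):
        """Match detections across a rectified stereo pair and triangulate.

        Each detection is (cls, u, v, w, h): class label, box-center pixel
        coordinates (u horizontal, v vertical), and box width/height.
        A left/right pair is accepted when the classes agree, the relative
        box-area difference is within max_area_diff, and the center-row (v)
        difference is within max_v_diff pixels.
        Returns (cls, disparity_px, depth_m) per match, depth Z = f * B / d.
        """
        matches = []
        for cls_l, u_l, v_l, w_l, h_l in left_dets:
            for cls_r, u_r, v_r, w_r, h_r in right_dets:
                if cls_l != cls_r:
                    continue  # category information must agree
                area_l, area_r = w_l * h_l, w_r * h_r
                if abs(area_l - area_r) / max(area_l, area_r) > max_area_diff:
                    continue  # box areas too different to be the same trunk
                if abs(v_l - v_r) > max_v_diff:
                    continue  # centers not on (nearly) the same image row
                disparity = u_l - u_r
                if disparity <= 0:
                    continue  # invalid disparity for a standard stereo rig
                depth = focal_px * baseline_m / disparity  # triangulation
                matches.append((cls_l, disparity, depth))
        return matches
    ```

    For example, a trunk at center column 150 px in the left image and 130 px in the right gives a 20 px disparity; with an assumed 800 px focal length and 0.12 m baseline, the triangulated depth is 4.8 m.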