ZHANG Shihao, SHEN Lei, SONG Lijie, et al. Identifying and positioning grape compound buds using RGB-D images[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2023, 39(21): 172-180. DOI: 10.11975/j.issn.1002-6819.202307014
    Identifying and positioning grape compound buds using RGB-D images

Abstract: Efficient and accurate identification and positioning of compound buds can greatly facilitate automatic bud-removal operations by orchard robots, thereby improving the efficiency of flower and fruit thinning. This study aims to accurately identify and position small targets against complex backgrounds for grape bud removal in the field.

(1) A visual recognition system was proposed for the concurrent bud and the secondary bud using a combination of far and near views, with Euclidean distance on the visual mapping plane used to associate the two views. The concurrent bud was first identified in the far view, and the secondary bud was then identified in the near view within the mapped concurrent-bud area, in order to improve the recognition accuracy of small targets in the field. The 2132 original images were randomly divided into a training set (1735 images) and a validation set (397 images) at a ratio of approximately 8:2. Both sets were uniformly cropped to 2944×1656 pixels and then scaled down equiproportionally by a factor of 2.3 to 1280×720 pixels. Data augmentation was carried out on the concurrent-bud and secondary-bud datasets using three random combinations of scaling, flipping, and color gamut transformation; images in which the augmentation distorted the target features too severely were manually excluded from the dataset. The training set was thereby enlarged to 15840 images. YOLOv5m and YOLOv5s were selected as the network models for concurrent-bud and secondary-bud detection, respectively. The trained models were 42.1 and 14.4 MB in size, with AP values of 0.702 and 0.773, F1 scores of 0.685 and 0.765 on the concurrent-bud and secondary-bud test sets, respectively, and average inference times of 10.62 and 7.01 ms per image. A test on 60 compound-bud images showed that the overall average accuracy of the improved model was 0.905, with an average detection time of 18.1 ms per image.

(2) An online detection and positioning method was then proposed for the compound bud in the field. Online detection and positioning were introduced in the prediction stage to improve the depth camera's resistance to light interference when positioning in a complex environment, reducing the fluctuation and error of the depth information. The same compound bud was detected and positioned online over 5 consecutive frames; the initial frame was deleted and the latest frame was appended in real time until the relative error of the positioning coordinates across the 5 consecutive frames fell below the error threshold, at which point the average positioning coordinates of the compound bud were taken. Better performance was achieved both in positioning compound buds against natural backgrounds and in resisting light coupling under complex illumination. The test results showed that the positioning accuracy of the improved model was ±0.916 mm for the concurrent bud and ±0.654 mm for the secondary bud. The repeatability and accuracy fully meet the demands of secondary-bud wiping, indicating a reliable and effective system. These findings can provide high-quality technical support for the online positioning of concurrent and secondary buds in bud-wiping robots, so as to realize automatic bud wiping in orchards.
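The dataset preparation step in point (1) can be sketched as a small Python routine. This is a minimal illustration only: the directory layout, function name, and center-crop placement are assumptions, while the 2944×1656 crop size, the 2.3× equiproportional downscale to 1280×720, and the roughly 8:2 train/validation split come from the abstract.

    import random
    from pathlib import Path

    import cv2

    CROP_W, CROP_H = 2944, 1656   # uniform crop size from the abstract
    OUT_W, OUT_H = 1280, 720      # 2944 / 2.3 = 1280, 1656 / 2.3 = 720

    def preprocess(src_dir, dst_dir, split=0.8):
        """Crop, downscale, and split raw bud images into train/val sets."""
        paths = sorted(Path(src_dir).glob("*.jpg"))
        random.shuffle(paths)
        n_train = int(len(paths) * split)  # roughly 8:2, as in the abstract
        for i, p in enumerate(paths):
            img = cv2.imread(str(p))
            h, w = img.shape[:2]
            # Center-crop to the uniform size (crop placement is an assumption).
            x0, y0 = (w - CROP_W) // 2, (h - CROP_H) // 2
            crop = img[y0:y0 + CROP_H, x0:x0 + CROP_W]
            # Equiproportional 2.3x downscale to 1280x720.
            small = cv2.resize(crop, (OUT_W, OUT_H), interpolation=cv2.INTER_AREA)
            subset = "train" if i < n_train else "val"
            out = Path(dst_dir) / subset / p.name
            out.parent.mkdir(parents=True, exist_ok=True)
            cv2.imwrite(str(out), small)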
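The far/near two-stage recognition scheme can be sketched as follows. The YOLOv5m (concurrent bud, far view) and YOLOv5s (secondary bud, near view) pairing is from the abstract; the weight file names, the map_far_to_near calibration mapping between the two image planes, and the pixel-distance threshold max_dist are hypothetical placeholders, not details given in the paper.

    import torch

    # Custom YOLOv5 weights loaded via the standard torch.hub entry point;
    # the .pt file names are illustrative assumptions.
    far_model = torch.hub.load("ultralytics/yolov5", "custom", "concurrent_bud.pt")
    near_model = torch.hub.load("ultralytics/yolov5", "custom", "secondary_bud.pt")

    def detect_compound_buds(far_img, near_img, map_far_to_near, max_dist=80.0):
        """Detect concurrent buds in the far view, then keep only secondary-bud
        detections that fall inside the mapped concurrent-bud area."""
        far_boxes = far_model(far_img).xyxy[0]    # concurrent buds (far view)
        near_boxes = near_model(near_img).xyxy[0]  # secondary buds (near view)
        matches = []
        for fb in far_boxes:
            # Project the far-view box center onto the near-view image plane
            # (map_far_to_near is a hypothetical calibration-based mapping).
            fx, fy = map_far_to_near(((fb[0] + fb[2]) / 2, (fb[1] + fb[3]) / 2))
            for nb in near_boxes:
                nx, ny = (nb[0] + nb[2]) / 2, (nb[1] + nb[3]) / 2
                # Euclidean distance on the mapping plane, as in the abstract.
                if ((fx - nx) ** 2 + (fy - ny) ** 2) ** 0.5 < max_dist:
                    matches.append((fb, nb))
        return matches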
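The 5-consecutive-frame positioning strategy in point (2) amounts to a sliding-window consistency check over depth-camera coordinates. A sketch, assuming a per-frame stream of (x, y, z) positions for one tracked bud; the threshold value REL_ERR and the exact relative-error definition (maximum deviation from the window mean) are assumptions, since the abstract does not specify them.

    from collections import deque

    import numpy as np

    WINDOW = 5       # number of consecutive frames, from the abstract
    REL_ERR = 0.02   # hypothetical relative-error threshold

    def stable_position(coord_stream):
        """Return the averaged (x, y, z) once 5 consecutive frames agree."""
        window = deque()
        for coord in coord_stream:
            window.append(np.asarray(coord, dtype=float))
            if len(window) < WINDOW:
                continue
            mean = np.mean(window, axis=0)
            # Relative error of each frame's coordinates vs. the 5-frame mean.
            rel = (np.abs(np.stack(window) - mean) / (np.abs(mean) + 1e-9)).max()
            if rel < REL_ERR:
                return mean       # coordinates stabilized: take the average
            window.popleft()      # delete the initial frame, await a new one
        return None               # stream ended before positions stabilized

Averaging only after the window stabilizes is what damps the depth-value fluctuation the abstract describes: a single glare-corrupted frame keeps the relative error above threshold, so it is rolled out of the window instead of contaminating the final coordinates.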