Extracting tea bud contours and locating picking points in large scenes using instance segmentation

    • Abstract: To address tea bud recognition and precise picking point location in large outdoor scenes, an improved instance segmentation algorithm based on Yolov5s-segment is proposed. The algorithm first introduces a P2 tiny-target detection layer to remedy the poor small-target detection capability of the P3, P4 and P5 detection layers in the original Yolov5s-segment network. Second, a CBAM (convolutional block attention module) attention module is added at the end of the backbone to improve the model's resistance to interference, enabling tea bud contour feature extraction under natural outdoor illumination. Finally, picking points are precisely located from the extracted bud contour features. The results show that, compared with the original Yolov5s-segment model, the precision, recall, F1 score, and mean average precision mAP50 and mAP50-95 of the improved model increase by 7.0, 8.9, 8.1, 8.3 and 7.3 percentage points, respectively. The method accurately extracts the contours of the three bud types, namely single bud, one bud with one leaf and one bud with two leaves, in large scenes, and precisely locates their picking points. The findings provide a theoretical basis for the intelligent and rapid picking of famous tea.
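    The CBAM module mentioned above applies channel attention followed by spatial attention to the backbone feature map. Below is a minimal PyTorch sketch of the standard CBAM formulation; the reduction ratio, kernel size and insertion point are common defaults assumed for illustration, not necessarily the authors' exact implementation inside the Yolov5s-segment backbone.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel attention: squeeze spatial dims with avg/max pooling, then reweight channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        return self.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))


class SpatialAttention(nn.Module):
    """Spatial attention: pool across channels, then reweight spatial positions."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_out = torch.mean(x, dim=1, keepdim=True)
        max_out, _ = torch.max(x, dim=1, keepdim=True)
        return self.sigmoid(self.conv(torch.cat([avg_out, max_out], dim=1)))


class CBAM(nn.Module):
    """CBAM block: channel attention followed by spatial attention (standard formulation)."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        x = x * self.ca(x)   # reweight channels
        x = x * self.sa(x)   # reweight spatial positions
        return x


# Example: attention over a hypothetical backbone feature map of 256 channels.
feat = torch.randn(1, 256, 40, 40)
out = CBAM(256)(feat)   # same shape as the input, (1, 256, 40, 40)
```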

       

      Abstract: Demand for the automatic picking of tea buds keeps growing as the market for famous brand tea expands. However, manual picking cannot fully meet the need for short, concentrated harvesting in large-scale tea production, and mechanical bulk picking can only share part of the very short harvesting window of famous tea. Intelligent and accurate picking of famous tea is therefore urgently needed. Visual recognition, in particular, is hampered by the small size of tea buds and by the complex background of the natural field environment. The Yolov5s-segment network model is well suited to tea bud recognition because of its easy deployment, simple integrated structure and real-time detection. In this study, a model based on an improved Yolov5s-segment was proposed to extract tea bud contours and then locate picking points, realizing rapid recognition and accurate picking point location for tea buds in large field scenes. Firstly, a P2 micro-target detection layer was added to the Neck and Head of the original network to compensate for the poor small-target detection of the P3, P4 and P5 layers of the original Yolov5s-segment. Secondly, a CBAM (convolutional block attention module) was added at the end of the Backbone of the Yolov5s-segment network to improve the anti-interference capability of the model, so that tea bud contour features could be extracted under natural lighting. Finally, the position coordinates of the bud stem were derived from the bud contour, and the picking points were accurately located for the single bud, one bud with one leaf, and one bud with two leaves. In the ablation test, the micro-target detection layer and the CBAM attention mechanism were added to the original Yolov5s-segment model step by step, and CBAM was also replaced with SE, ECA and SimAM. The optimal model was then compared with Yolov8s-segment and Yolact, and a positioning test was finally carried out with the optimal model. The results showed that the model combining the micro-target detection layer and the CBAM attention mechanism performed best. Compared with the original Yolov5s-segment model, its precision, recall, F1 score, and mean average precision mAP50 and mAP50-95 were improved by 7.0, 8.9, 8.1, 8.3 and 7.3 percentage points, respectively. In the comparison test, the mAP50 of the improved model was 4.2 and 9.5 percentage points higher than that of Yolov8s-segment and Yolact, respectively. The positioning test showed that the pixel coordinates of picking points were obtained accurately for buds in different states at camera shooting angles of 0° and 45°, indicating high positioning accuracy. The improved model can accurately extract the contours of single bud, one bud with one leaf and one bud with two leaves in large field scenes, and precisely locate the picking points at the same time. The findings can provide a theoretical basis for the intelligent and rapid picking of famous tea.
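      The abstract states that picking points are derived from the extracted bud contour, but the exact geometric rule is not given here. The sketch below shows one plausible heuristic, assumed purely for illustration: take the largest contour of a predicted binary instance mask and use its lowest pixel (toward the stem) as a candidate picking point. The function `locate_picking_point` and the lowest-point rule are hypothetical, not the paper's method.

```python
import cv2
import numpy as np


def locate_picking_point(mask: np.ndarray):
    """Hypothetical helper: return a candidate picking point (x, y) from a binary bud mask.

    `mask` is an (H, W) uint8 array with non-zero pixels on the segmented bud, as would be
    produced by thresholding an instance-segmentation output. The lowest contour point
    (largest y, i.e. toward the stem) is used here as a simple illustrative heuristic.
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)   # keep the largest connected region
    pts = contour.reshape(-1, 2)                   # (N, 2) array of (x, y) pixel coordinates
    x, y = pts[np.argmax(pts[:, 1])]               # lowest point in image coordinates
    return int(x), int(y)


# Example with a synthetic mask: a filled rectangle standing in for a bud region.
demo_mask = np.zeros((480, 640), dtype=np.uint8)
cv2.rectangle(demo_mask, (300, 200), (340, 320), 255, thickness=-1)
print(locate_picking_point(demo_mask))   # a point on the bottom edge, e.g. (300, 320)
```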

       
