Lightweight honeysuckle recognition method based on improved YOLOv5s

    • Abstract: To improve the working efficiency and picking precision of honeysuckle-picking robots and to enable convenient, fast deployment of the model on mobile devices, this study proposes a lightweight honeysuckle recognition method based on an improved YOLOv5s. The Backbone of YOLOv5s is replaced with the backbone network of EfficientNet, and the SPPF feature-fusion module of the original YOLOv5s is added to the improved Backbone, which reduces the model's parameter count and computation while shrinking the weight file, facilitating subsequent mobile deployment. Second, to improve recognition of honeysuckle, the CARAFE upsampling module replaces the original upsampling module in the Neck, raising the model's precision and average precision at the cost of a slight increase in parameters and thereby improving picking efficiency. Experimental results show that the improved lightweight model has only 3.89×10⁶ parameters, 55.5% of the original YOLOv5s; its computation is only 7.8 GFLOPs, 49.4% of the original; its weight file is only 7.8 MB, 57.4% of the original; and its precision and average precision reach 90.7% and 91.8%, respectively, 1.9 and 0.6 percentage points higher than the original YOLOv5s. Compared with the mainstream Faster R-CNN, SSD, and YOLO-series object-detection models, the improved lightweight model not only improves detection accuracy but also greatly reduces the parameter count, computation, and weight size. The results provide a reference for recognition by honeysuckle-picking robots and for subsequent mobile deployment.
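As a hedged illustration (not the authors' code) of the SPPF module carried over from YOLOv5s: three sequential stride-1 max-pools with a 5×5 window are concatenated with the input, reproducing the 5/9/13 receptive fields of the older SPP block at lower cost. A minimal NumPy sketch, assuming a single (C, H, W) feature map:

```python
import numpy as np

def maxpool2d(x, k=5):
    """Stride-1 max pooling with 'same' padding (pad = k // 2) on a (C, H, W) map."""
    p = k // 2
    c, h, w = x.shape
    xp = np.pad(x, ((0, 0), (p, p), (p, p)), constant_values=-np.inf)
    out = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            out[:, i, j] = xp[:, i:i + k, j:j + k].max(axis=(1, 2))
    return out

def sppf(x, k=5):
    """SPPF: three sequential stride-1 max-pools; concatenate input + pooled maps
    along the channel axis, quadrupling the channel count."""
    y1 = maxpool2d(x, k)
    y2 = maxpool2d(y1, k)   # same receptive field as a single 9x9 pool
    y3 = maxpool2d(y2, k)   # same receptive field as a single 13x13 pool
    return np.concatenate([x, y1, y2, y3], axis=0)

x = np.random.rand(8, 16, 16)
out = sppf(x)
print(out.shape)  # (32, 16, 16)
```

The sequential design is the point: two chained 5×5 stride-1 max-pools produce exactly the same result as one 9×9 pool, so SPPF matches SPP's multi-scale pooling with fewer large-window operations.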

       

      Abstract: Honeysuckle, the flower of Lonicera japonica, is one of the most common Chinese herbal plants. It has high medicinal and economic value, being used to clear heat, detoxify, and reduce inflammation and swelling. However, manual picking is time-consuming and labor-intensive and cannot meet the demands of large-scale honeysuckle harvesting in modern agriculture. Mechanized and intelligent harvesting has advanced in recent years, and picking robots are expected to gradually replace human labor and greatly improve harvesting efficiency; it is therefore worthwhile to design a honeysuckle-picking robot to replace manual picking. Accurate and rapid recognition of honeysuckle is one of the most important steps in designing such a robot. In this study, a lightweight object-detection model for honeysuckle was proposed based on an improved YOLOv5s, with the aim of improving the efficiency and picking precision of honeysuckle-picking robots and enabling easy, fast deployment on mobile terminals. The Backbone of YOLOv5s was replaced with the backbone network of EfficientNet, which uses depthwise convolution to reduce model complexity and lowers the number of parameters, the computation, and the size of the model weights. Because the lighter backbone reduces the width and depth of the model, its recognition accuracy then needed to be improved. First, the SPPF feature-fusion module of the original YOLOv5s was added to the improved Backbone, strengthening the fusion of different feature layers and the model's ability to extract features. Second, the CARAFE upsampling module was introduced into the Neck to improve the recognition accuracy for honeysuckle.
CARAFE predicts a different upsampling kernel at each location, so the kernels vary across feature layers and exploit global semantic information; this raised the average precision of the model, allowing rapid and accurate recognition of honeysuckle with only a slight increase in the number of parameters, for higher harvesting efficiency. The experimental results show that the improved lightweight model has only 3.89×10⁶ parameters, 55.5% of the original YOLOv5s; its computation is only 7.8 GFLOPs, 49.4% of the original; its weight file is only 7.8 MB, 57.4% of the original; and its precision and average precision reach 90.7% and 91.8%, respectively, 1.9 and 0.6 percentage points higher than the original. The improved lightweight model thus achieves higher precision while reducing complexity, fully supporting honeysuckle recognition and easing deployment. It can identify overlapping honeysuckle flowers, and the detection boxes fully enclose the honeysuckle without missed detections. Compared with the current mainstream Faster R-CNN, SSD, and YOLO-series object-detection models, the improved model improves detection accuracy while significantly reducing the parameter count, computation, and weight size. These findings provide a strong reference for the recognition module of honeysuckle-picking robots and for subsequent mobile deployment.
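To make the CARAFE upsampling step concrete, here is a hedged sketch of its content-aware reassembly idea (not the authors' implementation, which predicts the kernels with a small conv branch): each upsampled location applies its own softmax-normalized k×k kernel to the corresponding input neighborhood. In this minimal NumPy version the per-location kernels are passed in as an assumed input:

```python
import numpy as np

def carafe_reassemble(x, kernels, k=3, scale=2):
    """Content-aware reassembly: each output location applies its own
    normalized k x k kernel to the matching input neighborhood.
    x: (C, H, W) feature map; kernels: (scale*H, scale*W, k*k), each summing to 1."""
    c, h, w = x.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)), mode="edge")
    out = np.empty((c, scale * h, scale * w))
    for i in range(scale * h):
        for j in range(scale * w):
            si, sj = i // scale, j // scale      # source location in the input map
            patch = xp[:, si:si + k, sj:sj + k]  # (C, k, k) neighborhood
            wgt = kernels[i, j].reshape(k, k)    # per-location reassembly kernel
            out[:, i, j] = (patch * wgt).sum(axis=(1, 2))
    return out

# Sanity check: a delta kernel (all weight on the center tap) reduces CARAFE
# to plain nearest-neighbor upsampling.
x = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
delta = np.zeros((8, 8, 9))
delta[..., 4] = 1.0
up = carafe_reassemble(x, delta)
print(up.shape)  # (2, 8, 8)
```

Because the kernel is predicted from the features rather than fixed, different locations (and different feature layers) reassemble their neighborhoods differently, which is the source of the accuracy gain the abstract describes.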

       

