Lightweight honeysuckle recognition method based on improved YOLOv5s
Abstract
Honeysuckle, the flower of Lonicera japonica, is one of the most common Chinese herbal plants. It also carries high medicinal and economic value because it clears heat, detoxifies, and reduces inflammation and swelling. However, manual picking is time-consuming and labor-intensive and cannot meet the demands of large-scale honeysuckle harvesting in modern agriculture. Mechanized and intelligent harvesting has therefore expanded in recent years, and picking robots are expected to gradually replace human labor and greatly improve harvesting efficiency. Designing a honeysuckle picking robot to replace manual picking is consequently necessary, and accurate, rapid recognition of honeysuckle is one of the most important links in such a design. In this study, a lightweight object detection model for honeysuckle was proposed based on an improved YOLOv5s, so that the model can be deployed easily and quickly on mobile terminals and thereby improve the efficiency and picking accuracy of honeysuckle picking robots. The Backbone layer of YOLOv5s was replaced with the backbone network of EfficientNet, reducing the size of the model weights, the number of parameters, and the computational effort; depthwise convolution was used to further reduce model complexity. Rather than recovering the recognition accuracy lost in this simplification by increasing the width and depth of the model, two modules were added. First, the SPPF feature fusion module of the original YOLOv5s was added to the improved Backbone layer, which increased the fusion of different feature layers and strengthened the model's ability to extract features. Second, the CARAFE upsampling module was added to the Neck layer to improve the recognition accuracy of honeysuckle: different upsampling kernels were predicted at different positions, and these kernels varied greatly across feature layers, particularly with respect to global semantic information. As a result, the average precision of the model was improved with only a slightly higher number of parameters, allowing honeysuckle to be recognized rapidly and accurately for high harvesting efficiency. The experimental results show that the improved YOLOv5s model contains only 3.89 × 10^6 parameters, 55.5% of the original model; its computational volume is only 7.8 GFLOPs, 49.4% of the original; its weight file is only 7.8 MB, 57.4% of the original; and its precision and average precision reach 90.7% and 91.8%, respectively, 1.9 and 0.6 percentage points higher than the original. The improved lightweight model therefore achieves higher precision while reducing complexity, and its recognition of honeysuckle is fully realized in a form that facilitates deployment. The model can identify overlapping honeysuckle, and the detection boxes fully enclose the flowers without missed detections. Compared with the current mainstream Faster-RCNN, SSD, and YOLO-series object detection models, the improved lightweight model improves detection accuracy while significantly reducing the number of parameters, the computation, and the weight size. These findings provide a strong reference for the subsequent mobile deployment and recognition tasks of honeysuckle picking robots.
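For readers less familiar with the building blocks named above, the sketch below gives a minimal PyTorch rendering of two of them: a depthwise separable convolution and the SPPF module used in YOLOv5. The class names, channel sizes, and the simplified 1 × 1 convolutions are illustrative assumptions only, not the authors' implementation; the CARAFE upsampling module is omitted for brevity.

```python
# Minimal sketch, assuming PyTorch; class names and channel counts are hypothetical.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (groups == in_channels) followed by a 1x1 pointwise conv."""

    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.dw = nn.Conv2d(c_in, c_in, k, s, k // 2, groups=c_in, bias=False)
        self.pw = nn.Conv2d(c_in, c_out, 1, 1, 0, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.pw(self.dw(x))))


class SPPF(nn.Module):
    """Spatial Pyramid Pooling - Fast: three chained max-pools whose outputs are
    concatenated with the projected input (BN/activation omitted for brevity)."""

    def __init__(self, c_in, c_out, k=5):
        super().__init__()
        c_hidden = c_in // 2
        self.cv1 = nn.Conv2d(c_in, c_hidden, 1, 1)
        self.cv2 = nn.Conv2d(c_hidden * 4, c_out, 1, 1)
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.pool(x)
        y2 = self.pool(y1)
        y3 = self.pool(y2)
        return self.cv2(torch.cat([x, y1, y2, y3], dim=1))


if __name__ == "__main__":
    feat = torch.randn(1, 256, 20, 20)            # dummy backbone feature map
    feat = DepthwiseSeparableConv(256, 256)(feat)
    out = SPPF(256, 256)(feat)
    print(out.shape)                              # torch.Size([1, 256, 20, 20])
```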