Recognizing safflower using improved lightweight YOLOv8n

    • Abstract: To address the problem that safflower recognition during intelligent harvesting is easily constrained by the complex field environment and by the limited computing resources of the equipment, this study proposed a lightweight safflower recognition method based on an improved YOLOv8n, so that the model can be deployed on mobile devices for object detection. The VanillaNet lightweight network structure was applied to replace the backbone feature-extraction network of YOLOv8n, reducing model complexity; the large separable kernel attention (LSKA) module was introduced into the feature-fusion network to reduce storage and computational resource consumption; the loss function of YOLOv8n was changed from the complete intersection over union (CIoU) to the wise intersection over union (WIoU), whose dynamic non-monotonic focusing mechanism improves the overall performance of the detector; and the stochastic gradient descent (SGD) algorithm was selected for model training to improve robustness. The experimental results showed that the improved lightweight model ran at 123.46 frames per second (FPS), 7.41% higher than the original YOLOv8n, with a model size of 3.00 MB, only 50.17% of the original. Its precision (P) and mean average precision (mAP) reached 93.10% and 96.40%, respectively. Compared with the YOLOv5s and YOLOv7-tiny detection models, the FPS increased by 25.93% and 19.76%, and the model size was only 21.90% and 25.86% of those models, respectively. The results can provide technical support for the subsequent development of intelligent safflower harvesting equipment.
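    The CIoU-to-WIoU swap can be illustrated with a short sketch. Below is a minimal PyTorch implementation of the Wise-IoU v1 bounding-box loss (the dynamic non-monotonic focusing used by later WIoU versions is omitted); the function name, box layout, and mean reduction are illustrative assumptions, not the authors' code.

```python
import torch

def wiou_v1_loss(pred, target, eps=1e-7):
    """Minimal Wise-IoU v1 sketch. Boxes are (x1, y1, x2, y2) tensors.

    L_WIoU = R_WIoU * (1 - IoU), where R_WIoU up-weights the IoU loss by
    the distance between box centers, normalized by the smallest enclosing
    box (detached so it contributes no gradients of its own).
    """
    # Intersection area
    ix1 = torch.max(pred[..., 0], target[..., 0])
    iy1 = torch.max(pred[..., 1], target[..., 1])
    ix2 = torch.min(pred[..., 2], target[..., 2])
    iy2 = torch.min(pred[..., 3], target[..., 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)

    # Union area and IoU
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Width/height of the smallest box enclosing both boxes
    wg = torch.max(pred[..., 2], target[..., 2]) - torch.min(pred[..., 0], target[..., 0])
    hg = torch.max(pred[..., 3], target[..., 3]) - torch.min(pred[..., 1], target[..., 1])

    # Squared distance between box centers
    dx = (pred[..., 0] + pred[..., 2] - target[..., 0] - target[..., 2]) / 2
    dy = (pred[..., 1] + pred[..., 3] - target[..., 1] - target[..., 3]) / 2

    # R_WIoU: the enclosing-box term is detached from the graph
    r_wiou = torch.exp((dx ** 2 + dy ** 2) / (wg ** 2 + hg ** 2 + eps).detach())
    return (r_wiou * (1.0 - iou)).mean()
```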

       

      Abstract: Safflower is one of the most important cash crops in China, and its production is concentrated in Xinjiang, Gansu and Ningxia. At present, however, safflower harvesting relies mainly on manual labour, and the field operating environment is easily affected by weather. Intelligent harvesting can be expected to improve harvesting efficiency while saving labour costs. Previous research has focused on pneumatic, pulling, combing, and cutting harvesters, but manual work is still required during harvesting. Autonomous operation can be realized by combining target detection and navigation in harvesting robots. However, the complex working environment in the field limits accurate recognition and localization during harvesting. This study aims to improve safflower recognition under the complex field environment during intelligent harvesting. A lightweight safflower recognition method was proposed using an improved YOLOv8n, so that the model can be deployed on mobile devices with limited computational resources for detection. A dataset of 2309 images was created and divided into two classes: picked and not picked. Safflower blooming was categorized into four stages, namely the bud, first flowering, prime bloom, and septum stages. The prime bloom stage is the best picking time and the most economically beneficial period of safflower; therefore, only safflowers in the prime bloom stage were picked, rather than those in the bud, first flowering, and septum stages. The improvement procedure was as follows. Firstly, the VanillaNet lightweight network structure was applied to substitute for the backbone of YOLOv8n, in order to reduce model complexity. Secondly, the large separable kernel attention (LSKA) module was introduced into the neck, in order to reduce storage and computational resource consumption. Thirdly, the loss function of YOLOv8n was revised from the complete intersection over union (CIoU) to the wise intersection over union (WIoU), in order to improve the overall performance of the detector. Finally, stochastic gradient descent (SGD) was chosen to train the model for robustness. The experimental results showed that the frames per second (FPS) of the improved lightweight model increased by 7.41%, while the weight file was only 50.17% of the original one. The precision (P) and mean average precision (mAP) reached 93.10% and 96.40%, respectively. Furthermore, compared with the YOLOv5s and YOLOv7-tiny models, the FPS was improved by 25.93% and 19.76%, and the weight file was only 21.90% and 25.86% of those models, respectively. Meanwhile, better robustness was achieved by the improved model. The Jetson Orin NX platform was then selected for deployment testing. The single-image detection time of YOLOv8n and YOLOv8n-VLWS was 0.38 s and 0.27 s, respectively, a 28.95% reduction relative to the original model. High-precision, lightweight real-time detection of safflower in the field was thus realized. The findings can provide technical support to develop intelligent harvesting equipment for safflower.
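    As a rough illustration of the training setup described above, the sketch below uses the Ultralytics YOLOv8 API with the SGD optimizer. The model YAML (with the VanillaNet backbone and LSKA neck) and the dataset file are hypothetical names, and switching the box loss to WIoU would require modifying the library's loss code, which is not shown.

```python
from ultralytics import YOLO

# Hypothetical config for the modified model (VanillaNet backbone,
# LSKA in the neck); the file name is an assumption for illustration.
model = YOLO("yolov8n-vlws.yaml")

# Train with SGD as in the paper; the dataset YAML name and the
# epoch/learning-rate settings are assumptions, not reported values.
model.train(
    data="safflower.yaml",   # two classes: picked / not picked
    epochs=100,
    imgsz=640,
    optimizer="SGD",         # Ultralytics lets you select SGD explicitly
    lr0=0.01,
    momentum=0.937,
)
```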

       
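    The FPS and single-image latency figures (e.g., on the Jetson Orin NX) can be reproduced by timing warmed-up inference calls; below is a minimal sketch, assuming trained weights and a sample image with hypothetical file names.

```python
import time
from ultralytics import YOLO

model = YOLO("yolov8n-vlws.pt")   # hypothetical trained weights
image = "safflower_sample.jpg"    # hypothetical test image

# Warm-up runs so one-time initialization does not skew the timing
for _ in range(10):
    model.predict(image, verbose=False)

# Average latency over repeated runs; FPS is its reciprocal
n = 100
start = time.perf_counter()
for _ in range(n):
    model.predict(image, verbose=False)
latency = (time.perf_counter() - start) / n
print(f"latency: {latency * 1000:.1f} ms, FPS: {1.0 / latency:.2f}")
```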
