A lightweight lotus seedpod quality grading algorithm using an improved YOLOv3-Tiny model

    Lightweight classification of lotus seedpod quality using improved YOLOv3-Tiny model

    • Abstract: An accurate and efficient lotus seedpod quality grading algorithm is an important step toward the automated post-harvest processing of lotus seedpods. Since little research has addressed the post-harvest quality grading of lotus seedpods, this study established grading criteria for lotus seedpod quality and proposed a grading algorithm based on an improved YOLOv3-Tiny (you only look once version 3-Tiny) model. First, cameras were set up to capture lotus seedpod images vertically under three lighting conditions, an experimental dataset was built, and the dataset was expanded through data augmentation. Next, the K-means clustering algorithm was used to redesign the prior anchor box scales and improve their regression accuracy. Then, a spatial pyramid pooling (SPP) module was added to the original YOLOv3-Tiny backbone network to strengthen its ability to extract feature information. Finally, the hyperparameter evolution module of YOLOv3-Tiny was used to evolve a suitable set of hyperparameters for the model. The experimental results show that the improved YOLOv3-Tiny model achieved a mean average precision (mAP) of 96.80% and a recall of 94.60% for lotus seed recognition; compared with the original YOLOv3-Tiny model, mAP increased by 12.49 percentage points, recall increased by 11.59 percentage points, and the frame rate reached 25 frames per second, 1.24 times that of the Faster R-CNN model. The results indicate that the proposed algorithm recognizes lotus seeds on the seedpod more accurately, meets the requirements of real-time detection, and can provide a technical reference for research on lotus seedpod quality grading.
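    The anchor redesign step mentioned in the abstract can be illustrated with a short sketch. The code below is a minimal, hypothetical NumPy implementation of IoU-based K-means clustering over ground-truth (width, height) pairs, in the style commonly used for YOLO-family detectors; the function names are illustrative and this is not the authors' code.

    import numpy as np

    def iou_wh(boxes, centroids):
        # IoU between (w, h) pairs, assuming all boxes share a common top-left corner.
        # boxes: (N, 2), centroids: (K, 2) -> returns an (N, K) IoU matrix.
        inter_w = np.minimum(boxes[:, None, 0], centroids[None, :, 0])
        inter_h = np.minimum(boxes[:, None, 1], centroids[None, :, 1])
        inter = inter_w * inter_h
        union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
                (centroids[:, 0] * centroids[:, 1])[None, :] - inter
        return inter / union

    def kmeans_anchors(boxes, k=6, iters=300, seed=0):
        # Cluster ground-truth (w, h) pairs into k prior anchors using 1 - IoU as the distance.
        rng = np.random.default_rng(seed)
        centroids = boxes[rng.choice(len(boxes), size=k, replace=False)].astype(float)
        for _ in range(iters):
            assign = np.argmax(iou_wh(boxes, centroids), axis=1)  # highest IoU = nearest centroid
            new_centroids = np.array([
                boxes[assign == i].mean(axis=0) if np.any(assign == i) else centroids[i]
                for i in range(k)
            ])
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        # Average best IoU between each label box and its closest anchor.
        avg_iou = iou_wh(boxes, centroids).max(axis=1).mean()
        return centroids[np.argsort(centroids.prod(axis=1))], avg_iou

    Clustering the lotus seedpod labels with k = 6 would yield six (width, height) anchors of the kind reported in the English abstract; the 85.48% accuracy figure may correspond to the average best IoU between labels and their nearest anchor, although the exact definition of that metric is not given on this page.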

       

      Abstract: Accurate and efficient quality grading is one of the most important steps toward the automated post-harvest processing of lotus seedpods, yet research on the post-harvest quality grading of lotus seedpods is still scarce. In this study, grading criteria for lotus seedpod quality were established according to the evaluation scheme for white lotus quality published by the Chinese Association of Traditional Chinese Medicine, and an improved YOLOv3-Tiny (You Only Look Once version 3-Tiny) model was proposed for quality grading. First, a camera was mounted vertically on an image acquisition platform designed according to the requirements of image recognition, and lotus seedpod images were collected under three lighting conditions. The resulting dataset was expanded with three image augmentation techniques: rotation, color switching, and affine transformation. K-means clustering was then employed to redesign the scales of the prior anchor boxes and improve their regression accuracy; six anchor scales were generated, namely (14, 18), (19, 21), (20, 27), (36, 44), (42, 49), and (50, 56), with an accuracy of 85.48%. Subsequently, a spatial pyramid pooling (SPP) module was added to the original YOLOv3-Tiny backbone network to strengthen feature extraction. Finally, the hyperparameter evolution module of YOLOv3-Tiny was used to evolve a set of appropriate hyperparameters for the model. Ablation experiments were carried out to verify the effectiveness of data augmentation, the redesigned prior anchor box scales, the improved feature extraction backbone, and hyperparameter evolution. Data augmentation increased the Precision (P), Recall (R), and mean Average Precision (mAP) by 4.95, 4.52, and 5.61 percentage points, respectively, compared with the original model. Redesigning the prior anchor box scales raised P, R, and mAP by 0.76, 0.10, and 0.25 percentage points over the model with the original anchor scales. Introducing the SPP module increased P, R, and mAP by 1.12, 0.10, and 0.23 percentage points, respectively, and hyperparameter evolution added a further 7.94, 6.87, and 6.40 percentage points, indicating the high overall robustness of the model. The improved YOLOv3-Tiny model was then tested on lotus seedpod quality grading and compared with the original YOLOv3-Tiny and Faster R-CNN models. It achieved an mAP of 96.80%, a P of 93.10%, and an R of 94.60%; compared with the original YOLOv3-Tiny model, mAP, P, and R increased by 12.49, 14.77, and 11.59 percentage points, respectively, and the detection speed reached 25 frames per second, 1.24 times that of the Faster R-CNN model. These ablation and comparison results verify that data augmentation, the redesigned prior anchor box scales, the improved feature extraction backbone, and hyperparameter evolution effectively enhance detection performance for all quality grades of post-harvest lotus seedpods. The improved algorithm recognizes lotus seeds more accurately, fully meets the requirement for real-time detection, and can provide a technical reference for the grading of lotus seedpod quality.
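      The SPP module described above is typically the YOLOv3-SPP style block, in which several stride-1 max-pooling layers with different kernel sizes run in parallel, their outputs are concatenated with the input along the channel dimension, and a 1x1 convolution fuses the result back to the desired width. Below is a minimal PyTorch sketch assuming the commonly used 5, 9, and 13 kernel sizes; the paper's exact configuration is not stated on this page.

    import torch
    import torch.nn as nn

    class SPP(nn.Module):
        # Spatial Pyramid Pooling block: parallel stride-1 max-pools at several kernel
        # sizes are concatenated with the input, then fused with a 1x1 convolution.
        def __init__(self, in_channels, out_channels, pool_sizes=(5, 9, 13)):
            super().__init__()
            self.pools = nn.ModuleList(
                [nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in pool_sizes]
            )
            self.fuse = nn.Sequential(
                nn.Conv2d(in_channels * (len(pool_sizes) + 1), out_channels,
                          kernel_size=1, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.LeakyReLU(0.1, inplace=True),
            )

        def forward(self, x):
            # Stride-1 pooling with matching padding keeps the spatial size unchanged.
            return self.fuse(torch.cat([x] + [p(x) for p in self.pools], dim=1))

    # Example: a 13x13 feature map keeps its resolution while gaining multi-scale context.
    # SPP(256, 256)(torch.randn(1, 256, 13, 13)).shape -> torch.Size([1, 256, 13, 13])

      Because the pooling layers use stride 1 and matching padding, the spatial resolution is unchanged, so the block adds multi-scale context without affecting the rest of the YOLOv3-Tiny detection head.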
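      The hyperparameter evolution step refers to the genetic-style search shipped with YOLOv3-family training code, in which the current best hyperparameters are repeatedly mutated, the model is retrained, and a mutation is kept only if a fitness score (typically dominated by mAP) improves. The loop below is a deliberately simplified, hypothetical sketch of that idea; train_and_eval stands in for a full training-and-validation run and is not part of the paper.

    import random

    def evolve(base_hyp, train_and_eval, generations=30, sigma=0.2, seed=0):
        # Minimal genetic-style hyperparameter evolution: mutate the current best
        # hyperparameters, retrain, and keep the mutation only if fitness improves.
        random.seed(seed)
        best_hyp, best_fit = dict(base_hyp), train_and_eval(base_hyp)
        for _ in range(generations):
            child = {
                # Multiplicative Gaussian mutation, clipped away from zero.
                k: max(v * (1.0 + random.gauss(0.0, sigma)), 1e-8)
                for k, v in best_hyp.items()
            }
            fit = train_and_eval(child)
            if fit > best_fit:
                best_hyp, best_fit = child, fit
        return best_hyp, best_fit

      A full implementation typically also bounds each hyperparameter within a predefined range and draws parents from the best previous generations, but the keep-if-better structure sketched here is the same.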

       
