Online detection technology for broken corn kernels based on deep learning

    • Abstract: Kernel breakage is the bottleneck restricting the popularization and application of direct corn kernel harvesting technology in China, and rapidly and accurately assessing kernel damage during harvest is key to intelligent corn harvesting. To address this problem, this study proposes a deep-learning-based device and method for detecting broken corn kernels. The method uses a kernel single-layering device to continuously acquire high-quality images of corn kernel beds, and detects broken kernels with a two-stage deep learning pipeline of segmentation followed by classification. In the segmentation stage, the classical instance segmentation model Mask R-CNN separates individual kernels within the imaged region; classification is then performed by BCK-CNN, a new network model proposed in this study and built from residual modules. To evaluate the effectiveness of the BCK-CNN classifier, it was compared with other typical deep learning classification models, and visualization techniques were used to assess how the different models classify corn kernels. The results show that BCK-CNN reaches classification accuracies of 96.5% for whole kernels and 94.2% for broken kernels. In addition, taking the average relative error as the evaluation index, the detection performance for broken kernels was verified through comparative simulation tests. Compared with manual calculation of the kernel breakage rate, the proposed method yields an average relative error of only 4.02%; deployed on a mobile industrial computer, the detection time for a single-cycle kernel-bed image is kept within 1.2 s. The results provide a reference for efficient and accurate detection of broken kernels during corn harvesting.
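
The two-stage pipeline summarized above can be sketched in code. The following is a minimal illustration only: it assumes a torchvision Mask R-CNN fine-tuned for a single "kernel" class, and uses a generic residual-block classifier (`KernelClassifier`) as a stand-in for BCK-CNN, whose exact architecture is not given in this abstract. The crop size, score threshold, and the helper `detect_broken_kernels` are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of the two-stage pipeline described in the abstract:
# stage 1 segments individual kernels with Mask R-CNN, stage 2 classifies each
# cropped kernel as whole or broken. BCK-CNN's architecture is not specified
# here, so a generic residual-block classifier is used as a stand-in.
import torch
import torch.nn as nn
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.transforms.functional import resize


class ResidualBlock(nn.Module):
    """Basic residual block (stand-in for the residual modules in BCK-CNN)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)


class KernelClassifier(nn.Module):
    """Two-class (whole / broken) classifier built from residual blocks."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1),
                                  nn.BatchNorm2d(32), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(ResidualBlock(32), ResidualBlock(32))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, 2))

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))


@torch.no_grad()
def detect_broken_kernels(image, segmenter, classifier, score_thresh=0.7):
    """Return (whole_count, broken_count) for one kernel-bed image (3xHxW, values in 0-1)."""
    segmenter.eval()
    classifier.eval()
    det = segmenter([image])[0]                      # per-kernel boxes, scores, masks
    counts = [0, 0]                                  # index 0 = whole, 1 = broken
    for box, score in zip(det["boxes"], det["scores"]):
        if score < score_thresh:
            continue
        x1, y1, x2, y2 = box.int().tolist()
        if x2 <= x1 or y2 <= y1:
            continue
        crop = image[:, y1:y2, x1:x2]                # single-kernel patch
        crop = resize(crop, [64, 64]).unsqueeze(0)   # classifier input size (assumed)
        cls = classifier(crop).argmax(dim=1).item()
        counts[cls] += 1
    return counts[0], counts[1]


if __name__ == "__main__":
    # num_classes=2: background + kernel; in practice the weights come from fine-tuning.
    segmenter = maskrcnn_resnet50_fpn(num_classes=2)
    classifier = KernelClassifier()
    dummy = torch.rand(3, 480, 640)                  # stand-in for a captured frame
    whole, broken = detect_broken_kernels(dummy, segmenter, classifier)
    print(f"whole={whole}, broken={broken}")
```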

       

      Abstract: Corn kernel damage is one of the most serious challenges during harvest and has restricted the popularization and application of direct kernel harvesting technology in China. Rapidly and accurately measuring kernel damage during harvest is therefore essential for intelligent corn harvesting. In this study, a detection device and method for broken corn kernels were proposed based on deep learning, comprising two parts: a kernel single-layering detection device and a detection algorithm. The single-layering device turned the chaotic grain flow into a stable state so that the acquired corn kernel images fully met the detection requirements; the feeding speed was controlled to ensure normal operation of the device during image acquisition, and the angle between the device and the horizontal plane was optimized to eliminate image dragging. A two-stage deep learning model of segmentation followed by classification was used to detect the damaged kernels. Specifically, the classical instance segmentation model Mask R-CNN was used to segment individual corn kernels within the imaged region in the segmentation stage, while classification was performed by a new network model (BCK-CNN) built from residual modules. The experiments showed that the Mask R-CNN model performed well on kernel segmentation and fully supported the subsequent classification of whole and broken kernels. The effectiveness of the BCK-CNN classifier was verified by comparing it with the classical GoogLeNet, VGG16, and ResNet classification networks and with the Mask R-CNN model, and visualization techniques were used to evaluate how the different models classify corn kernels. The results showed that BCK-CNN achieved the best overall classification performance, with accuracies of 96.5% and 94.2% for whole and broken kernels, respectively, and the highest detection efficiency among GoogLeNet, VGG16, ResNet, and Mask R-CNN; the average processing time for 60 single-kernel images was only 19.9 ms. The performance of the complete damaged-kernel detection method (Mask R-CNN + BCK-CNN) was then verified, with the average relative error against a manually calculated breakage rate selected as the evaluation index. The average relative error of the proposed method was only 4.02%, lower than that of Mask R-CNN used alone, Mask R-CNN + GoogLeNet, Mask R-CNN + VGG16, and Mask R-CNN + ResNet. When deployed on a mobile industrial computer, the detection time for a single-cycle kernel-bed image was kept within 1.2 s, which basically meets the requirement of real-time detection. These findings provide a strong reference for efficient and accurate detection of broken kernels during corn harvesting.
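
The evaluation described above compares the breakage rate computed from detected kernel counts against a manually counted reference using the average relative error. The sketch below assumes that metric is the mean of per-image relative errors; the counts in the usage example are hypothetical and do not come from the paper.

```python
# Sketch of the evaluation metric named in the abstract: the breakage rate is
# computed from per-image kernel counts, and the average relative error compares
# the detected rate against a manually counted reference over several trial images.
# The exact formula is assumed to be the mean of per-image relative errors.

def breakage_rate(whole: int, broken: int) -> float:
    """Breakage rate of one image: broken kernels / all detected kernels."""
    total = whole + broken
    return broken / total if total else 0.0


def average_relative_error(detected_rates, manual_rates) -> float:
    """Mean of |detected - manual| / manual over all trial images (manual rate > 0)."""
    errors = [abs(d - m) / m for d, m in zip(detected_rates, manual_rates)]
    return sum(errors) / len(errors)


if __name__ == "__main__":
    # Hypothetical (whole, broken) counts for three kernel-bed images:
    # detector output vs. manual counting of the same samples.
    detected = [breakage_rate(57, 3), breakage_rate(52, 5), breakage_rate(60, 2)]
    manual = [breakage_rate(58, 2), breakage_rate(53, 4), breakage_rate(59, 3)]
    print(f"average relative error: {average_relative_error(detected, manual):.2%}")
```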

       
