Tomato florescence recognition and detection method based on cascaded convolutional neural networks

• Abstract: Accurate identification of the flowering state of crops is a prerequisite for the pollination of greenhouse crops. To improve the accuracy of florescence recognition, this study took greenhouse tomatoes as an example and proposed a tomato florescence recognition and detection method based on cascaded convolutional neural networks. First, an improved Flower Extraction Feature Pyramid Network (FE-FPN) was used to extract the local regions of tomato bouquets; the extracted bouquet images were then ranked by recognition priority using a Prim minimum spanning tree and fed in order into an improved Yolov3 network to accurately identify and detect the different flowering stages of tomato flowers. Experiments were conducted on a tomato bouquet image dataset of 1 600 samples covering four flowering stages. The proposed method detected the different flowering stages well, with a mean average precision of 82.79% and an average detection time of 12.54 ms per image; the detection accuracy for each stage was 85.71% for the bud stage, 95.46% for the full-bloom stage, 62.66% for the wilting stage, and 88.34% for the early-fruit stage. Compared with Mask R-CNN and the Spatial Pyramid Pooling Network (SPP-Net), the mean detection accuracy was 3.67 and 2.39 percentage points higher, respectively, and the recognition error rate was 1.25 percentage points lower than that of the baseline Yolov3 network. Finally, the proposed method was deployed on a tomato pollination robot and validated in a large glass greenhouse, achieving a recognition accuracy of 76.67%, or 85.18% when bouquets missed at the extraction stage were excluded. The results can provide an important basis for the precise operation of greenhouse tomato pollination robots.
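The abstract describes ordering the extracted bouquet regions with a Prim minimum spanning tree before they are passed to the second-stage detector. The sketch below illustrates one plausible reading of that step: treat each bouquet bounding box as a graph node, weight edges by the Euclidean distance between box centres, and use the order in which Prim's algorithm adds nodes to the tree as the recognition priority. The region format, the distance-based edge weights, and the choice of starting node are assumptions made for illustration, not details taken from the paper.

```python
# Hypothetical sketch of a Prim-based priority ordering for extracted bouquet
# regions. Regions are axis-aligned boxes; edges are weighted by the Euclidean
# distance between box centres; the MST insertion order is the priority order.
import math
from typing import List, Tuple

Region = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max), assumed format


def centre(r: Region) -> Tuple[float, float]:
    return ((r[0] + r[2]) / 2.0, (r[1] + r[3]) / 2.0)


def prim_priority_order(regions: List[Region], start: int = 0) -> List[int]:
    """Return region indices in the order Prim's algorithm visits them."""
    n = len(regions)
    if n == 0:
        return []
    centres = [centre(r) for r in regions]
    in_tree = [False] * n
    best = [math.inf] * n  # best[i]: cheapest known edge from the tree to node i
    best[start] = 0.0
    order: List[int] = []
    for _ in range(n):
        # Pick the cheapest node not yet in the tree.
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        order.append(u)
        ux, uy = centres[u]
        # Relax edges from the newly added node.
        for v in range(n):
            if not in_tree[v]:
                d = math.hypot(centres[v][0] - ux, centres[v][1] - uy)
                if d < best[v]:
                    best[v] = d
    return order


if __name__ == "__main__":
    boxes = [(10, 10, 60, 60), (200, 40, 260, 100), (80, 20, 130, 70)]
    print(prim_priority_order(boxes))  # nearest-first traversal, e.g. [0, 2, 1]
```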

       

Abstract: Accurate identification of the flowering state of crops is a prerequisite for the pollination of greenhouse crops. To improve the accuracy of florescence recognition, this study proposes a method for recognizing and detecting tomato florescence based on cascaded convolutional neural networks. Because tomato flowers grow in a complex environment, they appear as small, multi-target distributions, and a single bouquet can contain flowers at different flowering stages. A single network can recognize tomato bouquets, but it cannot simultaneously and accurately recognize flowers at multiple flowering stages within a bouquet, because the flower feature information it captures is insufficient. To address these problems, this paper proposes a method that cascades two neural networks, aiming to achieve precise identification of the tomato flowering stage and to explore a new recognition approach for the precise operation of tomato pollination robots. First, an improved Flower Extraction Feature Pyramid Network (FE-FPN) is used to extract the regions of tomato bouquets; a Prim minimum spanning tree is then used to assign recognition priorities to the extracted bouquet images; finally, the sorted bouquet images are fed into an improved Yolov3 network to accurately identify the flowering state of the tomato flowers. Images were captured at 8:00 a.m., 12:00 noon, and 6:00 p.m., and experiments were conducted on a dataset of 1 600 tomato bouquet images covering the bud, full-bloom, wilting, and early-fruit stages. The first-stage improved FE-FPN performs multi-scale prediction and pixel-level extraction of tomato bouquets, achieving an average correct extraction rate of 98.11%, an over-extraction rate of 3.56%, and a missed-extraction rate of 5.42%. The second-stage network uses an improved multi-scale, multi-input Yolov3 to accurately identify the flowering stage of individual flowers; while increasing network speed, it strengthens the fusion of target feature information, yielding a higher recognition rate and accuracy. The mean average precision (mAP) for detecting the flowering stages of tomato flowers is 82.79%, and the average detection time is 12.54 ms per image. The mean detection accuracy is 3.67 and 2.39 percentage points higher than that of Mask R-CNN and the Spatial Pyramid Pooling Network (SPP-Net), respectively, and the recognition error rate is 1.25 percentage points lower than that of the unimproved Yolov3 network, particularly for the bud stage. Finally, the method was deployed on a tomato pollination robot and verified in a large glass greenhouse; the recognition accuracy in this complex environment was 76.67%, rising to 85.18% when bouquets missed at the extraction stage were excluded. Because flowers at the wilting stage are very similar in color and shape to those at the bud stage, the recognition accuracy for the wilting stage is lower; however, when the method is deployed on a greenhouse tomato pollination robot, flowers at the wilting stage do not need to be pollinated. The research results can provide an important basis for the precise operation of intelligent pollination robots.
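For readers who want the overall flow at a glance, the following is a minimal sketch of how the two-stage cascade described above could be wired together: a first-stage bouquet detector (FE-FPN-style) proposes regions, the regions are ordered by a priority function such as the Prim-based routine sketched earlier, and a second-stage detector (improved-Yolov3-style) classifies each region into one of the four flowering stages. The function names, interfaces, and stage labels are placeholders assumed for illustration; they are not the authors' implementation.

```python
# Hypothetical wiring of the two-stage cascade. Both detectors are passed in as
# callables so the cascade logic stays independent of any particular framework.
from typing import Callable, List, Tuple

Region = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)
Detection = Tuple[str, float]               # (flowering stage label, confidence)

# The four stages named in the abstract (labels here are illustrative).
STAGES = ("bud", "full-bloom", "wilting", "early-fruit")


def cascade_detect(
    image: object,
    bouquet_detector: Callable[[object], List[Region]],
    florescence_detector: Callable[[object, Region], List[Detection]],
    order_fn: Callable[[List[Region]], List[int]],
) -> List[Tuple[Region, List[Detection]]]:
    """Stage 1 proposes bouquet regions, stage 2 classifies each in priority order."""
    regions = bouquet_detector(image)        # stage 1: FE-FPN-style bouquet proposals
    results: List[Tuple[Region, List[Detection]]] = []
    for idx in order_fn(regions):            # e.g. the Prim-based priority order
        region = regions[idx]
        results.append((region, florescence_detector(image, region)))
    return results


if __name__ == "__main__":
    # Stand-in models for illustration only; real detectors would wrap trained networks.
    fake_bouquets = lambda img: [(10, 10, 60, 60), (200, 40, 260, 100)]
    fake_stages = lambda img, r: [("full-bloom", 0.95)]
    identity_order = lambda rs: list(range(len(rs)))  # swap in prim_priority_order above
    print(cascade_detect(None, fake_bouquets, fake_stages, identity_order))
```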

       
