Liu Libo, Cheng Xiaolong, Lai Junchen. Segmentation method for cotton canopy image based on improved fully convolutional network model[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2018, 34(12): 193-201. DOI: 10.11975/j.issn.1002-6819.2018.12.023


    Segmentation method for cotton canopy image based on improved fully convolutional network model

    • 摘要 (Abstract): To address the low segmentation accuracy and poor performance of the traditional fully convolutional network (FCN), this paper proposes an improved FCN segmentation method for cotton canopy images that is combined with a conditional random field (CRF). First, the FCN is trained by extracting and learning image features to optimize its segmentation performance, yielding preliminary segmentation results and a trained FCN model. Next, the preliminary results are fed into the CRF in the form of pixels and their corresponding classification vectors; an energy function is constructed from the relative relationships between pixels and the CRF is trained, refining the preliminary results and producing a trained CRF model. The parameters of the FCN and CRF models are then further tuned through a validation process to obtain the optimal FCN and CRF. Finally, the optimal FCN and CRF are combined to segment cotton canopy images, and experiments are carried out. The results show that the mean pixel accuracy of the method is 83.24%, the mean intersection-over-union is 71.02%, and the average speed reaches 0.33 s per image; compared with the traditional FCN, these two metrics are improved by 16.22 and 12.1 percentage points respectively, a clear improvement. Compared with the Zoom-out and CRFasRNN (conditional random fields as recurrent neural networks) segmentation methods, the mean pixel accuracy is improved by 4.56 and 1.69 percentage points, and the mean intersection-over-union by 7.23 and 0.83 percentage points, respectively. Compared with the logistic regression and SVM (support vector machine) methods, the mean pixel accuracy is improved by 3.29 and 4.01 percentage points, and the mean intersection-over-union by 2.69 and 3.55 percentage points, respectively. The method can accurately segment the canopy target region under complex backgrounds and complex illumination, shows good robustness, and can serve as a reference for automatic monitoring of cotton growth status.
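    The abstract states that the CRF energy function is built from each pixel's classification vector together with the relative relationships between pixels, but does not give its exact form. As a hedged illustration only, the standard pixel-level pairwise CRF energy commonly used to refine FCN score maps (assumed here, not taken from the paper) is:

    % A hedged sketch, not the paper's exact formulation: a standard pairwise
    % CRF energy of the kind typically paired with FCN per-pixel class scores.
    E(\mathbf{x}) = \sum_{i}\psi_u(x_i) + \sum_{i<j}\psi_p(x_i, x_j),
    \qquad
    \psi_u(x_i) = -\log P_{\mathrm{FCN}}(x_i \mid I)

    \psi_p(x_i, x_j) = \mu(x_i, x_j)\left[
        w^{(1)}\exp\!\left(-\frac{\lVert p_i - p_j\rVert^{2}}{2\theta_\alpha^{2}}
                           -\frac{\lVert I_i - I_j\rVert^{2}}{2\theta_\beta^{2}}\right)
      + w^{(2)}\exp\!\left(-\frac{\lVert p_i - p_j\rVert^{2}}{2\theta_\gamma^{2}}\right)
    \right]

    Here x_i is the label of pixel i, the unary term \psi_u comes from the FCN's per-pixel class scores, p_i and I_i are the position and colour of pixel i, \mu is a label-compatibility function, and the weights w^{(m)} and bandwidths \theta would be learned or tuned on the validation set. Minimizing E(\mathbf{x}) is what refines the FCN's initial segmentation.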

       

      Abstract: Using computer vision technology to monitor the growing state of cotton is the development trend of cotton production informatization and a widely applied automation technology. Segmentation of cotton canopy images under natural scenes is an important step in monitoring cotton growth status. Because existing methods perform poorly on cotton field images under complex illumination and background conditions and have limited adaptability, an image segmentation method based on a fully convolutional network (FCN) and a conditional random field (CRF) was proposed. First, the FCN was used to extract image features and was trained with the back-propagation algorithm and stochastic gradient descent (SGD) to obtain initial segmentation results. The FCN is an improvement on the convolutional neural network (CNN) widely used in image classification: its fully connected layers are converted into convolution layers so that the output has the same size as the input image, which makes it applicable to image segmentation. VGG16 was chosen as the basic network structure of the FCN segmentation model, and to improve segmentation accuracy a skip structure was designed that fuses the pool3, pool4 and pool5 feature maps to form FCN-8s. Second, the initial segmentation results were input into the CRF; the classification vector of each image pixel and the relative relationships between pixels were used to construct the energy function and train the CRF, yielding the best-performing CRF model. The CRF is an undirected graphical model built on the relationships between image pixels: each pixel carries a label, and the CRF improves the quality of the segmentation results by minimizing the energy function of the graph. Third, the structures and parameters of the FCN and CRF models were further optimized through a validation process. Finally, the trained FCN and CRF were used to perform inference on input images to obtain the final segmentation results. To verify the effect of the proposed method, 625 cotton canopy images were captured in the cotton producing areas of Xinjiang, China from April to July 2017. The images were collected with a Canon EOS 6D digital camera at 1360×800 pixels and resized to 850×500 pixels; 375 images were used as the training set, 125 as the validation set, and 125 as the test set. The method was implemented in Python 2.7 on Ubuntu 14.04 LTS with the Chainer 1.24 deep learning framework on an NVIDIA Quadro K5000 GPU (graphics processing unit). The experimental results show that the mean pixel accuracy of the proposed method is 83.24% and the mean intersection-over-union (IoU) is 71.02%; compared with the traditional fully convolutional network, these metrics are improved by 16.22 and 12.1 percentage points respectively, a clear improvement.
Compared with the Zoom-out and CRFasRNN (conditional random fields as recurrent neural networks) segmentation methods, the mean pixel accuracy of the proposed method is improved by 4.56 and 1.69 percentage points, and the mean IoU by 7.23 and 0.83 percentage points, respectively. Compared with the logistic regression and SVM (support vector machine) methods, the mean pixel accuracy is improved by 3.29 and 4.01 percentage points, and the mean IoU by 2.69 and 3.55 percentage points, respectively. To verify the robustness of the proposed method, cotton canopy images under a complex environment (plastic mulch film), a complex canopy area (leaf occlusion) and complex illumination conditions (poor contrast in the canopy area) were segmented by the different methods. Thanks to GPU parallel computation, the proposed segmentation method is efficient, with an average processing time of 0.33 s per image. The segmentation results show that the proposed method is less affected by the environment and has better robustness. This study can provide a reference for automatic monitoring of cotton growth status.
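The abstract names VGG16 as the backbone and a skip structure that fuses the pool3, pool4 and pool5 feature maps into an FCN-8s output, implemented with Chainer 1.24. The following is a minimal sketch of that skip fusion in Chainer; the class name, channel counts (taken from the standard VGG16 pooling stages), kernel sizes and the two-class setting are illustrative assumptions, not the authors' exact code.

# -*- coding: utf-8 -*-
# Minimal sketch of FCN-8s-style skip fusion in Chainer (framework named in
# the abstract).  Layer names, kernel sizes and n_class are assumptions; the
# channel counts follow the standard VGG16 pooling stages.  Assumes the input
# height and width are divisible by 32 so the pooled maps align.
import chainer
import chainer.functions as F  # noqa: F401  (used when training the full model)
import chainer.links as L


class SkipFusionHead(chainer.Chain):
    """Fuses pool3/pool4/pool5 feature maps into a per-pixel score map."""

    def __init__(self, n_class=2):
        super(SkipFusionHead, self).__init__(
            # 1x1 convolutions that turn each pooled feature map into class
            # scores (VGG16 gives 256/512/512 channels at pool3/pool4/pool5).
            score_pool3=L.Convolution2D(256, n_class, 1),
            score_pool4=L.Convolution2D(512, n_class, 1),
            score_pool5=L.Convolution2D(512, n_class, 1),
            # Learned upsampling (deconvolution) layers: 2x, 2x, then 8x.
            upscore5=L.Deconvolution2D(n_class, n_class, 4, stride=2, pad=1),
            upscore4=L.Deconvolution2D(n_class, n_class, 4, stride=2, pad=1),
            upscore3=L.Deconvolution2D(n_class, n_class, 16, stride=8, pad=4),
        )

    def __call__(self, pool3, pool4, pool5):
        # Score the coarsest map (1/32 resolution) and upsample 2x to 1/16.
        h = self.upscore5(self.score_pool5(pool5))
        # Add the pool4 scores and upsample 2x again to 1/8 resolution.
        h = self.upscore4(h + self.score_pool4(pool4))
        # Add the pool3 scores and upsample 8x back to the input resolution.
        return self.upscore3(h + self.score_pool3(pool3))

In the full model the pooled feature maps would come from a VGG16 backbone whose fully connected layers have been converted to convolutions; the fused score map would then be trained with a softmax cross-entropy loss and SGD as described in the abstract, and the resulting per-pixel class scores passed to the CRF for refinement.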

       
