Duan Lingfeng, Xiong Xiong, Liu Qian, Yang Wanneng, Huang Chenglong. Field rice panicle segmentation based on deep full convolutional neural network[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2018, 34(12): 202-209. DOI: 10.11975/j.issn.1002-6819.2018.12.024

    Field rice panicle segmentation based on deep full convolutional neural network

    • Abstract: Panicle traits are significant in rice yield measurement, disease detection, nutrition diagnosis and growth period assessment. Precise segmentation of panicles from rice images is a prerequisite for panicle trait measurement and rice phenotyping. However, panicle segmentation across various accessions and growth periods in the complex field environment remains a major challenge. In this paper, we propose a robust and fast image segmentation method for rice panicles in the field based on a deep fully convolutional neural network. The network for panicle segmentation, named PanicleNet, was based on SegNet. The architecture of PanicleNet is similar to that of SegNet, except that the number of outputs of the last convolutional layer is set to 2, corresponding to the 2 classes (panicle pixel / non-panicle pixel). The overall segmentation consists of 2 steps: an offline training step and an online segmentation step. At the offline training step, 50 original field rice images with a resolution of 1971×1815 pixels, covering different accessions and growth stages, were used to train PanicleNet. Each original image was first divided into 24 sub-images with a resolution of 360×480 pixels. Data were augmented by adjusting the illumination component of the sub-images to simulate the changing illumination in the field. In total, 2880 images were used as the training set and 720 images as the validation set. The network was trained with Caffe using stochastic gradient descent (SGD) with momentum; the momentum was set to 0.9 and the learning rate to 0.001. The network was initialized from VGGNet and then fine-tuned on our data. The batch sizes for the training set and validation set were set to 4 and 2, respectively. The network was validated every 720 epochs, and training was stopped when the error converged, after 72000 epochs in total.
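The tiling and illumination augmentation described above can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation: the function names, the edge-padding strategy, and the brightness factors are assumptions made here for illustration.

```python
import numpy as np

def tile_image(img, tile_h=360, tile_w=480):
    # Pad the bottom/right edges so the image divides evenly,
    # then cut it into fixed-size sub-images.
    h, w = img.shape[:2]
    pad_h, pad_w = (-h) % tile_h, (-w) % tile_w
    padded = np.pad(img, ((0, pad_h), (0, pad_w), (0, 0)), mode="edge")
    return [padded[y:y + tile_h, x:x + tile_w]
            for y in range(0, padded.shape[0], tile_h)
            for x in range(0, padded.shape[1], tile_w)]

def augment_illumination(tile, factors=(0.8, 1.2)):
    # Simulate changing field illumination by scaling pixel intensity.
    out = [tile]
    for f in factors:
        out.append(np.clip(tile.astype(np.float32) * f, 0, 255).astype(np.uint8))
    return out
```

With two brightness factors, each sub-image yields 3 samples, which is consistent with the 1200 sub-images (50 images × 24) growing to 3600 training and validation images in the abstract.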
The final trained model (PanicleNet) was saved to disk and can be called through the Caffe C++ interface during online segmentation. At the online segmentation step, an original image was first divided into sub-images with a resolution of 360×480 pixels; the number of sub-images depends on the size of the original image. Each sub-image was then fed to PanicleNet for pixel-wise classification, and the resulting sub-image segmentations were stitched together to obtain the segmentation of the original image. Tests showed that PanicleNet was capable of dealing with complex and irregular panicle borders, cluttered backgrounds, color overlap between panicles and leaves, and large variation in panicle color, shape, size and texture caused by differences in rice accession and growth period, as well as illumination imbalance and variation. Two image datasets, Dataset2016 and Dataset2017, were used to evaluate the performance of the segmentation method. The 23 images in Dataset2016 were those remaining from the 73 images taken in 2016, 50 of which had been used for training PanicleNet; the 22 images in Dataset2017 were taken in 2017. The rice plants in different images belonged to different varieties: in total, 50 varieties were involved in the training phase and another 45 varieties in the testing phase. The growth periods of the rice plants also varied, from heading stage to mature stage. The performance of PanicleNet was compared with 4 other approaches: Panicle-SEG, HSeg, i2 hysteresis thresholding, and jointSeg. PanicleNet outperformed all 4. For Dataset2016, the average Qseg and F-measure values for PanicleNet were 0.76 and 0.86, respectively; for Dataset2017, they were 0.67 and 0.80. PanicleNet took approximately 2 s to segment an image with a resolution of 1971×1815 pixels.
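The Qseg and F-measure scores reported above can be computed from binary segmentation masks. A minimal sketch, under the assumption (made here, not stated in the abstract) that Qseg is the intersection-over-union of the predicted and ground-truth panicle masks, and F-measure is the harmonic mean of pixel-wise precision and recall:

```python
import numpy as np

def qseg(pred, truth):
    # Segmentation quality: |pred AND truth| / |pred OR truth|.
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

def f_measure(pred, truth):
    # Harmonic mean of pixel-wise precision and recall.
    tp = np.logical_and(pred, truth).sum()
    prec = tp / pred.sum() if pred.sum() else 0.0
    rec = tp / truth.sum() if truth.sum() else 0.0
    return 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
```

Both scores range from 0 (no overlap) to 1 (perfect agreement), so the reported values of 0.76/0.86 (Dataset2016) and 0.67/0.80 (Dataset2017) can be read on the same scale.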
The proposed method improves the accuracy and efficiency of panicle segmentation and provides a novel tool for rice phenotyping, breeding and cultivation.