Crop root segmentation and phenotypic information extraction based on minirhizotron images
Graphical Abstract
Abstract
Crop root images collected by the minirhizotron method have a complex soil background in which roots occupy a relatively small proportion of pixels. Common semantic segmentation models used for root extraction may misclassify pixels at root edges as soil when the receptive field is small or multi-scale feature fusion is inadequate. Moreover, the long image acquisition cycle of the minirhizotron method, and the initial difficulty of collecting a large number of valid samples, hinder rapid deployment of root extraction models. To improve the accuracy of root phenotyping measurements and optimize model deployment strategies, this study developed an in-situ automatic root imaging system to acquire minirhizotron images of crops in real time, and constructed a full-scale skip feature fusion mechanism for the U2-Net model, whose rich receptive fields enable effective classification of root pixels in minirhizotron images. Data augmentation was combined with transfer-learning fine-tuning to achieve rapid deployment of root extraction models for target species. The full-scale skip feature fusion mechanism fuses the output features of the upper encoder and lower decoder layers of the U2-Net model across all scales and feeds them as input to a given decoder layer, thereby retaining more feature information and enhancing the decoder's ability to restore detail. For model deployment, this study compared transfer-learning fine-tuning with mixed training to address model training with limited samples. Experimental materials included garlic sprout root images collected by the self-developed in-situ automatic root imaging system and the publicly available minirhizotron dataset PRMI (Plant Root Minirhizotron Imagery). The experimental design included performance comparisons on the PRMI dataset, followed by analysis and validation on the garlic sprout data.
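As a minimal sketch of the fusion idea only (not the authors' implementation, which operates inside U2-Net), full-scale skip fusion can be illustrated in NumPy: feature maps from different encoder/decoder scales are resized to the target decoder resolution and concatenated along the channel axis before entering that decoder layer. All shapes and channel counts here are hypothetical.

```python
import numpy as np

def resize_to(feat, size):
    """Nearest-neighbour resize of a (C, H, W) feature map to (C, size, size)."""
    c, h, w = feat.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return feat[:, rows][:, :, cols]

def full_scale_fusion(feats, target_size):
    """Fuse feature maps from all scales: bring each to the target
    decoder resolution, then concatenate along the channel axis."""
    resized = [resize_to(f, target_size) for f in feats]
    return np.concatenate(resized, axis=0)

# Hypothetical multi-scale features (channels, H, W) from encoder/decoder stages
feats = [np.random.rand(4, 64, 64),   # shallow encoder output
         np.random.rand(8, 32, 32),   # mid-level feature
         np.random.rand(16, 16, 16)]  # deep decoder output
fused = full_scale_fusion(feats, target_size=32)
print(fused.shape)  # (28, 32, 32): 4 + 8 + 16 channels at the decoder scale
```

In the actual model, each fused tensor would additionally pass through convolutions; the point here is only that features from every scale, both shallower and deeper, contribute to each decoder layer's input.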
The comparison models comprised the U2-Net model before and after improvement, as well as U-Net, SegNet, and DeeplabV3+_ResNet50. Finally, the predictions of the fine-tuning and mixed training methods on the garlic sprout test set were compared to identify effective strategies for rapid model deployment. The experimental results show that on the garlic sprout dataset, the improved U2-Net model achieves an average F1 score and IoU (intersection over union) of 86.54% and 76.28%, respectively. Compared with the pre-improvement U2-Net, U-Net, SegNet, and DeeplabV3+_ResNet50 models, the average F1 increases by 0.66, 5.51, 8.67, and 2.84 percentage points, respectively, while the average IoU increases by 1.02, 8.18, 12.52, and 4.31 percentage points, respectively. Practical segmentation shows improved recognition of root edges, with significantly reduced over-segmentation and under-segmentation. On the limited-sample garlic sprout dataset, transfer-learning fine-tuning outperformed mixed training, improving F1 and IoU by 2.89 and 4.45 percentage points, respectively. For root length, area, and average diameter, the coefficients of determination (R²) between measurements from the model's predicted images and from manually labeled images reached 0.965, 0.966, and 0.830, respectively. This study offers a reference for improving the accuracy of root phenotype measurement from minirhizotron images and for the rapid deployment of root extraction models.
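For reference, the F1 score and IoU used above are standard binary-segmentation metrics and can be computed from prediction and ground-truth masks as follows. This is a generic sketch with toy masks, not the study's evaluation code.

```python
import numpy as np

def f1_and_iou(pred, gt):
    """F1 score and IoU for binary segmentation masks (foreground = 1)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # root pixels correctly predicted
    fp = np.logical_and(pred, ~gt).sum()   # soil predicted as root
    fn = np.logical_and(~pred, gt).sum()   # root predicted as soil
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return f1, iou

# Toy example: one extra pixel predicted at a "root edge"
gt   = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
pred = np.array([[0, 1, 1], [1, 1, 0], [0, 0, 0]])
f1, iou = f1_and_iou(pred, gt)
print(round(f1, 3), round(iou, 3))  # 0.857 0.75
```

IoU penalizes boundary errors more heavily than F1 (for the same masks, IoU ≤ F1), which is why edge misclassification in thin root structures shows up strongly in the IoU gap between models.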