Zhu Fengle, Yan Shuang, Sun Lin, He Mengzhu, Zheng Zengwei, Qiao Xin. Estimation method of lettuce phenotypic parameters using deep learning multi-source data fusion[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2022, 38(9): 195-204. DOI: 10.11975/j.issn.1002-6819.2022.09.021


    Estimation method of lettuce phenotypic parameters using deep learning multi-source data fusion

    • Abstract: Non-destructive, high-precision estimation of the external phenotypic parameters of lettuce is of great significance for round-the-clock growth monitoring. To improve the generalization performance of lettuce phenotypic parameter estimation models, this study took substrate-cultured lettuce as the research object and proposed a deep-learning-based method that fuses two-dimensional RGB images and depth (Depth) images for high-precision estimation of five lettuce phenotypic parameters (fresh weight, dry weight, plant height, diameter, and leaf area). A phenotypic dataset covering the whole growth period of four lettuce cultivars was collected, comprising RGB images, depth images, and manually measured phenotypic parameters, for a total of 388 samples. Background segmentation and data normalization were performed on the RGB and depth images, which were then fed into the constructed deep learning multi-source data fusion model for synchronous regression training on the five phenotypic parameters. Experiments showed that the proposed method achieved coefficients of determination above 0.94 and mean absolute percentage errors below 8% for all five phenotypic parameters, whereas the traditional feature extraction + machine learning approach yielded mean absolute percentage errors above 13% for some parameters, demonstrating the high accuracy of the proposed method. Ablation experiments showed that the deep learning model fusing RGB and depth images outperformed models using only single-source images, especially for the estimation of plant height, diameter, and leaf area. Estimation results across cultivars and growth stages showed that the model is applicable to lettuce cultivars of different colors and shapes, and is also reasonably adaptable to lettuce at different growth stages and of different plant sizes. Therefore, the proposed deep learning multi-source data fusion method for estimating lettuce phenotypic parameters performs excellently and has important application value for growth monitoring and yield prediction of facility-grown vegetables.
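The preprocessing step described above (background segmentation followed by normalization of the RGB and depth images) can be sketched as below. The abstract does not specify the segmentation method, so a simple green-dominance threshold mask is assumed here purely for illustration:

```python
import numpy as np

def preprocess_rgbd(rgb, depth):
    """Segment the plant from the background and normalize both modalities.

    rgb   : (H, W, 3) uint8 image
    depth : (H, W) array of raw depth values

    Returns float32 arrays in [0, 1] with background pixels zeroed.
    The green-dominance rule below is an illustrative assumption,
    not the segmentation method used in the paper.
    """
    rgb = rgb.astype(np.float32)
    # Simple plant mask: green channel dominates both red and blue.
    mask = (rgb[..., 1] > rgb[..., 0]) & (rgb[..., 1] > rgb[..., 2])

    # Scale RGB to [0, 1] and zero out background pixels.
    rgb_norm = (rgb / 255.0) * mask[..., None]

    # Min-max normalize depth over the plant region only.
    depth = depth.astype(np.float32)
    plant_depth = depth[mask]
    d_min, d_max = plant_depth.min(), plant_depth.max()
    depth_norm = np.where(mask, (depth - d_min) / (d_max - d_min + 1e-8), 0.0)
    return rgb_norm, depth_norm
```

Any pixel-level mask (e.g. from a learned segmentation network) can be substituted for the threshold rule; the normalization logic is unchanged.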

       

      Abstract: Non-destructive, high-precision estimation of the external phenotypic parameters of lettuce can greatly contribute to all-weather growth monitoring and process optimization, particularly for the digital, automated, and intelligent cultivation of facility vegetables. However, most machine vision studies on the non-destructive estimation of lettuce phenotypic parameters have used artificially designed, manually extracted image features, making the models vulnerable to environmental interference and limiting their generalization performance. Existing deep learning approaches, meanwhile, have relied only on two-dimensional visible-light (RGB) images, without considering the influence of the three-dimensional morphology of plants on the estimation of phenotypic parameters. Taking substrate-cultured lettuce as the research object, this study proposes a high-precision method for estimating lettuce phenotypic parameters using deep learning to fuse two-dimensional RGB images and depth (Depth) images. The fresh weight, dry weight, plant height, diameter, and leaf area were estimated using a multi-source image dataset covering the whole growth period of four lettuce cultivars. The dataset comprised 388 samples in total, including RGB images, Depth images, and manually measured phenotypic parameters. Background segmentation and data normalization were performed on the RGB and Depth images, which were then input into the constructed multi-source data fusion deep learning model for synchronous regression on the five phenotypic parameters. Cross-validation was carried out for the model training and estimation experiments on the dataset.
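The fusion idea described above, one feature branch per modality concatenated before a shared regression head that predicts all five parameters at once, can be illustrated with a minimal NumPy forward pass. The branch design, feature sizes, and random weights below are placeholder assumptions for structure only, not the paper's actual architecture (which the abstract does not detail):

```python
import numpy as np

rng = np.random.default_rng(0)

def branch_features(img, n_features=8):
    """Stand-in for a CNN branch: a random linear projection of
    channel-wise global-average-pooled statistics (illustrative only)."""
    pooled = img.reshape(-1, img.shape[-1]).mean(axis=0)  # (C,)
    w = rng.standard_normal((img.shape[-1], n_features))
    return pooled @ w                                     # (n_features,)

def fuse_and_regress(rgb, depth):
    """Concatenate per-modality features, then apply one linear head
    that regresses the 5 phenotypic parameters simultaneously."""
    f_rgb = branch_features(rgb)                  # RGB branch
    f_depth = branch_features(depth[..., None])   # Depth branch
    fused = np.concatenate([f_rgb, f_depth])      # feature-level fusion
    head = rng.standard_normal((fused.size, 5))
    # Outputs: fresh weight, dry weight, plant height, diameter, leaf area
    return fused @ head
```

In a real implementation the branches would be convolutional backbones and the head would be trained jointly on all five targets; only the data flow (two branches, concatenation, shared multi-output head) is meant to carry over.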
The results showed that the coefficients of determination (R2) for all five phenotypic parameters were higher than 0.94, and the mean absolute percentage errors (MAPE) were all lower than 8%, indicating high accuracy. An ablation experiment showed that the deep learning model fusing RGB and Depth images outperformed models using only single-source images, especially in the estimation of plant height, diameter, and leaf area. A comparison was also made with traditional feature extraction and machine learning methods: the MAPE of the random forest and support vector regression models exceeded 13% for certain phenotypic parameters, indicating that the proposed model was clearly superior to the traditional ones. Moreover, the proposed model was suitable for lettuce cultivars of diverse colors, shapes, and plant sizes across different growth stages, showing good cross-variety adaptability. Therefore, the estimation of lettuce phenotypic parameters using the deep learning multi-source data fusion model demonstrated excellent performance and has important application value for the rapid monitoring of growth and yield of facility vegetables.
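The two evaluation metrics quoted above, the coefficient of determination (R2) and the mean absolute percentage error (MAPE), follow their standard definitions and can be computed as:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
```

In the paper's setting these would be computed once per phenotypic parameter (fresh weight, dry weight, plant height, diameter, leaf area) over the held-out cross-validation predictions.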

       
