Abstract:
Non-destructive, high-precision estimation of the external phenotypic parameters of lettuce can greatly facilitate all-weather growth monitoring and process optimization, particularly for the digital, automated, and intelligent cultivation of facility vegetables. However, most machine vision studies on the non-destructive estimation of lettuce phenotypic parameters have used artificially designed, manually extracted image features, making the resulting models vulnerable to environmental interference and limiting their generalization performance. Existing deep learning approaches, in turn, have relied only on two-dimensional visible light (RGB) images, without considering the influence of the three-dimensional morphology of plants on the estimation of phenotypic parameters. Taking substrate-cultured lettuce as the research object, this study proposes a high-precision estimation of lettuce phenotypic parameters using a deep learning model that fuses two-dimensional RGB images and depth (Depth) images. The fresh weight, dry weight, plant height, diameter, and leaf area were estimated from a multi-source image dataset covering the whole growth period of four lettuce cultivars. The dataset comprised 388 samples in total, each including an RGB image, a Depth image, and manually measured phenotypic parameters. Background segmentation and data normalization were performed on the RGB and Depth images, which were then fed into the constructed multi-source data fusion deep learning model for synchronous regression fitting of the five phenotypic parameters. Cross validation was carried out for model training and estimation experiments on the dataset.
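The background segmentation and normalization step described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the min-max normalization over plant pixels, and the 4-channel (RGB + Depth) stacking are illustrative assumptions about how masked RGB and Depth inputs could be prepared for a fusion network:

```python
import numpy as np

def preprocess_rgbd(rgb, depth, mask):
    """Mask out the background and normalize an RGB image and its Depth map.

    rgb   : (H, W, 3) uint8 array
    depth : (H, W) float array of raw sensor distances (e.g., mm)
    mask  : (H, W) bool array, True on plant pixels (from background segmentation)
    Returns an (H, W, 4) float32 array with all channels scaled to [0, 1].
    """
    rgb_f = rgb.astype(np.float32) / 255.0          # scale RGB to [0, 1]
    rgb_f[~mask] = 0.0                              # zero out background pixels

    depth_f = depth.astype(np.float32)
    plant = depth_f[mask]
    if plant.size > 0 and plant.max() > plant.min():
        # min-max normalize depth using plant pixels only (assumption)
        depth_f = (depth_f - plant.min()) / (plant.max() - plant.min())
    depth_f[~mask] = 0.0
    depth_f = np.clip(depth_f, 0.0, 1.0)

    # stack into a single 4-channel tensor for the fusion model
    return np.dstack([rgb_f, depth_f])
```

Stacking the normalized Depth map as a fourth channel is one common early-fusion choice; the paper's model architecture may fuse the two sources differently.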
The results showed that the coefficients of determination (R²) for all five phenotypic parameters were higher than 0.94, and the mean absolute percentage errors (MAPE) were all lower than 8%, indicating high accuracy and strong overall performance. An ablation experiment showed that the deep learning model combining RGB and Depth images outperformed models using only single-source images, especially in the estimation of plant height, diameter, and leaf area. A comparison was also made with traditional feature extraction and machine learning methods: the MAPE of random forest and support vector regression models exceeded 13% for certain phenotypic parameters, indicating that the proposed model was significantly superior to the traditional ones. Moreover, the proposed model was suitable for different lettuce cultivars with diverse colors, shapes, and plant sizes across growth stages, demonstrating good cross-variety adaptability. Therefore, the deep learning multi-source data fusion model for estimating lettuce phenotypic parameters demonstrated excellent performance and important application value for the rapid monitoring of growth and yield of facility vegetables.
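The two reported metrics follow standard definitions, which can be sketched as below (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: R² = 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    return 1.0 - ss_res / ss_tot

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent (assumes y_true != 0)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
```

For example, predictions that each deviate by 10% of the true value give MAPE = 10%; how that maps to R² depends on the spread of the true values.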