Abstract: 3D point clouds offer a promising way to acquire plant phenotypic traits. Compared with traditional manual contact measurement, they greatly reduce damage to crops, and they avoid the measurement errors that occlusion between leaves and leaf curl cause in two-dimensional images. In this study, a 3D reconstruction system based on the Kinect V3 was proposed to reconstruct 3D models of individual crops efficiently and accurately for nondestructive point-cloud measurement of crop phenotypic traits. A turntable was built using a stepper motor, and its surface was designed as multi-color concentric circles. The plane normal vector and center point of the turntable were computed automatically from the concentric circles, and were used both to align the point cloud horizontally and to calculate the transformation matrix between the multi-view point clouds. The crop point clouds captured by the Kinect V3 from multiple view angles were transformed into the same coordinate system with this transformation matrix for coarse registration, and the Trimmed Iterative Closest Point (TrICP) algorithm was then applied for fine registration, completing the 3D reconstruction of individual crops. Chinese flowering cabbage and cucumber seedlings were selected as the experimental objects for 3D reconstruction. Reference point clouds were first reconstructed by Multi-View Stereo combined with Structure From Motion (MVS-SFM), which has previously been shown to be sufficiently accurate for crop 3D reconstruction and high-throughput phenotyping analysis. The distribution of the distances between the reconstructed and reference point clouds was then counted. The results show that the reconstruction speed of the proposed system was about 90% higher than that of MVS-SFM.
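The coarse-to-fine registration described above can be sketched in a few lines: the known turntable transform supplies the coarse alignment, and a trimmed variant of ICP refines it. The Python sketch below is a minimal illustration with NumPy/SciPy on synthetic data, not the authors' implementation; the helper names `rigid_transform` and `trimmed_icp` are hypothetical. The core TrICP idea it shows is keeping only the closest fraction of nearest-neighbor correspondences when estimating each rigid update, so residual outliers from imperfect coarse alignment are ignored.

```python
# Illustrative sketch only: a minimal Trimmed ICP refinement step, assuming
# the coarse turntable-based alignment has already been applied.
import numpy as np
from scipy.spatial import cKDTree

def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src onto dst."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # repair a reflection if SVD produced one
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def trimmed_icp(src, dst, trim=0.8, iters=30):
    """Refine the alignment of src to dst, keeping only the closest
    `trim` fraction of correspondences at each iteration (TrICP idea)."""
    tree = cKDTree(dst)
    cur = src.copy()
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        d, idx = tree.query(cur)                       # nearest neighbors
        keep = np.argsort(d)[: int(trim * len(cur))]   # trim the worst 20%
        R, t = rigid_transform(cur[keep], dst[idx[keep]])
        cur = cur @ R.T + t
        # Accumulate the total transform: new = R @ old + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

After refinement, the same nearest-neighbor distances can be binned to report the fraction of points closer than a threshold, which is how the 0.5 cm and 1.0 cm overlap percentages above would be computed.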
For Chinese flowering cabbage, the average distance error between the reconstructed and reference point clouds was 0.59 cm, and 64.45% and 90.63% of the distances were less than 0.5 and 1.0 cm, respectively. For cucumber seedlings, the average distance error was 0.67 cm, and 60.09% and 85.45% of the distances were less than 0.5 and 1.0 cm, respectively. Both groups showed a high degree of overlap. The single-leaf area was then extracted by calculating the surface area of a mesh model reconstructed with the Delaunay triangulation algorithm. Compared with manual measurement using a leaf area meter, the Root Mean Square Error (RMSE) and the average relative error of the Chinese flowering cabbage leaf area were 3.83 cm2 and 5.88%, respectively, while those of the cucumber seedling leaf area were 2.08 cm2 and 6.50%. Both groups showed high correlation and accuracy. In addition, the Kinect V3 was compared with its predecessor, the Kinect V2. The results show that the Kinect V3 captures much denser point clouds than the Kinect V2, improving the accuracy of crop 3D reconstruction and leaf area extraction. The proposed system can quickly reconstruct individual crops and effectively extract leaf area parameters, providing an efficient technical tool and data support for crop breeding, cultivation, and agricultural production.
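The leaf-area step can be illustrated in the same spirit: triangulate the segmented leaf points and sum the triangle areas of the resulting mesh. The sketch below is an assumption-laden illustration using SciPy's 2D Delaunay triangulation on a best-fit-plane projection, not the authors' meshing code; the function name `mesh_leaf_area` and the synthetic leaf points are invented for the example.

```python
# Illustrative sketch only: single-leaf area as the summed area of a
# Delaunay surface mesh built over the leaf point cloud.
import numpy as np
from scipy.spatial import Delaunay

def mesh_leaf_area(points):
    """points: (N, 3) leaf point cloud. Project onto the best-fit plane,
    triangulate in 2D, then sum the 3D areas of the triangles."""
    c = points.mean(0)
    # SVD of the centered points: the first two right-singular vectors
    # span the best-fit plane of the (roughly planar) leaf.
    _, _, Vt = np.linalg.svd(points - c)
    uv = (points - c) @ Vt[:2].T          # 2D coordinates on that plane
    tri = Delaunay(uv)
    a = points[tri.simplices[:, 0]]
    b = points[tri.simplices[:, 1]]
    d = points[tri.simplices[:, 2]]
    # Each triangle's area is half the magnitude of the edge cross product.
    return 0.5 * np.linalg.norm(np.cross(b - a, d - a), axis=1).sum()
```

Projecting to a best-fit plane before triangulating is one simple choice for nearly flat leaves; a strongly curled leaf would need a projection-free surface reconstruction, which is beyond this sketch.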