Abstract: Accurate and rapid detection of tea buds is a critical precondition for an intelligent tea-picking system. However, existing methods still lack robustness, universality, and accuracy when segmenting tea bud images under natural light. In recent years, deep learning has greatly improved the accuracy of tea bud detection by replacing manually engineered feature extraction. Even so, current tea bud detection remains highly sensitive to illumination. It is therefore necessary to establish a tea bud detection model with high robustness and low light sensitivity for intelligent tea picking in a natural environment. A novel automatic tea bud detection model combining regional brightness adaptive correction with deep learning was proposed to improve robustness under different light intensities. First, Longjing 43 tea was taken as the research object, and images were collected under various light intensities. Labelling software was then used to annotate the "V" and "I" postures of one bud with one leaf, after which the data were augmented by adding Gaussian noise and horizontal mirroring. Second, the RGB images were converted to grayscale images, and their gray values were averaged. The distribution of the Average Gray (AG) values and the convolution feature maps were then used to classify the grayscale images by illumination intensity into three brightness levels, namely low, medium, and high, corresponding to the intervals [30, 90), [90, 170), and [170, 230], respectively. Gamma brightness adaptive correction was then performed on the high-brightness images using different partition sizes. Finally, multiple deep learning models of tea bud detection were trained on the same training and test sets.
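The AG-based brightness classification described above can be sketched as follows. The interval thresholds come from the text; the grayscale conversion weights (ITU-R BT.601) are an assumption, since the abstract does not specify the conversion formula:

```python
import numpy as np

def average_gray(rgb):
    """Return the Average Gray (AG) value of an RGB image of shape (H, W, 3)."""
    # ITU-R BT.601 luma weights; assumed, not stated in the paper.
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    return float(gray.mean())

def brightness_level(ag):
    """Classify an image into the three brightness levels given in the text."""
    if 30 <= ag < 90:
        return "low"
    if 90 <= ag < 170:
        return "medium"
    if 170 <= ag <= 230:
        return "high"
    return "out of range"
```

Only the high-brightness class is subsequently passed to the gamma correction step; low- and medium-brightness images are used as-is.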
The results showed that YOLOv5+CSPDarknet53 achieved the best tea bud detection performance, with a detection precision of 88.2% and a recall rate of 82.1%, compared with the SSD+VGG16, Faster RCNN+VGG16, YOLOv3+Darknet53, and YOLOv4+CSPDarknet53 models. The detection precision improved by a further 4.8 percentage points after detection suppression was applied to the YOLOv5 model, greatly reducing cases in which the same target was boxed several times. Moreover, the image features of tea buds were highly salient in the brightness intervals [30, 90) and [90, 170), indicating that brightness adaptive correction had little influence on the detection of medium- and low-brightness images. In the high-brightness images, however, the contrast between tea buds and old leaves was relatively low, and the saliency of tea buds weakened as the network layers deepened, so that effective features of tea buds could not be extracted. As a result, the recall rate of bud detection was only 71.1% for the high-brightness images, indicating a relatively high missed-detection rate. After 2×3 block partitioning and regional gamma adaptive brightness correction of the high-brightness images, the recall rate of bud detection improved by 19.2 percentage points. Testing on tea images under various light intensities demonstrated that the YOLOv5 detection model with regional brightness adaptive correction achieved a detection precision of 92.4% and a recall rate of 90.4%, indicating high robustness and low sensitivity to light changes. These findings can provide a theoretical basis for intelligent tea-picking systems under natural light conditions.
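A minimal sketch of the 2×3 block regional gamma correction described above. The partition scheme follows the text, but the per-block gamma schedule is a hypothetical choice (the abstract does not give the exact mapping): brighter blocks are darkened with gamma > 1, darker blocks are brightened with gamma < 1.

```python
import numpy as np

def regional_gamma_correct(gray, rows=2, cols=3):
    """Apply gamma correction independently to each block of a rows x cols partition.

    The gamma for each block is derived from the block's own mean brightness;
    this mapping (mean in [0, 1] -> gamma in roughly [0.5, 1.5]) is an
    illustrative assumption, not the paper's exact schedule.
    """
    out = gray.astype(np.float64) / 255.0
    h, w = gray.shape
    r_edges = np.linspace(0, h, rows + 1, dtype=int)
    c_edges = np.linspace(0, w, cols + 1, dtype=int)
    for i in range(rows):
        for j in range(cols):
            # NumPy slices are views, so this writes back into `out` in place.
            block = out[r_edges[i]:r_edges[i + 1], c_edges[j]:c_edges[j + 1]]
            gamma = 0.5 + block.mean()
            block[...] = block ** gamma
    return (out * 255.0).clip(0, 255).astype(np.uint8)
```

Correcting each block separately lets the model compensate for uneven illumination within a single image, which a single global gamma cannot do.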