Detecting lotus seedpod maturation using improved YOLOv5s
Abstract
Accurate detection of lotus seedpods at varying maturity stages remains a major challenge in dynamic field environments. In this study, an improved YOLOv5s model was proposed to robustly detect and distinguish the maturity levels of lotus seedpods for computer vision and precision agriculture applications. First, a Bottleneck Transformer (BoT) self-attention module was incorporated into the backbone feature network so that global and local features were fused in the feature maps, amplifying the distinction between maturity stages under the ever-changing visual conditions of lotus fields. Second, the Efficient Intersection over Union (EIoU) loss function was adopted to improve the precision of bounding-box regression, enabling the model to delineate the contours and boundaries of lotus seedpods more precisely and yielding a substantial gain in detection accuracy. Third, K-means++ clustering was introduced to recalibrate the initial anchor box sizes and accelerate network convergence, producing a more accurate and efficient learning process and a model better suited to practical applications. The results show that the improved YOLOv5s model achieved outstanding performance on the test dataset, with a precision (P) of 98.95%, a recall (R) of 97.00%, and a mean average precision (mAP) of 98.30%. Notably, the average detection time was 6.4 ms, with a compact model size of 13.4 MB. The improved YOLOv5s model outperformed mainstream detection models, including YOLOv3, YOLOv3-tiny, YOLOv4-tiny, YOLOv5s, and YOLOv7, with mAP improvements of 0.2, 1.8, 1.5, 0.5, and 0.9 percentage points, respectively, indicating better detection of lotus seedpod maturity. Furthermore, a real-time lotus seedpod maturity detection system was established by deploying the improved YOLOv5s model on a Raspberry Pi 4B mobile controller. The improved model was additionally verified over four detection distance ranges and performed best at 0.5 m to 1.0 m, outperforming the YOLOv5 and YOLOv3-tiny models. This YOLOv5s-based approach can be expected to support maturity detection of lotus seedpods for the development of intelligent harvesting equipment, and its applicability can be extended to other crops that defy gravity during growth, enabling sustainable and efficient crop management in precision agriculture and industry practice. In summary, modern computer vision, exemplified by the improved YOLOv5s model, can greatly contribute to agricultural needs: accurate and rapid detection of lotus seedpod maturity was achieved for improved efficiency, sustainability, and productivity in precision agriculture and smart farming, as well as maturity detection in other gravity-defying plants.
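As a concrete illustration of the bounding-box regression objective mentioned above, the following is a minimal PyTorch sketch of an EIoU-style loss. It is written from the published EIoU formulation rather than from the authors' code, and the function and variable names (eiou_loss, pred, target) are illustrative assumptions.

```python
# Minimal sketch of the EIoU loss for boxes given as (x1, y1, x2, y2) tensors.
# EIoU = 1 - IoU + center-distance term + width term + height term.
import torch

def eiou_loss(pred, target, eps=1e-7):
    # Intersection area
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)

    # Union area and IoU
    w1, h1 = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w2, h2 = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union

    # Smallest enclosing box and its squared diagonal
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # Squared distance between box centers
    rho2 = ((pred[:, 0] + pred[:, 2]) - (target[:, 0] + target[:, 2])) ** 2 / 4 + \
           ((pred[:, 1] + pred[:, 3]) - (target[:, 1] + target[:, 3])) ** 2 / 4

    # Width/height difference penalties normalized by the enclosing box
    w_term = (w1 - w2) ** 2 / (cw ** 2 + eps)
    h_term = (h1 - h2) ** 2 / (ch ** 2 + eps)

    return (1 - iou + rho2 / c2 + w_term + h_term).mean()
```

Similarly, the anchor recalibration step can be sketched as clustering the ground-truth box widths and heights with k-means++ initialization. The example below assumes (width, height) pairs collected from the training labels and uses scikit-learn's KMeans for illustration; it is not necessarily the authors' implementation.

```python
# Minimal sketch of recomputing YOLO anchor sizes with k-means++ initialization.
import numpy as np
from sklearn.cluster import KMeans

def compute_anchors(wh, n_anchors=9):
    """Cluster ground-truth box widths/heights into n_anchors anchor sizes."""
    wh = np.asarray(wh, dtype=np.float32)            # shape (N, 2): width, height
    km = KMeans(n_clusters=n_anchors, init="k-means++", n_init=10, random_state=0)
    km.fit(wh)
    anchors = km.cluster_centers_
    # Sort by anchor area so small anchors go to the high-resolution detection head
    return anchors[np.argsort(anchors.prod(axis=1))]

# Example with hypothetical box sizes (pixels)
boxes = [(32, 30), (55, 60), (48, 52), (120, 110), (90, 95), (200, 180)]
print(compute_anchors(boxes, n_anchors=3))
```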