Navigation center line extraction method for fishing and light complementary pond operation boat based on improved YOLOv8n
Abstract
Due to the obstruction caused by the photovoltaic panel arrays above fishing and light complementary ponds, BeiDou/GPS positioning signals are significantly attenuated, substantially reducing the autonomous navigation accuracy of unmanned workboats. In addition, traditional machine vision algorithms have low robustness: images are easily affected by changes in light and shadow, the distribution of aquatic plants, and surface obstacles, so visual navigation line detection performs poorly. To address these problems, this study proposed an improved YOLOv8n model to extract the navigation center line in fishing and light complementary ponds. First, to improve real-time detection, the HGNetV2 network was adopted as the backbone, and a lightweight detection head was designed using batch normalization (BN) and a shared convolution structure to reduce model size. Then, the SPPF_LSKA module was used in the feature fusion layer to improve the model's multi-scale feature fusion capability. Finally, the Wise-IoU (WIoU) loss function was used to improve bounding box regression performance and the detection accuracy of distant small targets. The coordinates of the improved YOLOv8n detection boxes were used to extract reference points for locating the cement columns on both sides of the channel; the column lines on each side were fitted by the least squares method, and the navigation center line was extracted as the angle bisector of the two fitted lines. Ablation test results showed that, compared with the original YOLOv8n model, the computation, parameter count, and model volume of the improved YOLOv8n model decreased by 36.0%, 36.8%, and 32.8%, respectively, and the mean average precision (mAP) reached 97.9%.
The precision was 93.1%, the single-image detection time was 6.8 ms, and the detection speed increased by 42.9%. In comparison tests against YOLOv5s, YOLOv6, YOLOv7, and YOLOv8, the improved YOLOv8n model had the smallest size and the highest degree of lightweighting while maintaining high detection accuracy, performing excellently in detecting cement columns. In the navigation center line positioning test, the average linear error between the extracted cement column reference points and the manually observed marks was 3.69 cm within 0-5 m and 4.57 cm within 5-10 m, and the average linear error between the extracted navigation center line and the actual navigation center line was 3.26 cm, with an accuracy of 92%. In the real-time test, the improved YOLOv8n extracted the navigation center line at an average of 117.01 f/s on a Windows 11 test platform, 38.04% faster than the original model, and at an average of 22.34 f/s on a Jetson test platform, an increase of 36.38%. These results meet the navigation requirements of the unmanned fishing and light complementary pond boat and provide a reference for subsequent research on the visual navigation system of the operation boat.
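The geometric step described above, fitting the cement column lines on each side by least squares and taking their angle bisector as the navigation center line, can be sketched as follows. This is a minimal illustration with NumPy, not the paper's implementation: the function names and the x = a·y + b parameterization of the edge lines are assumptions (fitting x as a function of y is chosen here because channel edges run roughly along the image's vertical axis, avoiding the near-vertical degenerate case of y = f(x) fits).

```python
import numpy as np

def fit_line(points):
    """Least-squares fit x = a*y + b through column reference points.

    `points` is an iterable of (x, y) image coordinates, e.g. the
    bottom-center points of the detection boxes on one side.
    Returns the slope a and intercept b.
    """
    pts = np.asarray(points, dtype=float)
    a, b = np.polyfit(pts[:, 1], pts[:, 0], 1)  # x ~ a*y + b
    return a, b

def center_line(left_pts, right_pts):
    """Navigation center line as the angle bisector of the two fitted edges.

    Fits the left and right column lines, intersects them (their vanishing
    point in the image), and bisects the angle between them by summing
    their unit direction vectors. Assumes the two lines are not parallel.
    Returns a point on the center line and its unit direction vector.
    """
    aL, bL = fit_line(left_pts)
    aR, bR = fit_line(right_pts)
    # Intersection of x = aL*y + bL and x = aR*y + bR
    y0 = (bR - bL) / (aL - aR)
    x0 = aL * y0 + bL
    # Unit direction vectors of each edge, both oriented toward +y
    dL = np.array([aL, 1.0]) / np.hypot(aL, 1.0)
    dR = np.array([aR, 1.0]) / np.hypot(aR, 1.0)
    d = dL + dR                 # bisector direction
    d /= np.linalg.norm(d)
    return (x0, y0), d
```

For a symmetric channel the bisector comes out vertical in the image, which is the intuition behind using it as the steering reference: the boat is on course when the detected center line is upright and centered.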