Lightweight Phalaenopsis seedling target detection based on YOLOv8n-Aerolite

    • Abstract: The detection of small plant tissue is important for the development of automated plant cultivation. To improve the efficiency of visual detection of Phalaenopsis seedling gripping points and to address the redundancy of existing models, a lightweight object detection algorithm, YOLOv8n-Aerolite, was proposed. First, StarNet was adopted as the backbone network and extended with SPPF_LSKA (large-separable-kernel-attention), a pooling layer that embeds large separable-kernel attention, achieving a lightweight design while preserving accuracy. Then, the C2f_Star module, which incorporates StarBlock, was used in the neck network to improve detection accuracy on Phalaenopsis seedlings. Finally, a lightweight detection head based on shared convolutions, Detect_LSCD (lightweight shared convolutional detection head), was adopted to improve detection precision and speed on small targets. In object detection on a Phalaenopsis seedling image dataset, YOLOv8n-Aerolite reached an average inference speed of 435.8 frames/s and a precision of 91.1%, with a weight file of only 3.1 MB; detection precision for the small targets containing the gripping points reached 91.6%, and the success rate in seedling gripping was 78%. The results can serve as a reference for developing automated cultivation technology for small crops.
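
    To make the structural description above concrete, the sketch below illustrates, in PyTorch, the element-wise "star" operation that a StarNet-style StarBlock is built on and that a C2f_Star neck module would reuse. It is a minimal sketch for exposition only; the layer widths, kernel sizes, and names are illustrative assumptions, not the paper's released implementation.

    import torch
    import torch.nn as nn

    class StarBlockSketch(nn.Module):
        # Minimal sketch of a StarNet-style block: two pointwise branches are
        # combined by element-wise multiplication (the "star" operation), which
        # maps features into a higher-dimensional implicit space at low cost.
        def __init__(self, dim: int, expand: int = 3):
            super().__init__()
            self.dw1 = nn.Conv2d(dim, dim, 7, padding=3, groups=dim)  # local spatial mixing
            self.f1 = nn.Conv2d(dim, expand * dim, 1)
            self.f2 = nn.Conv2d(dim, expand * dim, 1)
            self.g = nn.Conv2d(expand * dim, dim, 1)
            self.dw2 = nn.Conv2d(dim, dim, 7, padding=3, groups=dim)
            self.act = nn.ReLU6()

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            shortcut = x
            x = self.dw1(x)
            x = self.act(self.f1(x)) * self.f2(x)  # "star": product of the two branches
            x = self.dw2(self.g(x))
            return x + shortcut

    # A C2f_Star-style neck module would, in this spirit, replace the bottleneck
    # blocks inside YOLOv8's C2f with StarBlockSketch instances.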

       

      Abstract: This study aimed to improve the efficiency of visual detection for seedling gripping points in the automated rapid propagation of Phalaenopsis orchids, particularly on edge devices with limited computational resources and storage capacity. To address these challenges, the research introduced a lightweight object detection algorithm, named YOLOv8n-Aerolite, designed to balance high detection accuracy with reduced computational complexity. This balance was intended to make the algorithm particularly suitable for real-time applications on devices with restricted hardware capabilities. To achieve these objectives, the algorithm was developed with StarNet as the backbone network, selected for its efficient feature extraction capabilities. To further optimize the model, an SPPF_LSKA (Large-Separable-Kernel-Attention) layer was incorporated, which allowed for a significant reduction in the model's computational demands while maintaining the precision required for accurate detection. This approach utilized a large-separable-kernel design that enhances the model's ability to process key visual features with minimal resource usage, an advantage critical for edge devices. Additionally, a new C2f_Star module, which combines the C2f structure with StarBlock, was implemented in the network's neck to enhance feature fusion. This enhancement was crucial for improving the model's ability to detect fine details, such as the small and intricate seedling gripping points. The integration of C2f_Star also introduced multi-scale feature processing, which proved valuable for distinguishing gripping points in dense scenes where seedlings are closely spaced. The detection head was also redesigned around a lightweight shared convolutional structure, referred to as Detect_LSCD (Lightweight Shared Convolutional Detection Head), which resulted in a notable increase in detection speed and a reduction in the overall size of the model. These optimizations were specifically geared towards ensuring that the algorithm could perform efficiently in resource-limited environments. The proposed YOLOv8n-Aerolite algorithm was tested on the Phalaenopsis seedling image dataset. Experimental results showed that the model achieved an average inference speed of 435.8 frames per second, a performance highly suitable for real-time applications. This speed marks a significant improvement over existing methods and makes the model one of the fastest options for edge-based seedling detection. The model's overall detection precision reached 91.1%, and precision for the small black-tuber targets that serve as the seedlings' gripping points reached 91.6%, validating its reliability in practical deployment. Such accuracy on small targets underscores the algorithm's suitability for tasks where precise localization of small objects is essential. In addition, the model's weight file was compressed to just 3.1 MB, making it well suited to deployment on edge devices where storage capacity is constrained. To further validate the algorithm's effectiveness in real-world scenarios, practical gripping experiments were conducted, yielding a success rate of 78%. To assess the model's generalizability, YOLOv8n-Aerolite was also tested on a 3D-reconstructed Phalaenopsis seedling dataset, which likewise focuses on small target detection.
The results demonstrated a 1.6% higher mAP0.5 score when compared to the original YOLOv8n, indicating that the modifications successfully improved the model's performance across different datasets. This cross-dataset testing confirms that the enhancements provide robustness and adaptability, allowing the model to excel in a variety of detection tasks. In conclusion, the YOLOv8n-Aerolite algorithm significantly advances the field of automated crop propagation by providing a highly efficient, accurate, and lightweight solution for visual detection tasks. The research serves as a valuable reference for developing scalable and automated propagation technologies, especially for small-scale crops like Phalaenopsis orchids. YOLOv8n-Aerolite meets the needs of edge computing environments, making it a practical solution for broader agricultural automation applications.
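
    As a companion illustration of the parameter-sharing idea behind the Detect_LSCD head described above, the following PyTorch sketch reuses a single convolution stack across all pyramid levels and compensates for scale differences with a per-level learnable factor. Channel widths, layer counts, and names are assumptions made for illustration; the paper's actual head may differ in detail.

    import torch
    import torch.nn as nn

    class SharedConvHeadSketch(nn.Module):
        # One conv stack is shared by the P3/P4/P5 feature maps, so its weights
        # are stored and learned only once; a per-level scale adjusts box outputs.
        def __init__(self, num_classes: int, channels: int = 128, num_levels: int = 3):
            super().__init__()
            self.shared = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1, groups=channels),  # depthwise
                nn.Conv2d(channels, channels, 1),                              # pointwise
                nn.SiLU(),
            )
            self.box_pred = nn.Conv2d(channels, 4, 1)            # box regression
            self.cls_pred = nn.Conv2d(channels, num_classes, 1)  # classification
            self.scales = nn.Parameter(torch.ones(num_levels))   # per-level rescaling

        def forward(self, feats):
            # feats: list of feature maps, all already projected to `channels` channels
            outputs = []
            for i, f in enumerate(feats):
                f = self.shared(f)
                box = self.box_pred(f) * self.scales[i]
                cls = self.cls_pred(f)
                outputs.append(torch.cat((box, cls), dim=1))
            return outputs

    Sharing one stack across levels is what shrinks the head's parameter count relative to YOLOv8's per-level branches, while the learnable scales keep the regression outputs consistent across feature resolutions.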

       
