Grape leaf disease detection and identification based on improved YOLOv8

    • Abstract: To improve the efficiency and accuracy of grape leaf disease detection and overcome the low efficiency and strong subjectivity of traditional manual inspection, this study proposes a grape leaf disease detection method based on an improved YOLOv8. The CBAM attention mechanism is introduced to optimize the feature pyramid structure and strengthen the capture of lesion features at different scales, while the lightweight ShuffleNet V2 network replaces the original CBS modules, markedly reducing the number of parameters and the computational complexity without sacrificing detection performance. Experimental results show that, on a dataset of 5 000 leaf images covering three major diseases (black rot, brown spot, and ring spot), the improved model achieves a mean average precision (mAP@0.5) of 94.6%, 2.70 percentage points higher than YOLOv8, with an inference speed of 52.06 FPS, meeting the requirement for real-time detection in the field. Compared with YOLOv5, Faster R-CNN, and YOLOv8, the improved model shows advantages in both detection accuracy and efficiency, offering support for the intelligent identification of grape leaf diseases and serving as an effective method for grape leaf disease detection.
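To make the attention modification concrete, the following is a minimal PyTorch sketch of a standard CBAM block (channel attention followed by spatial attention). It follows the widely used CBAM design; the class names, reduction ratio, and kernel size are illustrative assumptions, not the exact code used in this study.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel attention: global avg/max pooling -> shared MLP -> sigmoid gate."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)


class SpatialAttention(nn.Module):
    """Spatial attention: channel-wise avg/max maps -> 7x7 conv -> sigmoid gate."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class CBAM(nn.Module):
    """CBAM block: apply channel attention, then spatial attention, to a feature map."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)
```

Applied to a backbone feature map, for example `CBAM(256)(torch.randn(1, 256, 40, 40))`, the block returns a re-weighted feature map of the same shape, so it can be inserted without changing the surrounding network dimensions.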

       

      Abstract: Grape is one of the most popular fresh fruits and a primary ingredient for winemaking, with steadily growing demand and cultivation area. However, grapevines are susceptible to diseases throughout the growth period, and disease directly affects the quality and yield of grape production. Once a grapevine is infected, the physiological and morphological symptoms are often most prominently reflected in the leaves. Computer vision and deep learning can therefore be applied to detect and identify diseases from grape leaves, and rapid detection and identification can reduce the risk of cultivation. This study aims to improve the precision and processing speed of lesion recognition on grape leaves, and proposes a detection method based on an improved YOLOv8 algorithm. YOLOv8 provides network models of different depths and widths that can be selected according to the requirements of object detection, image classification, instance segmentation, and keypoint detection tasks. The neck of YOLOv8 is left unchanged and uses a PAN+FPN structure to strengthen feature fusion, while the head converts the feature maps output by the neck into detections. The head draws on the decoupled heads of YOLOX and YOLOv6: the objectness branch is removed, and bounding box regression and object classification are performed by two branches whose outputs are 4×reg_max and num_class, respectively. Firstly, the grape leaf disease images were preprocessed. Because an uneven distribution of disease samples could lead to overfitting, data augmentation, including image rotation and mirror transformation, was employed to increase sample diversity and thereby improve the generalization and accuracy of disease detection. Secondly, to improve the precision and processing speed of lesion recognition, the CBAM (Convolutional Block Attention Module) attention mechanism was integrated into the backbone to refine the feature maps. CBAM is a concise and efficient attention mechanism for feedforward convolutional networks that integrates channel and spatial information to enhance the representational power of the network. This helps recognize lesions of different scales, since downsampling causes information loss during feature extraction, particularly for lesions of varying sizes. Finally, ShuffleNet V2, a lightweight convolutional neural network, was adopted to replace the conventional CBS convolutional modules, reducing the number of parameters and the computational requirements. ShuffleNet V2 emphasizes a balanced distribution of parameters and computational load across the different parts of the network, which improves resource utilization and reduces the energy consumption of the model. The results show that efficient computation was achieved by reducing memory access, simplifying operations, and optimizing data paths, thereby lowering energy consumption while increasing speed, and computationally expensive operations were avoided in the network structure. Experiments also demonstrated that the improved model maintained high efficiency and accuracy at its detection speed, indicating better detection of grape leaf lesions.
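To illustrate the lightweight replacement described above, the following is a minimal PyTorch sketch of a stride-1 ShuffleNet V2 unit built around channel split, a depthwise-separable branch, and channel shuffle. The layer layout follows the published ShuffleNet V2 design, while the class name and channel handling are illustrative assumptions rather than the study's exact module.

```python
import torch
import torch.nn as nn


def channel_shuffle(x, groups=2):
    """Interleave channels across groups so information mixes between branches."""
    b, c, h, w = x.shape
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)


class ShuffleV2Block(nn.Module):
    """Stride-1 ShuffleNet V2 unit: split channels, transform one half, concat, shuffle.

    Assumes an even number of input channels; output shape equals input shape.
    """
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, 1, bias=False),                         # 1x1 pointwise
            nn.BatchNorm2d(half), nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False),  # 3x3 depthwise
            nn.BatchNorm2d(half),
            nn.Conv2d(half, half, 1, bias=False),                         # 1x1 pointwise
            nn.BatchNorm2d(half), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)                 # channel split: identity half + transformed half
        out = torch.cat([x1, self.branch(x2)], dim=1)
        return channel_shuffle(out, groups=2)
```

The split keeps half of the channels on an identity path, which limits memory access and network fragmentation, while the final shuffle lets information flow between the two halves in the next unit.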
The model was trained on a dataset of images of three types of grape leaf diseases, and the YOLOv5, YOLOv8, and improved YOLOv8 models were compared. The mean average precision of the improved YOLOv8 model reached 94.6%, higher than that of the other models. The improved model can serve as an effective approach to detecting grape leaf diseases.
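As a usage sketch, a customized YOLOv8 model can be trained and validated through the standard Ultralytics Python API. The file names `yolov8n-improved.yaml` and `grape_leaf.yaml` below are hypothetical placeholders for a model configuration with the modified modules registered and for the three-class grape leaf disease dataset, and the hyperparameters are illustrative, not those reported in this study.

```python
from ultralytics import YOLO

# Hypothetical model YAML with the modified modules (e.g. CBAM, ShuffleNet V2 blocks)
# registered, and a hypothetical dataset YAML listing the three disease classes.
model = YOLO("yolov8n-improved.yaml")

# Train on the grape leaf disease dataset (paths and hyperparameters are illustrative).
model.train(data="grape_leaf.yaml", epochs=100, imgsz=640, batch=16)

# Validate: the returned metrics include precision, recall, mAP@0.5, and mAP@0.5:0.95.
metrics = model.val(data="grape_leaf.yaml")
print(metrics.box.map50)  # mAP@0.5 on the validation split
```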

       
