Detection method for group-housed pigs based on a multi-scale fusion attention mechanism

    Detecting herd pigs using multi-scale fusion attention mechanism

    • Abstract: Detection of group-housed pigs is a key step in the intelligent management of modern pig farms. To address the problem that small or occluded individual pigs are easily missed during group counting, this study proposes a detection method for group-housed pigs based on a multi-scale fusion attention mechanism. First, a pig detection network, YOLOpig, was built on the YOLOv7 model; it adds a small-target detection scale fused with an attention mechanism and optimizes the max-pooling convolution module with a residual design, enabling accurate detection of occluded and small-target pigs. Second, the GradCAM algorithm was used to visualize the features behind the detections and verify the effectiveness of feature extraction in the group-pig detection experiments. Finally, the StrongSORT tracking algorithm was used to track individual pigs accurately and provide identity information for the detection task. Landrace pigs in the fattening stage were used as the test subjects, and video datasets collected from different viewpoints were used to verify the accuracy and real-time performance of the YOLOpig network combined with StrongSORT. The experimental results show that the precision, recall, and mean average precision of the proposed YOLOpig model are 90.4%, 85.5%, and 92.4%, respectively. Compared with the baseline YOLOv7 model, the mean average precision is 5.1 percentage points higher and the detection speed is 7.14% faster, and the mean average precision exceeds that of YOLOv5, YOLOv7-tiny, and YOLOv8n by 12.1, 16.8, and 5.7 percentage points, respectively. The proposed model can detect group-housed pigs effectively and meets the management needs of pig farms.
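    The abstract describes using the GradCAM algorithm to visualize which image regions drive the pig detections. Below is a minimal sketch of that kind of visualization, assuming a PyTorch model; the function name grad_cam, the target_layer argument, and the score_fn callback (which reduces the detector output to a single scalar detection score) are illustrative assumptions, not the authors' implementation.

```python
# Minimal Grad-CAM sketch (an assumed implementation, not the paper's code):
# weight a convolutional layer's activations by its spatially averaged gradients
# with respect to one detection score, producing a heat map of supporting regions.
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, score_fn):
    """image: (1, 3, H, W) tensor; target_layer: an nn.Module inside `model`;
    score_fn: reduces the model output to one scalar score to explain (hypothetical)."""
    activations, gradients = [], []
    h_fwd = target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h_bwd = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    model.zero_grad()
    score = score_fn(model(image))                  # e.g. confidence of one predicted pig box
    score.backward()
    h_fwd.remove(); h_bwd.remove()

    acts = activations[0].detach()                  # (1, C, h, w) feature maps
    grads = gradients[0].detach()                   # (1, C, h, w) gradients of the score
    weights = grads.mean(dim=(2, 3), keepdim=True)  # global-average-pooled gradients per channel
    cam = F.relu((weights * acts).sum(dim=1))       # (1, h, w), keep only positive evidence
    cam = F.interpolate(cam[None], size=image.shape[-2:],
                        mode="bilinear", align_corners=False)[0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```

    For a YOLO-style detector, score_fn would typically pick the objectness or class confidence of a single predicted box, and target_layer a late feature map in the neck; overlaying the returned map on the frame shows whether occluded or small pigs contribute to the detection.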

       

      Abstract: Target detection is widely applied in daily life, including crowd flow management, item counting, and object search, and deep learning has markedly improved its accuracy in recent years. Target detection can therefore be expected to improve the efficiency of intelligent management in the livestock industry, such as on pig farms. Daily population counting is one of the most important routines on modern pig farms and underpins downstream tasks such as target tracking and behavior recognition. However, small or occluded pigs are difficult to detect accurately during counting. In this study, an accurate and rapid pig detection method based on a multi-scale fusion attention mechanism was proposed to handle small targets and occlusion. A detection network, YOLOpig, was constructed on the basis of YOLOv7 to detect individual pigs. A small-target detection scale was added to the network structure to enhance the detection of small targets, and the max-pooling convolution module was improved with a residual design to raise detection accuracy. A parameter-free attention mechanism was also added, so that attention could be introduced without increasing the network weights or the detection time. GradCAM was used for feature visualization to verify the effectiveness of the feature extraction. Finally, the StrongSORT tracking algorithm was adopted to track the identities (IDs) of the individual pigs detected by the improved model, providing the identity information required for pig detection tasks. Experiments were conducted on Landrace pigs in the fattening stage to verify the accuracy and real-time performance of the improved model, including model ablation, model comparison, pig feature extraction, and tracking tests. The results verify the effectiveness and feasibility of the proposed model for detecting pig groups and show its potential to address the difficulties of pig population counting in agricultural breeding. The experimental results show that the precision, recall, and mean average precision of the proposed model were 90.4%, 85.5%, and 92.4%, respectively. Compared with the basic YOLOv7 model, the mean average precision and the detection speed were improved by 5.1 percentage points and 7.14%, respectively. Compared with the YOLOv5, YOLOv7-tiny, and YOLOv8n models, the mean average precision was higher by 12.1, 16.8, and 5.7 percentage points, respectively. The proposed method can complete the pig counting task rapidly and accurately while handling small targets and occlusion effectively. Its application can improve the operational efficiency of pig farms, save labor costs, and provide technical support for intelligent technology in agricultural breeding.
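      The abstract refers to a parameter-free attention mechanism added to the small-target detection structure without naming it. A widely used parameter-free form is SimAM-style energy-based attention, and the sketch below assumes that form; the class name SimAMAttention and the e_lambda smoothing constant are illustrative and not taken from the paper.

```python
# Sketch of a SimAM-style parameter-free attention block (assumed form of the
# unnamed "parameter-free attention mechanism"; names and e_lambda are illustrative).
import torch
import torch.nn as nn

class SimAMAttention(nn.Module):
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from the detection neck or head
        b, c, h, w = x.shape
        n = h * w - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)   # squared deviation of each position
        v = d.sum(dim=(2, 3), keepdim=True) / n             # per-channel spatial variance
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5          # inverse energy: distinctive positions score high
        return x * torch.sigmoid(e_inv)                      # reweight features; no learnable parameters


# Usage: reweighting a neck feature map before the small-target detection head.
attn = SimAMAttention()
y = attn(torch.randn(1, 256, 80, 80))   # output shape matches the input
```

      Because the block has no learnable weights, it is consistent with the abstract's statement that attention was introduced without increasing the network weights or the detection time.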

       
