Real-time detection method of dead chickens in cages based on improved YOLOv8n

    • Abstract: To enable inspection robots in laying-hen houses to detect dead chickens quickly and accurately in complex environments, this study proposes a real-time detection method for dead chickens in caged flocks based on an improved YOLOv8n. Building on YOLOv8n, the method first replaces the original C2f structure with a Cross Stage Partial HetConv (CSPHet) module, concentrating gradient changes across the levels of the feature maps and substituting smaller 1×1 convolution kernels for some 3×3 kernels, so as to reduce the model's parameter count and computation while maintaining detection precision. Second, a Spatially Enhanced Attention Module (SEAM) is introduced into the Neck layer to compensate for the response loss in occluded regions and improve the detection precision for occluded chickens. Finally, Dynamic Upsampling (DySample) is introduced to improve the resolution of upsampled feature maps and strengthen the effective features of laying-hen regions. Experimental results show that the improved YOLOv8n achieves a mean average precision (mAP) of 95.8% for dead-chicken detection with 2.46 MB of parameters; compared with the original YOLOv8n, mAP increases by 1.5 percentage points and the parameter count decreases by 18.3%. To verify its stability, applicability, and real-time performance, the improved model was successfully deployed on the edge device of an inspection robot, achieving accurate detection and recognition of dead chickens among caged laying hens. This study provides technical support for dead-chicken detection by edge-device-based inspection robots in facility environments.

       

      Abstract: In large-scale laying-hen farms, a considerable number of hens die every day, and manual inspection for dead chickens under harsh house conditions has many limitations; replacing manual inspection with inspection robots has therefore become a new trend. To enable laying-hen house inspection robots to detect dead chickens swiftly and precisely in complex environments, this study introduced a dead-chicken detection method for caged laying hens based on an enhanced YOLOv8n model. The research subjects were four-tier caged laying hens on a large-scale laying-hen farm in Zhejiang Province. To improve recognition and avoid misjudging lying hens as dead, a "lie" label was added during annotation, so the dataset contained three labels: "normal", "lie", and "dead". Of the 4,000 images in the constructed dataset, 2,000 contained dead chickens and 1,000 contained lying hens. The dataset was randomly divided into a training set, a validation set, and a test set at a ratio of 6:2:2. Based on YOLOv8n, the method first employed Cross Stage Partial HetConv (CSPHet) to replace the C2f structure within the network, thereby concentrating the gradient changes of the feature maps across various levels. Additionally, smaller 1×1 convolution kernels were substituted for some 3×3 convolution kernels, effectively reducing the model's parameters and computational burden while maintaining detection precision. Secondly, because occlusion among hens is severe in high-density caged farming and causes information loss, a Spatially Enhanced Attention Module (SEAM) was integrated into the Neck layer to compensate for the response loss in occluded regions and enhance the detection precision for occluded targets.
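The parameter saving behind CSPHet (replacing part of the 3×3 kernels with 1×1 kernels) can be illustrated with a minimal heterogeneous-convolution sketch in PyTorch. This is an illustrative assumption of the idea, not the paper's exact module: the class name, the channel split, and the ratio p=4 are choices made here for demonstration.

```python
import torch
import torch.nn as nn

class HetConvSketch(nn.Module):
    """Illustrative heterogeneous convolution: 1/p of the input
    channels are processed with 3x3 kernels, the remainder with
    1x1 kernels, and the two results are summed."""
    def __init__(self, in_ch: int, out_ch: int, p: int = 4):
        super().__init__()
        k3 = in_ch // p                       # channels that keep 3x3 kernels
        self.c3 = nn.Conv2d(k3, out_ch, 3, padding=1, bias=False)
        self.c1 = nn.Conv2d(in_ch - k3, out_ch, 1, bias=False)
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k3 = self.c3.in_channels
        return self.c3(x[:, :k3]) + self.c1(x[:, k3:])

def n_params(m: nn.Module) -> int:
    return sum(t.numel() for t in m.parameters())

full = nn.Conv2d(64, 64, 3, padding=1, bias=False)   # 64*64*9 = 36864 params
het = HetConvSketch(64, 64, p=4)                     # 64*16*9 + 64*48*1 = 12288 params
```

With p=4 the heterogeneous layer needs roughly a third of the parameters of a full 3×3 convolution at the same input/output width, which is the kind of saving that makes the overall parameter reduction possible while part of the channels retain a 3×3 receptive field.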
Lastly, Dynamic Upsampling (DySample) was introduced to improve the resolution of upsampled feature maps and emphasize the effective features of laying-hen regions. Experiments comparing the effect of input size on training showed that with an input of 800 × 800 pixels, precision, recall, and mAP were highest, at 96.5%, 98.0%, and 94.4%, respectively. With a 640 × 640 input, recall remained highest at 98.0%; precision and mAP were 1.3 and 0.1 percentage points lower than at 800 × 800, but the required GPU memory was reduced by 24.3%. Since the robot's edge computing device must process the streams of four cameras simultaneously, a 640 × 640 input size is the more appropriate choice. The original YOLOv8n's bounding-box precision (Box(p)) was 92.6%; after incorporating the SEAM attention mechanism it rose to 93.2%, a 0.6-percentage-point increase, confirming SEAM's ameliorative effect on occlusion. The mAP of the enhanced YOLOv8n algorithm for dead-chicken detection was 95.8%, with a precision of 96.6%, 2.46 MB of parameters, and an inference time of 25.8 ms per image. Compared with the original YOLOv8n, mAP and precision increased by 1.5 and 1.4 percentage points, respectively, while the parameter count and per-image inference time decreased by 18.3% and 15.7%, respectively. In addition, given the good real-time performance of the YOLO family, this study compared the improved YOLOv8n with other YOLO models, including YOLOv5n, YOLOv6n, and the original YOLOv8n; the improved model's mAP was higher by 2.5, 2.8, and 1.5 percentage points, respectively, and its parameter count and detection time were the best of the group. To validate the stability, applicability, and real-time performance of the enhanced YOLOv8n model, it was successfully deployed on the inspection robot's edge device.
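The dynamic-upsampling step can be sketched as below. This is a simplified stand-in for DySample, not its exact implementation: a 1×1 convolution predicts per-pixel sampling offsets, and the enlarged map is produced by sampling the input at learned positions rather than with a fixed bilinear kernel. The offset head, the pixel-shuffle rearrangement, and the zero initialization are assumptions made for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DySampleSketch(nn.Module):
    """Simplified dynamic upsampling: learn where to sample the
    low-resolution map instead of using fixed interpolation."""
    def __init__(self, ch: int, scale: int = 2):
        super().__init__()
        self.scale = scale
        # predicts (x, y) offsets for each position of the upsampled grid
        self.offset = nn.Conv2d(ch, 2 * scale * scale, 1)
        nn.init.zeros_(self.offset.weight)   # start as plain bilinear sampling
        nn.init.zeros_(self.offset.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        s = self.scale
        # rearrange predicted offsets onto the (h*s, w*s) output grid
        off = F.pixel_shuffle(self.offset(x), s)          # (b, 2, h*s, w*s)
        # base sampling grid in normalized [-1, 1] coordinates
        ys = torch.linspace(-1, 1, h * s)
        xs = torch.linspace(-1, 1, w * s)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        grid = torch.stack((gx, gy), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        grid = grid + off.permute(0, 2, 3, 1)             # shift by learned offsets
        return F.grid_sample(x, grid, align_corners=True)
```

With zero-initialized offsets the module reduces to ordinary bilinear upsampling, and training then learns content-dependent shifts, which is the core idea of sampling-based upsampling.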
By configuring the Real-Time Streaming Protocol (RTSP), real-time video streams were acquired, enabling real-time detection and recognition of dead chickens among caged laying hens. This research provides technical support for robot-based dead-chicken detection on edge devices in facility environments and offers a valuable reference for advancing the intelligent management of caged layer farming.
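The edge-side RTSP loop can be sketched as below. The function name and the injected capture object are illustrative assumptions; on the robot, a `cv2.VideoCapture` opened on the camera's RTSP URL would play the role of `cap`, and `detect` would be the improved-YOLOv8n inference call.

```python
# Hedged sketch of the edge-device loop: read frames from an
# RTSP-backed capture object and run the detector on each one.
# The capture is injected so the loop itself is testable offline.

def detect_stream(detect, cap, max_frames=None):
    """Run `detect` on each frame read from `cap`, a
    cv2.VideoCapture-like object with a read() -> (ok, frame)
    method. Returns the number of frames processed."""
    n = 0
    while max_frames is None or n < max_frames:
        ok, frame = cap.read()
        if not ok:            # end of stream or dropped connection
            break
        detect(frame)         # e.g. improved-YOLOv8n inference on this frame
        n += 1
    return n

# Typical use on the robot (URL is a hypothetical placeholder):
#   import cv2
#   cap = cv2.VideoCapture("rtsp://<camera-ip>:554/stream")
#   detect_stream(model, cap)
```

Injecting the capture object also makes it straightforward to multiplex the four camera streams mentioned above, one capture per camera, sharing a single detector.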

       
