Real-time detection of dead chickens in cages using improved YOLOv8n

    • Abstract: To enable inspection robots to rapidly and accurately detect dead chickens in layer houses under complex environments, this study proposes a real-time detection method for dead chickens in cages based on an improved YOLOv8n. Taking YOLOv8n as the baseline, the method first replaces the C2f structure with a cross stage partial HetConv (CSPHet) module, which concentrates the gradient changes across the various levels of the feature maps and substitutes smaller 1×1 convolution kernels for some of the 3×3 kernels, reducing the model's parameters and computation while maintaining detection accuracy. Second, a spatially enhanced attention module (SEAM) is introduced into the neck layer to compensate for the response loss in occluded regions and improve the detection accuracy for occluded chickens. Finally, dynamic upsampling (DySample) is introduced to improve the resolution of upsampled feature maps and strengthen the effective features of the hen regions. Experimental results show that the improved YOLOv8n achieves a mean average precision (mAP) of 95.8% for dead-chicken detection with 2.46 MB of parameters. Compared with the original YOLOv8n, the mAP is 1.5 percentage points higher and the number of parameters is 18.3% lower. To verify the model's stability, applicability, and real-time performance, the improved model was successfully deployed on the edge device of an inspection robot, achieving accurate detection and recognition of dead chickens among caged laying hens. This study provides technical support for the rapid and accurate detection of dead chickens in layer houses by edge-device-based inspection robots.

       

      Abstract: In large-scale farms, some laying hens inevitably die each day, and manual inspection for dead chickens cannot fully meet the demands of large-scale production under harsh conditions. Consequently, inspection robots have become a highly efficient trend in modern agriculture. In this study, an inspection robot was deployed in a laying-hen house to swiftly and precisely detect dead chickens under complex environments, and a detection method for dead chickens among caged laying hens was introduced using an improved YOLOv8n model. The research objects were four-tier caged laying hens on a large-scale laying-hen farm in Zhejiang Province, China. To improve recognition and avoid misjudging lying chickens as dead chickens, a "lie" label was added during annotation, so the dataset was defined with a total of three labels: "normal", "lie", and "dead". After labeling, a dataset of 4000 images was constructed; dead chickens were present in 2000 images and lying chickens in 1000 images. The dataset was randomly divided into a training set, a validation set, and a test set at a ratio of 6:2:2. First, a cross stage partial HetConv (CSPHet) module was employed to replace the C2f structure in the YOLOv8n network, so that the gradient changes were focused across the various levels of the feature maps. In addition, smaller 1×1 convolution kernels were used to substitute for some 3×3 convolution kernels, effectively reducing the model's parameters and computational burden while maintaining detection precision. Second, because of the acute occlusion problem in high-density caged-chicken farming, occlusion among laying hens caused a loss of information.
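The parameter saving from mixing 1×1 and 3×3 kernels can be illustrated with a back-of-envelope weight count. This is a sketch of the HetConv idea only; the channel sizes and the part ratio `p` below are illustrative assumptions, not values from the paper:

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard k x k convolution layer (bias ignored)."""
    return c_out * c_in * k * k

def hetconv_params(c_in: int, c_out: int, p: int) -> int:
    """HetConv-style filter: 1/p of the input channels are convolved
    with 3x3 kernels, the rest with 1x1 kernels (bias ignored)."""
    k3 = c_in // p                      # channels that keep 3x3 kernels
    per_filter = k3 * 9 + (c_in - k3)   # 9 weights per 3x3 kernel, 1 per 1x1
    return c_out * per_filter

std = conv_params(64, 64, 3)        # standard 3x3 layer: 36864 weights
het = hetconv_params(64, 64, 4)     # mixed-kernel layer: 12288 weights
saving = 1 - het / std              # fraction of weights removed in this layer
```

With these toy numbers the mixed-kernel layer carries two-thirds fewer weights, which is the mechanism behind the overall 18.3% parameter reduction reported for the full network.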
Therefore, the spatially enhanced attention module (SEAM) was integrated into the neck layer to compensate for the response loss in occluded regions and enhance the detection precision of occluded targets. Finally, dynamic upsampling (DySample) was introduced to improve the resolution of upsampled feature maps and emphasize the effective features of the hen regions. A series of experiments was carried out to compare the effects of different input image sizes on training. The results showed that with an input size of 800 × 800 pixels, the precision, recall, and mAP were the highest, at 96.5%, 98.0%, and 94.4%, respectively. With an input size of 640 × 640 pixels, the recall was likewise 98.0%, while the precision and mAP were 1.3 and 0.1 percentage points lower than at 800 × 800 pixels; however, the required GPU memory was reduced by 24.4%. Considering that the edge computing device on the robot had to process data from four cameras simultaneously, 640 × 640 pixels was selected as the more appropriate input size. After introducing the SEAM mechanism, the box precision (box(P)) improved by 0.6 percentage points, confirming the ameliorative effect of SEAM on occlusion. The mAP of the improved YOLOv8n algorithm for dead-chicken detection was 95.8%, with a precision of 96.6%, 2.46 MB of parameters, and an inference time of 25.8 ms. In addition, the improved model was compared with YOLO-series models, namely YOLOv5n, YOLOv6n, and the original YOLOv8n. Relative to these models, the mAP of the improved model increased by 2.5, 2.8, and 1.5 percentage points; the precision increased by 1.7, 1.6, and 1.4 percentage points; the parameters were reduced by 1.6%, 41.8%, and 18.3%; and the inference time decreased by 11.9%, 36.8%, and 15.7%, respectively, showing that the improved YOLOv8n model has strong advantages in comprehensive performance.
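SEAM's compensation for occluded regions can be sketched at its final step: attention logits are passed through an exponential, so every channel weight stays strictly positive and suppressed (occluded) responses are amplified rather than masked out. The values below are toy inputs, and this is a minimal illustration of the exponential-reweighting idea, not the module's actual implementation:

```python
import math

def seam_reweight(features, logits):
    """Scale each channel response by exp(logit).

    The exponential keeps every weight > 0, so responses weakened by
    occlusion are compensated (amplified) instead of being zeroed out.
    """
    weights = [math.exp(z) for z in logits]
    return [f * w for f, w in zip(features, weights)]

# A suppressed response (0.5) with a high attention logit is boosted
# past an unoccluded response (1.0) whose logit is zero.
out = seam_reweight([1.0, 0.5], [0.0, math.log(4.0)])
```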
The real-time streaming protocol (RTSP) was configured to acquire real-time video streams for the detection and recognition of dead chickens among caged laying hens. To validate its stability, applicability, and real-time performance, the improved YOLOv8n model was successfully deployed on the edge device of the inspection robot. This work can provide technical support and a reference for detecting dead chickens in facility environments using the edge devices of robots.
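Given the reported 25.8 ms per-frame inference time and the four RTSP camera streams multiplexed on one edge device, a back-of-envelope check suggests the per-camera frame rate. This assumes purely sequential inference across the streams, which is an assumption for illustration, not a measurement from the paper:

```python
def per_camera_fps(inference_ms: float, n_cameras: int) -> float:
    """Frames per second available to each camera when one edge device
    runs inference sequentially over n_cameras video streams."""
    total_fps = 1000.0 / inference_ms   # frames the device processes per second
    return total_fps / n_cameras

fps = per_camera_fps(25.8, 4)   # roughly 9.7 frames per second per camera
```

Under this assumption each stream still receives several frames per second, consistent with real-time monitoring of slowly changing scenes such as caged hens.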

       
