Estimation of per-cage feed intake of chickens in stacked-cage-raising systems based on improved YOLOv8n

    • Abstract: To estimate the per-cage feed intake of chickens in a stacked-cage-raising system, this study took caged meat-type breeder chickens as the research object and proposed YOLO-FSG, a lightweight improvement of the YOLOv8n model, to recognize chicken feeding behavior. On this basis, a multivariate linear regression model was established to estimate feed intake, with feeding duration and feeding count as independent variables and feed intake as the dependent variable. YOLO-FSG achieves its lightweight design through three modifications: 1) in the backbone network, the efficient multi-scale attention (EMA) module, Faster-NetBlock, and the C2F (CSPDarknet53 to 2-stage feature pyramid network) module were combined into a C2F-FEblock to strengthen feature extraction while reducing model complexity; 2) in the neck network, the Slim-Neck design was adopted, replacing the convolution and C2F modules with GSConv (group shuffle convolution) and VovGSCSP (GSConv spatial cross stage partial) modules, respectively, to optimize feature fusion and processing; 3) in the detection head, the four convolution modules were replaced with two parameter-sharing grouped convolution modules to reduce computation. The results show that, for feeding-behavior recognition, YOLO-FSG achieved a mean average precision (mAP0.5) of 97.1%, with 1.94 M parameters, 4 G floating-point operations, and a detection time of 3.6 ms, outperforming common object detection models such as YOLOv5n, YOLOv7n, and YOLOv8n. The feed intake estimation model achieved a coefficient of determination of 0.9 and a root mean square error of 5.91 g on the test set, indicating high estimation accuracy. The proposed method can provide technical support for the fine-grained management of chickens in stacked-cage-raising systems and promote the intelligent and sustainable development of the poultry industry.
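
    The regression coefficients are not reported in the abstract, so the following is a minimal sketch of how such an estimation model can be fitted and evaluated, assuming per-cage samples of feeding duration (s), feeding count, and manually measured intake (g); the variable names and the synthetic numbers are placeholders for illustration only.

        import numpy as np

        # Multivariate linear regression of the form used in the study:
        # intake ≈ b0 + b1 * feeding_duration + b2 * feeding_count.
        # The arrays below are synthetic placeholders; the study used 185 real
        # per-cage samples split 8:2 into training and test sets.
        rng = np.random.default_rng(0)
        n = 185
        duration = rng.uniform(200.0, 1800.0, n)              # feeding duration per cage, s (hypothetical range)
        count = rng.integers(5, 60, n).astype(float)          # feeding bouts per cage (hypothetical range)
        intake = 0.05 * duration + 0.8 * count + rng.normal(0.0, 5.0, n)  # placeholder intake, g

        split = int(0.8 * n)                                  # 8:2 train/test split
        X = np.column_stack([np.ones(n), duration, count])
        beta, *_ = np.linalg.lstsq(X[:split], intake[:split], rcond=None)  # least-squares fit

        pred = X[split:] @ beta
        resid = intake[split:] - pred
        rmse = float(np.sqrt(np.mean(resid ** 2)))
        r2 = 1.0 - float(np.sum(resid ** 2) / np.sum((intake[split:] - np.mean(intake[split:])) ** 2))
        print(f"b0, b1, b2 = {beta}, test R2 = {r2:.2f}, RMSE = {rmse:.2f} g")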

       

      Abstract: This study aims to estimate the per-cage feed intake of chickens in stacked-cage-raising systems. Caged meat-type breeder chickens were selected as the research object. A lightweight improved version of the YOLOv8n model, named YOLO-FSG, was developed to recognize chicken feeding behavior, and a multivariate linear regression model was established to estimate feed intake with feeding duration and feeding frequency as independent variables. A dataset of 2043 images was constructed, and data augmentation with random rotation, flipping, brightness alteration, and Gaussian blurring was applied to the training and validation sets. The expanded training and validation sets contained 3268 and 408 images, respectively, and the test set contained 204 images. The YOLO-FSG model achieved its lightweight design through three key modifications: 1) in the backbone network, the efficient multi-scale attention (EMA) module, Faster-NetBlock, and the C2F (CSPDarknet53 to two-stage feature pyramid network) module were combined into the C2F-FEblock, enhancing feature extraction while reducing model complexity; 2) in the neck network, a Slim-Neck design was adopted, replacing the convolutional layers and C2F modules with GSConv (group shuffle convolution) and VovGSCSP (GSConv spatial cross stage partial) modules to optimize feature fusion and processing; 3) in the detection head, the four convolutional modules were replaced with two parameter-sharing grouped convolutional modules to reduce the computational load. The YOLO-FSG model achieved a mean average precision of 97.1% at an IoU threshold of 0.5 (mAP0.5) and 47.4% across IoU thresholds of 0.5-0.95 (mAP0.5-0.95), with 1.94 M parameters, 4.0 G floating-point operations (FLOPs), and a detection time of 3.6 ms. Compared with YOLOv8n, the improved model increased mAP0.5-0.95 by 0.2 percentage points while maintaining the same mAP0.5, reduced the number of parameters and FLOPs by 35.3% and 50.6%, respectively, and shortened the detection time by 0.2 ms, making the model easier to deploy. The improved model also outperformed common object detection models such as YOLOv5n, YOLOv7n, and YOLOv9t. To calculate feeding duration and feeding frequency, hourly feeding video recordings were converted into sequential image frames, and each frame was analyzed with the YOLO-FSG model to obtain the bounding boxes, coordinates, and cage classification of the detected objects. A predefined threshold line delineated the feeding area adjacent to the troughs; once the centroid of a chicken's head fell within this feeding area, the chicken was classified as actively feeding. The duration of each feeding episode was determined by counting the number of successive frames with feeding activity, and the feeding frequency was incremented whenever the pause between two feeding episodes exceeded 3 s. In this way, the feeding frequency and feeding duration of each cage position within a one-hour period were obtained. Correspondingly, the actual feed intake of the chickens in different cages over the same hour was measured manually. A total of 185 samples were obtained after removing unusable data, and the training and test sets were divided at an 8:2 ratio. The feed intake estimation model achieved a coefficient of determination (R2) of 0.9 and a root mean square error (RMSE) of 5.91 g on the test set. In the future, automatic feed-weighing equipment should be used to develop more accurate feed intake estimation models. These findings can provide technical support for the fine-grained management of chickens in stacked-cage-raising systems, thereby promoting the intelligent development of poultry farming.
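
    The feeding duration and feeding frequency statistics described above are straightforward to accumulate from per-frame detections. The sketch below shows the bookkeeping for a single cage, assuming the detected head boxes have already been grouped by cage; the frame rate and the pixel position of the feeding-area threshold line are hypothetical values, while the head-centroid criterion and the 3 s pause rule come from the text.

        from dataclasses import dataclass
        from typing import List, Optional, Tuple

        FPS = 25.0               # assumed video frame rate
        PAUSE_GAP_S = 3.0        # a pause longer than 3 s starts a new feeding bout (from the text)
        FEED_LINE_Y = 400        # hypothetical pixel threshold marking the trough-side feeding area

        @dataclass
        class CageStats:
            feeding_frames: int = 0                  # frames in which at least one bird is feeding
            feeding_bouts: int = 0                   # feeding frequency
            last_feed_frame: Optional[int] = None    # index of the most recent feeding frame

        def update(stats: CageStats, frame_idx: int,
                   head_boxes: List[Tuple[float, float, float, float]]) -> None:
            """Update one cage's counters with the head boxes (x1, y1, x2, y2) of one frame."""
            # A bird counts as feeding when its head centroid lies inside the feeding area,
            # here assumed to be the image region below the threshold line next to the trough.
            feeding = any((y1 + y2) / 2.0 > FEED_LINE_Y for _, y1, _, y2 in head_boxes)
            if not feeding:
                return
            stats.feeding_frames += 1
            if stats.last_feed_frame is None or (frame_idx - stats.last_feed_frame) / FPS > PAUSE_GAP_S:
                stats.feeding_bouts += 1             # gap longer than 3 s: a new feeding episode
            stats.last_feed_frame = frame_idx

        # After one hour of video: feeding duration = stats.feeding_frames / FPS (s) and
        # feeding frequency = stats.feeding_bouts, the two inputs of the regression model.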
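
    For reference, the GSConv module used in the Slim-Neck modification can be sketched in PyTorch roughly as follows, based on its publicly described design (a standard convolution producing half of the output channels, a cheap depth-wise convolution on that half, concatenation, and a channel shuffle). This is an illustrative reconstruction rather than the authors' exact implementation, and the tensor shape in the usage line is arbitrary.

        import torch
        import torch.nn as nn

        class GSConv(nn.Module):
            """Sketch of GSConv: dense conv to half the channels, depth-wise conv, concat, channel shuffle."""
            def __init__(self, c_in: int, c_out: int, k: int = 1, s: int = 1):
                super().__init__()
                c_half = c_out // 2
                self.dense = nn.Sequential(                       # standard convolution branch
                    nn.Conv2d(c_in, c_half, k, s, k // 2, bias=False),
                    nn.BatchNorm2d(c_half), nn.SiLU())
                self.cheap = nn.Sequential(                       # depth-wise convolution branch
                    nn.Conv2d(c_half, c_half, 5, 1, 2, groups=c_half, bias=False),
                    nn.BatchNorm2d(c_half), nn.SiLU())

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                x1 = self.dense(x)
                y = torch.cat((x1, self.cheap(x1)), dim=1)        # (B, c_out, H, W)
                b, c, h, w = y.shape                              # channel shuffle mixes the two branches
                return y.view(b, 2, c // 2, h, w).transpose(1, 2).reshape(b, c, h, w)

        x = torch.randn(1, 64, 80, 80)
        print(GSConv(64, 128, k=3, s=2)(x).shape)                 # torch.Size([1, 128, 40, 40])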

       
