Abstract:
Accurate and rapid detection of eggshell damage is one of the most important steps in egg processing. Eggshells are prone to mechanical impact and damage during laying and processing, leading to leakage of egg contents. The leaked contents pose serious risks of bacterial infection and food safety problems for the intact eggs on the processing line, and increase the cleaning workload and production costs. Furthermore, current manual inspection relies mainly on subjective experience and suffers from low speed and low accuracy. Therefore, rapid, accurate, and low-cost detection of broken-shell eggs is of great practical significance in egg processing. In this study, an online real-time detection system for broken-shell eggs was proposed using an improved YOLOv7 (you only look once) model. There were also significant differences in the defects among broken-shell eggs. Specifically, the CIoU (complete IoU) loss function of the YOLOv7 network was replaced with WIoU v2 (wise IoU). Coordinate attention (CA) modules were combined with deformable convolution DCNv2 (deformable ConvNets v2) in the backbone network. At the same time, the detection head (IDetect) module of the YOLOv7 network was replaced with a decoupled detection head (IDetect-Decoupled) with implicit knowledge learning. Among these improvements, the CA module was used to identify and locate targets of interest more effectively, and was combined with deformable convolution and the decoupled detection head to optimize model performance. The WIoU loss function was used to learn local and global information, improving the accuracy and speed of broken-shell egg detection while reducing missed and false detections. The generalization and robustness of the model were further improved using data augmentation. The training set was processed with random cropping, random horizontal and vertical flipping with 50% probability, random rotation and scaling with 50% probability, random noise, and brightness and color adjustments, as sketched in the example below. After augmentation, the dataset was screened to remove unqualified images. The final dataset contained 1729 images, including 3782 intact eggs and 1574 cracked eggs. Experimental results on the PC side showed that the improved model achieved better performance, with a mean average precision (mAP) of 94.0% on the test set, 2.9 percentage points higher than that of the original model, while the single-image detection time of 13.1 ms was only 1.0 ms longer. The number of parameters of the improved model was 3.64×10⁷, a decrease of 2.1% compared with the original model.
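The augmentation pipeline described above can be expressed as a short data-processing sketch. The following is a minimal illustration assuming the Albumentations library and YOLO-format bounding-box labels; the crop size, rotation/scale limits, and the probabilities other than the stated 50% flips and rotation/scaling are placeholders, not the authors' exact settings.

```python
import albumentations as A

# Minimal sketch of the described augmentation pipeline (assumed library:
# Albumentations). The 640x640 crop and the parameter ranges below are
# illustrative placeholders rather than the authors' exact configuration.
train_transform = A.Compose(
    [
        A.RandomCrop(height=640, width=640),                 # random cropping
        A.HorizontalFlip(p=0.5),                             # 50% random horizontal flip
        A.VerticalFlip(p=0.5),                               # 50% random vertical flip
        A.ShiftScaleRotate(shift_limit=0.0, scale_limit=0.2,
                           rotate_limit=15, p=0.5),          # 50% random rotation and scaling
        A.GaussNoise(p=0.3),                                 # random noise
        A.RandomBrightnessContrast(p=0.3),                   # brightness adjustment
        A.HueSaturationValue(p=0.3),                         # color adjustment
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

# Usage: augmented = train_transform(image=img, bboxes=boxes, class_labels=labels)
```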
In addition, the improved YOLOv7 model increased the mAP for broken-shell eggs by 5.4, 3.4, and 4.5 percentage points compared with the SSD, Faster R-CNN, and YOLOv5 models, respectively. The best detection accuracy was achieved for broken-shell eggs, indicating the effectiveness of the improved model. Finally, the model was converted in format and deployed on the device side, and online detection and verification were carried out under the ONNX Runtime deep learning inference framework, as illustrated in the sketch below. The improved model reduced the false detection rate of broken eggs by 3.8 percentage points, while the missed detection rate remained unchanged. The average frame rate of online detection was about 54 frames/s. The improved model showed strong robustness and fully met the requirements of online real-time detection. These findings can provide a technical reference for the online detection of broken-shell eggs.
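As an illustration of the device-side deployment described above, the following is a minimal ONNX Runtime inference sketch. The model file name, 640×640 input size, camera index, and execution provider are assumptions for illustration, and the decoding and non-maximum suppression of the YOLOv7 outputs are omitted.

```python
import cv2
import numpy as np
import onnxruntime as ort

# Minimal sketch of device-side inference with ONNX Runtime. "yolov7_egg.onnx",
# the 640x640 input size, and the CPU provider are illustrative assumptions.
session = ort.InferenceSession("yolov7_egg.onnx",
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Resize to the network input size and convert BGR HWC uint8 to RGB NCHW float32."""
    img = cv2.resize(frame, (640, 640))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    return np.transpose(img, (2, 0, 1))[None, ...]           # shape (1, 3, 640, 640)

cap = cv2.VideoCapture(0)                                    # online camera stream (assumed index 0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    outputs = session.run(None, {input_name: preprocess(frame)})
    # outputs[0] holds the raw predictions; box decoding, confidence filtering,
    # and non-maximum suppression would follow here.
cap.release()
```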