Classification of pig sounds based on a deep neural network
Abstract
Pig sounds reflect the stress and health status of pigs, and they are among the most easily collected biomarkers, obtainable through non-contact methods. To improve the classification accuracy of pig sound signals, this study used spectrograms to visualize time-frequency characteristics and combined them with a deep neural network classification model. Four topics were addressed: 1) A sound dataset was constructed. Based on the different sound signals, the classification network could recognize the pig's behavior. Sounds made by pigs in a normal state were labeled as grunts. Sounds made by pigs in a frightened state, such as when being injected or chased, were labeled as screams. Before feeding, when pigs saw the food, they made long irritable sounds, which were labeled as howls of hunger. All pig sounds were collected on-farm by a sound collection box. A laptop served as the host computer, displaying all working parameters of the collection box, and data transmission and storage followed a Client/Server architecture. Workers then labeled the sounds according to the observed behavior. 2) Spectrograms of the different sounds formed the training and test datasets of the image recognition network. A pig sound is stationary over a short duration, so repeatedly computing the frequency spectrum of the signal in the vicinity of successive instants of time yields a time-frequency spectrum. The study investigated the spectrogram parameters best suited to the structure of the deep neural network. Experiments showed that classification accuracy was highest with a segment length of 256 samples and an overlap of 128 samples; this spectrogram optimization improved recognition accuracy by 1.8%. 3) The deep neural network was designed.
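The short-time spectrum computation described above can be sketched as follows. The segment length (256 samples) and overlap (128 samples) come from the study; the 16 kHz sampling rate, the scipy-based implementation, and the synthetic chirp stand-in for a pig vocalization are assumptions for illustration only:

```python
import numpy as np
from scipy import signal

def pig_sound_spectrogram(samples, fs=16000, nperseg=256, noverlap=128):
    """Log-magnitude spectrogram with the segment length (256 samples) and
    overlap (128 samples) reported as optimal in the study. The 16 kHz
    sampling rate is an assumption; substitute the recorder's actual rate."""
    f, t, Sxx = signal.spectrogram(samples, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return f, t, 10 * np.log10(Sxx + 1e-10)  # dB scale; epsilon avoids log(0)

# Example on a synthetic 1-second chirp standing in for a pig vocalization
fs = 16000
t = np.linspace(0, 1, fs, endpoint=False)
wave = signal.chirp(t, f0=200, f1=2000, t1=1.0)
f, times, S = pig_sound_spectrogram(wave, fs=fs)
print(S.shape)  # (129, 124): 129 frequency bins x 124 time frames
```

With a hop of 128 samples (256 − 128), one second of 16 kHz audio yields 124 frames of 129 frequency bins each, which is the image fed to the classification network.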
The study used the MobileNetV2 network for recognition; it is based on an inverted residual structure in which the shortcut connections run between the thin bottleneck layers. Targeting portable platforms in real applications, a width factor and a resolution factor were introduced to define a smaller and more efficient architecture. In addition, the Adam optimizer was substituted for the underlying RMSprop optimizer, making the loss function converge faster: Adam computes an adaptive per-parameter learning rate from an estimate of the first moment (mean) of the gradient while also making full use of the second moment. Results showed that accuracy was highest with a width factor of 0.5. 4) Comparison experiments were conducted. Support Vector Machine (SVM), Gradient Boosting Decision Tree (GBDT), Random Forest (RF), and Extra Trees (ET) algorithms were compared with the proposed pig sound recognition network, all trained and tested on the same sound dataset. Specifically, the proposed algorithm increased the recognition accuracy of screams from 84.5% to 97.1% and of howls from 86.1% to 97.5%, while the recognition accuracy of grunts decreased from 100% to 97.3%, a consequence of the differing principles of the recognition algorithms. Furthermore, through experiments on the width and resolution factors, a smaller and more efficient model was derived from the standard MobileNetV2, and its running speed was significantly improved to meet the needs of practical applications while accuracy was maintained. This study showed that abnormal pig vocalizations are related to abnormal behaviors, so sound recognition can help monitor behavior. In future work, abnormal behavior detection combining sound recognition and video analysis will be discussed.