Research on automatic classification and detection of chicken parts based on deep learning algorithm
Yan Chen, Xianhui Peng, Lu Cai, Ming Jiao, Dandan Fu, Chen Chen Xu, Peng Zhang
- Food Science
Accurate classification and identification of chicken parts are critical to improving productivity and processing speed in poultry processing plants. However, overlap between chicken parts reduces the effectiveness of the identification process. To address this issue, this study proposed a real-time classification and detection method for chicken parts based on the YOLOV4 deep learning algorithm. The method identifies segmented chicken parts on the assembly line accurately and in real time, thereby improving the efficiency of poultry processing. First, 600 images containing multiple chicken part samples were collected and expanded by image augmentation to build a chicken part dataset, which was divided according to a 6:2:2 ratio into a training set of 1200 images, a test set of 400 images, and a validation set of 400 images. Second, the single-stage target detector YOLO was used to predict the categories and positions of the chicken leg, chicken wing, and chicken breast in each image, enabling real-time and efficient classification and detection of chicken parts. Finally, the mean average precision (mAP) and the processing time per image were used as key metrics to evaluate the model. In addition, four other target detection algorithms were compared with YOLOV4-CSPDarknet53 in this study: YOLOV3-Darknet53, YOLOV3-MobileNetv3, SSD-MobileNetv3, and SSD-VGG16. A comprehensive comparison test was conducted to assess the classification and detection performance of these models on chicken parts.
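The 6:2:2 train/test/validation split described above can be sketched as a small Python helper. This is an illustrative sketch only; the authors' actual tooling and file naming are not specified, and the `split_dataset` function and `img_*.jpg` names are hypothetical.

```python
import random

def split_dataset(paths, ratios=(0.6, 0.2, 0.2), seed=42):
    """Shuffle and split a list of image paths into train/test/validation
    subsets according to the given ratios (here 6:2:2).

    Hypothetical helper illustrating the split in the abstract; the
    paper does not describe its actual implementation.
    """
    paths = list(paths)
    random.Random(seed).shuffle(paths)  # deterministic shuffle for reproducibility
    n = len(paths)
    n_train = int(n * ratios[0])
    n_test = int(n * ratios[1])
    train = paths[:n_train]
    test = paths[n_train:n_train + n_test]
    val = paths[n_train + n_test:]
    return train, test, val

# Example: an augmented dataset of 2000 images yields 1200 / 400 / 400
images = [f"img_{i:04d}.jpg" for i in range(2000)]
train, test, val = split_dataset(images)
```

With 2000 augmented images, this produces the 1200/400/400 partition stated in the abstract.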
For the chicken part dataset, the YOLOV4-CSPDarknet53 model achieved an mAP of 98.86% with an inference speed of 22.2 ms per image, exceeding the mAP of YOLOV3-Darknet53, YOLOV3-MobileNetv3, SSD-MobileNetv3, and SSD-VGG16 by 3.27%, 3.78%, 6.91%, and 6.13%, respectively, while reducing the average detection time by 13, 1.9, 6.2, and 20.3 ms, respectively. In summary, the proposed chicken part classification and detection method can detect multiple chicken parts simultaneously with high accuracy and speed. It also effectively addresses the problem of accurately identifying individual chicken parts under occlusion, thereby reducing waste on the assembly line.
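The mAP metric used to compare the models above is the mean, over the three part classes, of each class's average precision (AP). A minimal sketch of the standard all-point-interpolation AP computation is shown below; the paper does not state its exact evaluation protocol (e.g., IoU threshold or interpolation scheme), so this is an assumption based on the common Pascal VOC style evaluation, and the `tp_flags` inputs are toy values.

```python
import numpy as np

def average_precision(tp_flags, num_gt):
    """AP for one class, given detections sorted by descending confidence.

    tp_flags: 1 if a detection matched a previously unmatched ground-truth
    box (e.g., at IoU >= 0.5), else 0. num_gt: number of ground-truth boxes.
    Uses all-point interpolation (a common convention; assumed here).
    """
    tp_flags = np.asarray(tp_flags, dtype=float)
    tp = np.cumsum(tp_flags)
    fp = np.cumsum(1.0 - tp_flags)
    recall = tp / num_gt
    precision = tp / (tp + fp)
    # Build the monotonically non-increasing precision envelope
    prec = np.concatenate(([0.0], precision, [0.0]))
    rec = np.concatenate(([0.0], recall, [1.0]))
    for i in range(len(prec) - 2, -1, -1):
        prec[i] = max(prec[i], prec[i + 1])
    # Sum area under the envelope where recall changes
    idx = np.where(rec[1:] != rec[:-1])[0]
    return float(np.sum((rec[idx + 1] - rec[idx]) * prec[idx + 1]))

# mAP over the three classes (leg, wing, breast), with toy match flags:
aps = [average_precision([1, 1, 0, 1], num_gt=4),
       average_precision([1, 0, 1], num_gt=3),
       average_precision([1, 1], num_gt=2)]
mAP = sum(aps) / len(aps)
```

A class whose detections are all true positives and cover every ground-truth box gets AP = 1.0, and mAP averages these per-class scores.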
The aim of this study is to provide visual technical support for minimizing waste and resource consumption during the sorting and cutting of chicken parts in poultry production and processing facilities. Furthermore, given consumers' diverse demands and preferences regarding chicken parts, this research can facilitate product processing tailored to consumer preferences.