Research Article

Computer vision classification detection of chicken parts based on optimized Swin-Transformer

Article: 2347480 | Received 26 Jan 2024, Accepted 19 Apr 2024, Published online: 08 May 2024
 

ABSTRACT

To achieve real-time classification and detection of chicken parts, this study proposes an optimized Swin-Transformer method. The method first leverages the Transformer’s self-attention structure to capture more comprehensive high-level visual semantic information from chicken part images. In the preprocessing stage, image enhancement was applied to strengthen the feature information of the images, and transfer learning was used to train and optimize the Swin-Transformer model on the enhanced chicken parts dataset for classification and detection. The model was then compared with four models commonly used in object detection tasks: YOLOv3-Darknet53, YOLOv3-MobileNetv3, SSD-MobileNetv3, and SSD-VGG16. The results indicate that the Swin-Transformer model outperforms these models, achieving mAP values higher by 1.62%, 2.13%, 5.26%, and 4.48%, and detection times shorter by 16.18 ms, 5.08 ms, 9.38 ms, and 23.48 ms, respectively. The proposed method meets production line requirements while exhibiting superior performance and greater robustness compared with existing conventional methods.
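As an illustration of the transfer-learning step described in the abstract, the sketch below fine-tunes an ImageNet-pretrained Swin-T classifier on augmented chicken-part images using PyTorch/torchvision. The augmentation operations, class count, and hyperparameters are assumptions for illustration only; the paper's exact enhancement pipeline and detection head are not reproduced here.

    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    # Hypothetical augmentation pipeline: the paper applies image enhancement
    # in preprocessing, but the exact operations are not specified here; the
    # flip and color-jitter steps are placeholder choices.
    train_transform = transforms.Compose([
        transforms.Resize((224, 224)),  # Swin-T's expected input size
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),  # ImageNet statistics
    ])

    # Transfer learning: start from ImageNet-pretrained weights and replace
    # the classification head for the chicken-part classes (count assumed).
    num_classes = 4  # assumption: e.g. breast, wing, thigh, drumstick
    model = models.swin_t(weights=models.Swin_T_Weights.IMAGENET1K_V1)
    model.head = nn.Linear(model.head.in_features, num_classes)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
        """One fine-tuning step on a batch of augmented chicken-part images."""
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()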

Acknowledgments

The authors extend their gratitude to Shida Zhao and Shucai Wang for their technical assistance with this study.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The data used to support the findings of this study are available from the corresponding author upon request.

Additional information

Funding

This work was supported by the National Natural Science Foundation of China [51905387], the Scientific Research Project of the Department of Education of Hubei Province [D20211601], and the Research Project of Wuhan Polytechnic University [2021Y26].