CarcassFormer: An End-to-end Transformer-based Framework for Simultaneous Localization, Segmentation and Classification of Poultry Carcass Defect
Format: Article
Language: English
Online access: Order full text
Abstract: In the food industry, assessing the quality of poultry carcasses during processing is a crucial step. This study proposes an effective approach for automating the assessment of carcass quality without requiring skilled labor or inspector involvement. The proposed system is based on machine learning (ML) and computer vision (CV) techniques, enabling automated defect detection and carcass quality assessment. To this end, an end-to-end framework called CarcassFormer is introduced. It is built upon a Transformer-based architecture designed to effectively extract visual representations while simultaneously detecting, segmenting, and classifying poultry carcass defects. Our proposed framework is capable of analyzing imperfections resulting from production and transport welfare issues, as well as from malfunctions of processing-plant stunners, scalders, pickers, and other equipment. To benchmark the framework, a dataset of 7,321 images was acquired, containing both single and multiple carcasses per image. In this study, the performance of the CarcassFormer system is compared with other state-of-the-art (SOTA) approaches on classification, detection, and segmentation tasks. In extensive quantitative experiments, our framework consistently outperforms existing methods, with improvements across evaluation metrics such as AP, AP@50, and AP@75. Furthermore, the qualitative results highlight the strengths of CarcassFormer in capturing fine details, including feathers, and in accurately localizing and segmenting carcasses with high precision. To facilitate further research and collaboration, the pre-trained model and source code of CarcassFormer are available for research purposes at: https://github.com/UARK-AICV/CarcassFormer
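The abstract does not spell out the architecture, but the behavior it describes (a Transformer that detects, segments, and classifies instances in one forward pass) matches the general pattern of a query-based multi-task head. The PyTorch sketch below is purely illustrative of that pattern; the class name, layer sizes, and number of defect classes are hypothetical and are not taken from the CarcassFormer codebase.

```python
import torch
import torch.nn as nn

class MultiTaskQueryHead(nn.Module):
    """Illustrative query-based head predicting class logits, boxes, and masks
    in one pass. Names and sizes are hypothetical, not from CarcassFormer."""
    def __init__(self, feat_dim=256, num_queries=100, num_classes=4):
        super().__init__()
        self.queries = nn.Embedding(num_queries, feat_dim)
        layer = nn.TransformerDecoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.class_head = nn.Linear(feat_dim, num_classes + 1)        # +1 for "no object"
        self.box_head = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, 4))
        self.mask_embed = nn.Linear(feat_dim, feat_dim)

    def forward(self, pixel_feats):
        # pixel_feats: (B, C, H, W) feature map from a backbone / pixel decoder
        b, c, h, w = pixel_feats.shape
        memory = pixel_feats.flatten(2).transpose(1, 2)               # (B, H*W, C)
        q = self.queries.weight.unsqueeze(0).expand(b, -1, -1)        # (B, Q, C)
        q = self.decoder(q, memory)                                   # queries attend to the image
        logits = self.class_head(q)                                   # (B, Q, K+1) defect classes
        boxes = self.box_head(q).sigmoid()                            # (B, Q, 4) normalized cx, cy, w, h
        masks = torch.einsum("bqc,bchw->bqhw", self.mask_embed(q), pixel_feats)
        return logits, boxes, masks                                   # masks: (B, Q, H, W) logits

# Quick shape check with random features.
feats = torch.randn(1, 256, 64, 64)
logits, boxes, masks = MultiTaskQueryHead()(feats)
```

The reported AP, AP@50, and AP@75 are the standard COCO-style average-precision metrics. Assuming COCO-format ground-truth and prediction files (the file names below are placeholders), they can be computed with pycocotools as shown here; this is the conventional evaluation recipe, not necessarily the authors' exact script.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder file names; assumes COCO-format annotations and detection results.
coco_gt = COCO("carcass_val_annotations.json")
coco_dt = coco_gt.loadRes("carcassformer_predictions.json")

for iou_type in ("bbox", "segm"):            # box detection and instance segmentation
    ev = COCOeval(coco_gt, coco_dt, iouType=iou_type)
    ev.evaluate()
    ev.accumulate()
    ev.summarize()                           # prints AP (IoU=0.50:0.95), AP@50, AP@75, ...
```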
DOI: 10.48550/arxiv.2404.11429