CACFNet: Fabric defect detection via context-aware attention cascaded feedback network

Bibliographic Details
Published in: Textile Research Journal, 2023-07, Vol. 93 (13-14), pp. 3036-3055
Authors: Liu, Zhoufeng; Tian, Bo; Li, Chunlei; Ding, Shumin; Xi, Jiangtao
Format: Article
Language: English
Online access: Full text
Abstract: Fabric defect detection plays an irreplaceable role in quality control for the textile manufacturing industry, but it remains a challenging task due to the diversity and complexity of defects and environmental factors. Visual saliency models that imitate the human visual system can quickly locate defect regions within a complex textured background. However, most visual saliency-based methods still suffer from incomplete predictions owing to the variability of fabric defects and their low contrast with the background. In this paper, we develop a context-aware attention cascaded feedback network for fabric defect detection to achieve more accurate predictions, in which a parallel context extractor is designed to characterize multi-scale contextual information. Moreover, a top-down attention cascaded feedback module is devised to adaptively select important multi-scale complementary information and transmit it to the adjacent shallower layer, compensating for the inconsistency of information among layers and enabling accurate localization. Finally, a multi-level loss function is applied to guide the model toward more accurate predictions by optimizing multiple side-output predictions. Experimental results on two fabric datasets that we built, evaluated under six widely used metrics, demonstrate that the proposed framework remarkably outperforms state-of-the-art models.
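
The abstract describes a parallel context extractor for multi-scale context and a multi-level loss over side-output predictions. The paper itself does not appear in this record, so the following is only a minimal PyTorch sketch of those two ideas, assuming parallel dilated-convolution branches for the context extractor and binary cross-entropy deep supervision for the multi-level loss; the branch count, channel widths, dilation rates, and number of side outputs are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (assumptions): a parallel multi-scale context extractor built from
# dilated-convolution branches, and a multi-level (deeply supervised) loss over side
# outputs. Channel widths, dilation rates, and side-output count are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ParallelContextExtractor(nn.Module):
    """Captures multi-scale context with parallel dilated-convolution branches."""

    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # Fuse the concatenated branch outputs back to out_ch channels.
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))


def multi_level_loss(side_outputs, target):
    """Sum of BCE losses over side-output logits (deep supervision).

    Each side output is upsampled to the ground-truth resolution before the loss.
    """
    loss = 0.0
    for pred in side_outputs:
        pred = F.interpolate(pred, size=target.shape[-2:], mode="bilinear", align_corners=False)
        loss = loss + F.binary_cross_entropy_with_logits(pred, target)
    return loss


if __name__ == "__main__":
    extractor = ParallelContextExtractor(in_ch=64, out_ch=64)
    feat = extractor(torch.randn(2, 64, 32, 32))             # (2, 64, 32, 32)
    heads = [torch.randn(2, 1, s, s) for s in (8, 16, 32)]   # mock side-output logits
    gt = torch.randint(0, 2, (2, 1, 32, 32)).float()         # binary defect mask
    print(feat.shape, multi_level_loss(heads, gt).item())
```

In this sketch, summing the side-output losses at multiple resolutions is one common way to realize deep supervision; how the paper weights or combines its side outputs is not specified in the abstract.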
ISSN: 0040-5175, 1746-7748
DOI: 10.1177/00405175231151439