An unsupervised defect detection model for a dry carbon fiber textile



Bibliographic Details
Published in: Journal of Intelligent Manufacturing, 2022-10, Vol. 33 (7), p. 2075–2092
Main authors: Szarski, Martin; Chauhan, Sunita
Format: Article
Language: English
Online access: Full text
Description
Abstract: Inspection of dry carbon textiles is a key step to ensure quality in aerospace manufacturing. Due to the rarity and variety of defects, collecting a comprehensive defect dataset is difficult, while collecting ‘normal’ data is comparatively easy. In this paper, we present an unsupervised defect detection method for carbon fiber textiles that meets four key criteria for industrial applicability: using only ‘normal’ data, achieving high accuracy even on small and subtle defects, allowing visual interpretation, and achieving real-time performance. We combine a Visual Transformer Encoder and a Normalizing Flow to gather global context from input images and directly produce an image likelihood, which is then used as an anomaly score. We demonstrate that when trained on only 150 normal samples, our method correctly detects 100% of anomalies with a 0% false positive rate on an industrial carbon fabric dataset with 34 real defect samples, including subtle stray fiber defects covering only 1% of image area, where previous methods are shown to fail. We validate the performance on the large public defect dataset MVTec-AD Textures, where we outperform previous work by 4–10%, proving the applicability of our method to other domains. Additionally, we propose a method to extract interpretable anomaly maps from Visual Transformer Attention Rollout and Image Likelihood Gradients that produces convincing explanations for detected anomalies. Finally, we show that the inference time for the model is acceptable at 32 ms, achieving real-time performance.
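The core idea in the abstract — train a normalizing flow on 'normal' data only, then use the resulting image likelihood as an anomaly score — can be sketched in miniature. This is a hypothetical simplification, not the paper's implementation: a single scalar feature and one affine flow layer stand in for the Visual Transformer features and deep flow described above.

```python
# Minimal sketch of likelihood-based anomaly detection with a single
# affine normalizing flow on 1-D features. Hypothetical stand-in for
# the paper's ViT-encoder + deep-flow pipeline.
import math

def fit_affine_flow(normal_features):
    # "Training" the affine flow z = (x - mu) / sigma on normal-only
    # data reduces to estimating mu and sigma from those samples.
    n = len(normal_features)
    mu = sum(normal_features) / n
    var = sum((x - mu) ** 2 for x in normal_features) / n
    return mu, max(math.sqrt(var), 1e-8)

def log_likelihood(x, mu, sigma):
    # Change of variables: log p(x) = log N(z; 0, 1) - log sigma,
    # with z = (x - mu) / sigma mapped to a standard-normal base.
    z = (x - mu) / sigma
    return -0.5 * (z * z + math.log(2 * math.pi)) - math.log(sigma)

def anomaly_score(x, mu, sigma):
    # Higher score = less likely under the learned 'normal' density.
    return -log_likelihood(x, mu, sigma)

if __name__ == "__main__":
    # Toy 'normal' feature values; a defect yields an outlying value.
    normal = [0.9, 1.0, 1.1, 0.95, 1.05, 1.0, 0.98, 1.02]
    mu, sigma = fit_affine_flow(normal)
    # Threshold at the worst score seen on normal data.
    threshold = max(anomaly_score(x, mu, sigma) for x in normal)
    print(anomaly_score(1.0, mu, sigma) <= threshold)  # in-distribution
    print(anomaly_score(5.0, mu, sigma) > threshold)   # defect-like outlier
```

The same thresholding logic applies when the feature is a full image embedding and the flow is a deep invertible network; only the density model changes.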
ISSN: 0956-5515; 1572-8145
DOI: 10.1007/s10845-022-01964-7