Fabric defect image generation method based on the dual-stage W-net generative adversarial network

Full description

Bibliographic Details
Published in: Textile Research Journal, 2024-07, Vol. 94 (13-14), pp. 1543-1557
Main authors: Hu, Xuejuan, Liang, Yifei, Wang, Hengliang, Tan, Yadan, Liu, Shiqian, Pan, Fudong, Wu, Qingyang, He, Zhengdi
Format: Article
Language: English
Description
Summary: Due to the intricate and diverse nature of textile defects, detecting them poses an exceptionally challenging task. Compared with conventional defect detection methods, deep learning-based methods generally exhibit superior precision. However, deep learning-based defect detection requires a substantial volume of training data, which is particularly difficult to accumulate for textile flaws. To augment the fabric defect dataset and improve fabric defect detection accuracy, we propose a fabric defect image generation method based on the Pix2Pix generative adversarial network. This approach devises a novel dual-stage W-net generative adversarial network. By increasing the network depth, the model can effectively extract intricate textile image features, thereby expanding its information-sharing capacity. The dual-stage W-net generative adversarial network can generate the desired defects on defect-free textile images. We assess the quality of the generated fabric defect images: peak signal-to-noise ratio and structural similarity values exceed 30 and 0.930, respectively, and the learned perceptual image patch similarity value is no greater than 0.085, demonstrating the effectiveness of the fabric defect data augmentation. The effectiveness of the dual-stage W-net generative adversarial network is further established through multiple comparative experiments on the generated images. Comparing detection performance before and after data augmentation, mean average precision improves by 6.13% and 14.57% on the YOLO v5 and Faster R-CNN (faster region-based convolutional neural network) detection models, respectively.
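The PSNR and SSIM thresholds reported above follow the standard full-reference definitions of those metrics. As a minimal sketch (not the authors' evaluation code), the two can be computed in NumPy as below; note the SSIM here is a simplified single-window variant, whereas the standard metric averages over sliding Gaussian windows:

```python
import numpy as np

def psnr(img1, img2, data_range=255.0):
    # Peak signal-to-noise ratio between two images (higher is better).
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(img1, img2, data_range=255.0):
    # Simplified SSIM computed over one global window; the reference
    # formulation averages local statistics over 11x11 Gaussian windows.
    x = img1.astype(np.float64)
    y = img2.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )
```

In this setting, each generated defect image would be compared against its reference; PSNR above 30 dB and SSIM close to 1 indicate the generated textures stay faithful to the real fabric.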
ISSN: 0040-5175, 1746-7748
DOI: 10.1177/00405175241233942