Semi-supervised learning with GAN for automatic defect detection from images

Bibliographic details
Published in: Automation in Construction, 2021-08, Vol. 128, p. 103764, Article 103764
Authors: Zhang, Gaowei; Pan, Yue; Zhang, Limao
Format: Article
Language: English
Online access: Full text
Abstract: Toward automatic defect detection from images, this research develops a semi-supervised generative adversarial network (SSGAN) with two sub-networks for more precise segmentation results at the pixel level. The first is the segmentation network, built on a dual attention mechanism, which segments defects from both labeled and unlabeled images. Specifically, the attention mechanism extracts rich, global representations of pixels in both the spatial and channel dimensions for better feature representation. The second is the fully convolutional discriminator (FCD) network, which employs two loss functions (an adversarial loss and a cross-entropy loss) to generate confidence maps for unlabeled images in a semi-supervised learning manner. In contrast to most existing methods, which rely heavily on labeled or weakly labeled images, the developed SSGAN model can leverage unlabeled images to enhance segmentation performance and reduce the data-labeling burden. The effectiveness of the proposed SSGAN model is demonstrated on a public dataset with four classes of steel defects. In comparison with other state-of-the-art methods, the developed model reaches promising mean Intersection over Union (IoU) scores of 79.0% and 81.8% when using 1/8 and 1/4 of the labeled data, respectively. Moreover, the proposed SSGAN is robust and flexible in segmentation across various scenarios.

Highlights:
• A semi-supervised generative adversarial network with two sub-networks is developed.
• It leverages unlabeled images to enhance segmentation performance and alleviate the labeling task.
• Its effectiveness is verified on a public dataset with four classes of steel defects.
• With 1/8 and 1/4 of the labeled data, it reaches mean Intersection over Union scores of 79.0% and 81.8%, respectively.
• The developed approach is robust and flexible in segmenting various scenarios.
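To make the described training scheme concrete, the following is a minimal PyTorch sketch of a confidence-guided semi-supervised loss of this kind: a segmentation network is trained with cross-entropy on labeled images, an adversarial term from a fully convolutional discriminator, and a pseudo-label term on unlabeled images masked by the discriminator's per-pixel confidence. The network objects (S, D), loss weights, and confidence threshold are illustrative assumptions, not the paper's actual architecture or hyperparameters.

# Sketch of the combined loss for one training step of the segmentation network.
# S: segmentation network, images -> per-pixel class logits (N, C, H, W)
# D: fully convolutional discriminator, probability maps -> per-pixel realness logits (N, 1, H, W)
# Weights and threshold below are placeholders.
import torch
import torch.nn.functional as F

def segmentation_losses(S, D, labeled_img, labels, unlabeled_img,
                        lambda_adv=0.01, lambda_semi=0.1, conf_thresh=0.2):
    # Supervised branch: standard pixel-wise cross-entropy on labeled images.
    logits_l = S(labeled_img)
    loss_ce = F.cross_entropy(logits_l, labels)

    # Adversarial branch: push S toward probability maps the FCD
    # judges as ground-truth-like at every pixel (target label 1).
    prob_l = torch.softmax(logits_l, dim=1)
    d_out_l = D(prob_l)
    loss_adv = F.binary_cross_entropy_with_logits(
        d_out_l, torch.ones_like(d_out_l))

    # Semi-supervised branch: on unlabeled images, take the FCD's per-pixel
    # confidence map, keep only pixels the FCD trusts, and use the current
    # prediction there as a pseudo-label for a masked cross-entropy.
    logits_u = S(unlabeled_img)
    prob_u = torch.softmax(logits_u, dim=1)
    with torch.no_grad():
        confidence = torch.sigmoid(D(prob_u))          # (N, 1, H, W) in [0, 1]
        pseudo = prob_u.argmax(dim=1)                  # (N, H, W) hard pseudo-labels
        mask = confidence.squeeze(1) > conf_thresh     # trusted pixels only
    loss_pix = F.cross_entropy(logits_u, pseudo, reduction='none')
    loss_semi = (loss_pix * mask.float()).sum() / mask.float().sum().clamp(min=1.0)

    return loss_ce + lambda_adv * loss_adv + lambda_semi * loss_semi

Masking the pseudo-label loss with the discriminator's confidence map is what lets unlabeled images contribute to training without propagating unreliable predictions.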
ISSN: 0926-5805, 1872-7891
DOI: 10.1016/j.autcon.2021.103764