Attention-based generative adversarial network with internal damage segmentation using thermography
Published in: Automation in Construction, 2022-09, Vol. 141, p. 104412, Article 104412
Main authors: ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: This paper describes a real-time, high-performance deep-learning network for pixel-level segmentation of internal damage in concrete members using active thermography. Unlike surface damage, collecting and preparing ground-truth data for internal damage is extremely challenging and time consuming. To overcome this critical limitation, an attention-based generative adversarial network (AGAN) was developed to generate synthetic images for training the proposed internal damage segmentation network (IDSNet). IDSNet outperforms other state-of-the-art networks, with a mean intersection over union (mIoU) of 0.900, positive predictive value of 0.952, F1-score of 0.941, and sensitivity of 0.942 over a test set. The AGAN-generated synthetic training data improve the mIoU of IDSNet by 12%. IDSNet processes 640 × 480 × 3 thermal images in real time at 74 frames per second owing to its extremely lightweight architecture with only 0.085 M learnable parameters.
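The reported figures follow the standard pixel-wise definitions of these metrics. Below is a minimal sketch (not the authors' code; plain NumPy is assumed) of how IoU, positive predictive value (PPV), sensitivity, and F1-score are conventionally computed from a predicted and a ground-truth binary mask; mIoU is then the average of IoU over the test images.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray):
    """Pixel-wise IoU, PPV (precision), sensitivity (recall), and F1
    for a pair of binary masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()     # damage pixels correctly found
    fp = np.logical_and(pred, ~gt).sum()    # false alarms
    fn = np.logical_and(~pred, gt).sum()    # missed damage pixels
    eps = 1e-9                              # guard against empty masks
    iou = tp / (tp + fp + fn + eps)
    ppv = tp / (tp + fp + eps)              # positive predictive value
    sen = tp / (tp + fn + eps)              # sensitivity
    f1 = 2 * ppv * sen / (ppv + sen + eps)  # harmonic mean of PPV and sensitivity
    return iou, ppv, sen, f1

# mIoU over a test set is the mean of per-image IoU values:
# miou = np.mean([segmentation_metrics(p, g)[0] for p, g in test_pairs])
```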
Highlights:
• Real-time deep-learning internal damage segmentation network using thermography.
• Internal damage segmentation network (IDSNet) proposed for concrete members.
• Attention-based GAN (AGAN) for generating synthetic training data for IDSNet.
• AGAN synthetic data improve the mean intersection over union (mIoU) of IDSNet by 12%.
• IDSNet outperforms state-of-the-art models with a 90% mIoU.
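The real-time claim in the abstract (74 frames per second on 640 × 480 × 3 inputs with 0.085 M learnable parameters) corresponds to a standard inference-throughput benchmark. A minimal sketch, assuming a PyTorch implementation: `model` here is any `torch.nn.Module` stand-in for IDSNet, whose code is not given in this record, and the warm-up and synchronization steps are ordinary GPU-timing practice.

```python
import time
import torch

def benchmark_fps(model: torch.nn.Module, n_iters: int = 100,
                  device: str = "cuda") -> float:
    """Average inference FPS on a single 640 x 480 x 3 frame (N, C, H, W)."""
    model = model.to(device).eval()
    x = torch.randn(1, 3, 480, 640, device=device)   # dummy thermal frame
    n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"learnable parameters: {n_params / 1e6:.3f} M")  # paper reports 0.085 M
    with torch.no_grad():
        for _ in range(10):                          # warm-up passes
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()                 # flush queued GPU work
        start = time.perf_counter()
        for _ in range(n_iters):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return n_iters / (time.perf_counter() - start)   # frames per second
```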
ISSN: 0926-5805, 1872-7891
DOI: 10.1016/j.autcon.2022.104412