Generative domain-adapted adversarial auto-encoder model for enhanced ultrasonic imaging applications
Published in: NDT & E International: independent nondestructive testing and evaluation, 2024-12, Vol. 148, p. 103234, Article 103234
Main authors: , , , ,
Format: Article
Language: English
Online access: Full text
Abstract: In this study, we propose a class-conditioned Generative Adversarial Autoencoder (cGAAE) to improve the realism of simulated ultrasonic imaging techniques, in particular the Multi-modal Total Focusing Method (M-TFM), exploiting the availability of both simulated and experimental TFM images. In particular, this work studies the inspection of a complex-geometry block representative of a weld-inspection problem addressed with an ultrasonic multi-element probe. The cGAAE follows a tailored learning schema, trained in a semi-supervised fashion on a labeled mixture of synthetic (class 0) and experimental (class 1) M-TFM images obtained under different meaningful inspection set-up parameters (i.e., the celerity of the transverse ultrasonic wave, the specimen back-wall slope and height, and the flaw tilt and height). The schema combines learning stages involving class-conditioned spatial transformers and arbitrary style transfer, which endows the cGAAE with powerful generative features, such as quasi-real-time generation of M-TFM images by sweeping the inspection parameters. We exploit the cGAAE model to improve the realism of simulated M-TFM images and to enhance the accuracy of the inverse problem, which aims at estimating the inspection parameters from experimental acquisitions.
Highlights:
• A deep learning generative model is conceived for a small non-destructive data set.
• State-of-the-art generative networks from large public data sets are applied.
• The Spatial Transformer is adapted for a class-aware generator and discriminator.
• A domain-adapted representation is learned and inspected for generation purposes.
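The class conditioning described in the abstract can be illustrated with a minimal sketch of a class-conditioned adversarial autoencoder (forward pass only, no training): an encoder maps a flattened image to a latent code, while the decoder and the latent-space discriminator are both conditioned on the domain label (synthetic = 0, experimental = 1). All layer sizes, the single-layer linear networks, and the names (`CGAAESketch`, `one_hot`, etc.) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)


def one_hot(label, n_classes=2):
    """Encode the domain label (0 = synthetic, 1 = experimental) as a one-hot vector."""
    v = np.zeros(n_classes)
    v[label] = 1.0
    return v


class CGAAESketch:
    """Minimal class-conditioned adversarial autoencoder (forward pass only).

    Only the conditioning idea from the abstract is shown here; the
    single-layer networks and dimensions are illustrative assumptions.
    """

    def __init__(self, img_dim=64, latent_dim=8, n_classes=2):
        self.n_classes = n_classes
        # Encoder: image -> latent code.
        self.We = rng.normal(0.0, 0.1, (latent_dim, img_dim))
        # Decoder sees the latent code concatenated with the class label,
        # so the same latent content can be rendered in either domain.
        self.Wd = rng.normal(0.0, 0.1, (img_dim, latent_dim + n_classes))
        # Adversarial discriminator acts on the (latent, class) pair.
        self.Wc = rng.normal(0.0, 0.1, (1, latent_dim + n_classes))

    def encode(self, x):
        return np.tanh(self.We @ x)  # latent code in [-1, 1]

    def decode(self, z, label):
        zc = np.concatenate([z, one_hot(label, self.n_classes)])
        return self.Wd @ zc

    def discriminate(self, z, label):
        zc = np.concatenate([z, one_hot(label, self.n_classes)])
        logit = (self.Wc @ zc)[0]
        return 1.0 / (1.0 + np.exp(-logit))  # probability the latent matches the prior


model = CGAAESketch()
x_sim = rng.normal(size=64)              # stand-in for a flattened simulated M-TFM image
z = model.encode(x_sim)
x_exp_style = model.decode(z, label=1)   # re-render the latent under the experimental class
p = model.discriminate(z, label=0)
print(x_exp_style.shape, 0.0 < p < 1.0)  # prints: (64,) True
```

Decoding the same latent code under class 1 is what the domain-adaptation idea amounts to here: the content is shared through `z`, while the class label selects the rendering domain.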
ISSN: 0963-8695; 1879-1174
DOI: 10.1016/j.ndteint.2024.103234