Artifact suppression for sparse view CT via transformer-based generative adversarial network
Saved in:
Published in: Biomedical Signal Processing and Control, 2024-09, Vol. 95, p. 106297, Article 106297
Main authors: , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract:
• A novel encoder-decoder transformer-based generative adversarial network is designed to suppress sparse view CT image artifacts.
• In the transformer, we utilize the multi-Dconv head transposed attention module, enhancing its feature extraction ability.
• To improve structure and detail recovery performance, we adopt the gated-Dconv feed-forward network in the transformer.
• Within the GAN learning framework, we adopt a discriminator to enhance the ability of the generator.
Sparse view CT images are often severely degraded by streak artifacts. Numerous studies have confirmed the remarkable progress made by deep learning (DL) in sparse view CT imaging scenarios. However, mainstream CNN-based methods are inefficient at capturing feature information over large regions. In this paper, a transformer-based generative adversarial network (SVT-GAN), designed to efficiently suppress artifacts in sparse view CT images, is proposed. We combine the advantages of transformer networks and adversarial learning in a single framework to improve the quality of sparse view CT image restoration results. The generator is primarily composed of an encoder-decoder structure that relies on the transformer model to learn multiscale local–global representations and exploit contextual information derived from distant artifacts. Moreover, in contrast with the standard transformer model, we utilize the multi-Dconv head transposed attention (MDTA) module to enhance the ability of the proposed approach to extract both local and nonlocal information and produce impressive structure and detail restoration results. To suppress the propagation of artifact features, the gated-Dconv feed-forward network (GDFN) is utilized. Within the GAN learning framework, we employ a simple nine-layer network as the discriminator to enhance the ability of the generator to suppress artifacts and retain features. Compared with recently developed state-of-the-art methods, the proposed model significantly reduces severe noise artifacts while preserving details on the AAPM and Real CT datasets. Qualitative and quantitative assessments demonstrate the competitive performance of the SVT-GAN.
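The abstract names two nonstandard transformer components, MDTA and GDFN. The PyTorch sketch below shows how such modules are commonly formulated, following the Restormer-style design they are known from; the channel counts, expansion factor, GroupNorm stand-in for layer normalization, and all other details are illustrative assumptions, not the SVT-GAN configuration.

```python
# Minimal sketch of MDTA and GDFN transformer components (assumed
# Restormer-style formulation; not the paper's exact configuration).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MDTA(nn.Module):
    """Multi-Dconv head transposed attention: attention is computed across
    channels (a C x C map) instead of across pixels, so cost scales with
    channel count rather than image size; 3x3 depthwise convs ("Dconv")
    inject the local context that gives the module its name."""

    def __init__(self, channels: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1)
        self.qkv_dconv = nn.Conv2d(channels * 3, channels * 3, kernel_size=3,
                                   padding=1, groups=channels * 3)
        self.project_out = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.qkv_dconv(self.qkv(x)).chunk(3, dim=1)
        # Fold the spatial dims into one axis: (b, heads, c/heads, h*w).
        q = q.reshape(b, self.num_heads, c // self.num_heads, h * w)
        k = k.reshape(b, self.num_heads, c // self.num_heads, h * w)
        v = v.reshape(b, self.num_heads, c // self.num_heads, h * w)
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature  # (c/h, c/h) map
        out = attn.softmax(dim=-1) @ v
        return self.project_out(out.reshape(b, c, h, w))


class GDFN(nn.Module):
    """Gated-Dconv feed-forward network: one branch gates the other, letting
    the block pass structural features through while damping artifacts."""

    def __init__(self, channels: int, expansion: float = 2.66):
        super().__init__()
        hidden = int(channels * expansion)
        self.project_in = nn.Conv2d(channels, hidden * 2, kernel_size=1)
        self.dconv = nn.Conv2d(hidden * 2, hidden * 2, kernel_size=3,
                               padding=1, groups=hidden * 2)
        self.project_out = nn.Conv2d(hidden, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = self.dconv(self.project_in(x)).chunk(2, dim=1)
        return self.project_out(F.gelu(x1) * x2)  # gating nonlinearity


class TransformerBlock(nn.Module):
    """Pre-norm residual block: x + MDTA(norm(x)), then x + GDFN(norm(x)).
    GroupNorm(1, C) stands in for the channel-wise LayerNorm."""

    def __init__(self, channels: int, num_heads: int):
        super().__init__()
        self.norm1 = nn.GroupNorm(1, channels)
        self.attn = MDTA(channels, num_heads)
        self.norm2 = nn.GroupNorm(1, channels)
        self.ffn = GDFN(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.attn(self.norm1(x))
        return x + self.ffn(self.norm2(x))


# Quick shape check: one block on a 64x64 feature map with 48 channels.
if __name__ == "__main__":
    block = TransformerBlock(channels=48, num_heads=4)
    print(block(torch.randn(1, 48, 64, 64)).shape)  # torch.Size([1, 48, 64, 64])
```

The transposed (channel-wise) attention is what lets the generator use context from distant artifacts at a cost that stays linear in image size, which is the efficiency argument the abstract makes against pixel-wise attention.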
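The abstract specifies only that the discriminator is "a simple nine-layer network" trained adversarially against the generator. The sketch below is one hypothetical realization: nine convolutional layers with a PatchGAN-style single-channel output, updated alternately with the generator under an assumed L1 reconstruction term plus a weighted adversarial term. The layer widths, stride pattern, losses, and adv_weight are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical nine-layer discriminator and one adversarial training step
# (layout and loss weighting are illustrative, not the paper's).
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # adversarial loss (assumed; could be LSGAN etc.)
l1 = nn.L1Loss()              # pixel-wise reconstruction loss (assumed)


def make_discriminator(in_channels: int = 1) -> nn.Sequential:
    layers, ch = [], in_channels
    for i, width in enumerate([64, 64, 128, 128, 256, 256, 512, 512]):
        stride = 2 if i % 2 else 1  # downsample on every second layer
        layers += [nn.Conv2d(ch, width, 3, stride=stride, padding=1),
                   nn.LeakyReLU(0.2, inplace=True)]
        ch = width
    layers += [nn.Conv2d(ch, 1, 3, padding=1)]  # ninth conv: patch logits
    return nn.Sequential(*layers)


def train_step(G, D, opt_g, opt_d, sparse_ct, full_ct, adv_weight=1e-3):
    # Discriminator update: real full-view slices vs. detached fakes.
    fake = G(sparse_ct).detach()
    d_real, d_fake = D(full_ct), D(fake)
    loss_d = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: reconstruction term plus adversarial term.
    fake = G(sparse_ct)
    d_fake = D(fake)
    loss_g = l1(fake, full_ct) + adv_weight * bce(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```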
ISSN: 1746-8094, 1746-8108
DOI: 10.1016/j.bspc.2024.106297