Rigid and non-rigid motion artifact reduction in X-ray CT using attention module

Bibliographic Details
Published in: Medical Image Analysis, January 2021, Vol. 67, Article 101883
Authors: Ko, Youngjun; Moon, Seunghyuk; Baek, Jongduk; Shim, Hyunjung
Format: Article
Language: English
Online access: Full text
Description
Abstract:

Highlights:
• Existing methods are limited to specific motions or customized CT systems.
• We propose a new real-time method using residual learning and an attention module.
• Our model is a generalized framework to handle any CT setup and motion type.

Motion artifacts are a major factor that can degrade the diagnostic performance of computed tomography (CT) images. In particular, motion artifacts become considerably more severe when an imaging system requires a long scan time, such as in dental CT or cone-beam CT (CBCT) applications, where patients undergo both rigid and non-rigid motion. To address this problem, we propose a new real-time technique for motion artifact reduction that utilizes a deep residual network with an attention module. The attention module is designed to increase the model capacity by amplifying or attenuating the residual features according to their importance. We trained and evaluated the network on four benchmark datasets with rigid motions, or with both rigid and non-rigid motions, under a step-and-shoot fan-beam CT (FBCT) or a CBCT geometry. Each dataset provides pairs of motion-corrupted CT images and their ground-truth CT images. The strong modeling power of the proposed network allowed us to successfully handle motion artifacts from the two CT systems under various motion scenarios in real time, and the proposed model demonstrated clear performance benefits. In addition, we compared our model with Wasserstein generative adversarial network (WGAN)-based models and a deep residual network (DRN)-based model, which are among the most powerful techniques for CT denoising and for natural RGB image deblurring, respectively. Based on extensive analysis and comparisons using the four benchmark datasets, we confirmed that our model outperformed these competitors. Our benchmark datasets and implementation code are available at https://github.com/youngjun-ko/ct_mar_attention.
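The core idea described in the abstract, a residual network whose residual features are amplified or attenuated by an attention module, can be illustrated with a short PyTorch sketch. The following is a minimal illustration assuming a squeeze-and-excitation-style channel gate inside a residual block; the class names and hyperparameters (e.g., `reduction=16`, 64 channels) are hypothetical choices for this sketch, not taken from the paper. The authors' actual architecture is available at the repository linked above.

```python
# Minimal sketch: a residual block whose residual branch is rescaled by a
# channel-attention gate. This assumes a squeeze-and-excitation-style design;
# it is an illustration, not the authors' exact implementation.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Amplify or attenuate feature channels according to learned importance."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global spatial summary per channel
        self.fc = nn.Sequential(             # bottleneck MLP producing gates in (0, 1)
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))      # rescale each channel by its gate

class AttentionResidualBlock(nn.Module):
    """Residual learning block with an attention-gated residual branch."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.attention = ChannelAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity skip connection plus the attention-weighted residual features.
        return x + self.attention(self.body(x))

if __name__ == "__main__":
    block = AttentionResidualBlock(64)
    features = torch.randn(1, 64, 128, 128)   # dummy CT feature map
    print(block(features).shape)              # torch.Size([1, 64, 128, 128])
```

In this sketch, the sigmoid gates let the network suppress residual channels that carry little artifact information and emphasize those that do, which matches the abstract's description of increasing model capacity by weighting residual features by importance.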
ISSN: 1361-8415, 1361-8423
DOI: 10.1016/j.media.2020.101883