TT U-Net: Temporal Transformer U-Net for Motion Artifact Reduction Using PAD (Pseudo All-Phase Clinical-Dataset) in Cardiac CT
Published in: IEEE Transactions on Medical Imaging, December 2023, Vol. 42 (12), pp. 3805-3816
Main authors: , , , , , ,
Format: Article
Language: English
Abstract: Involuntary motion of the heart remains a challenge for cardiac computed tomography (CT) imaging. Although electrocardiogram (ECG) gating is widely adopted to perform CT scans at the quasi-quiescent cardiac phase, motion-induced artifacts remain unavoidable for patients with high heart rates or irregular rhythms. Dynamic cardiac CT, which provides functional information about the heart, suffers from even more severe motion artifacts. In this paper, we develop a deep learning-based framework for motion artifact reduction in dynamic cardiac CT. First, we build PAD (Pseudo All-phase clinical-Dataset) based on a whole-heart motion model and single-phase cardiac CT images. This dataset provides dynamic CT images with realistic-looking motion artifacts that support the development of data-driven approaches. Second, given its dynamic nature, we formulate motion artifact reduction as a video deblurring task. A novel TT U-Net (Temporal Transformer U-Net) is proposed to extract spatiotemporal features for better motion artifact reduction. The self-attention mechanism along the temporal dimension effectively encodes motion information and thus aids image recovery. Experiments show that the TT U-Net trained on the proposed PAD performs well on clinical CT scans, which demonstrates the effectiveness and strong generalization ability of our method. The source code, trained models, and a dynamic demo will be available at https://github.com/ivy9092111111/TT-U-Net.
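The abstract's key mechanism, self-attention along the temporal dimension of a dynamic CT sequence, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class name, layer sizes, residual placement, and the usage shapes are assumptions, chosen only to show attention running across the T cardiac phases independently at every spatial location.

```python
import torch
import torch.nn as nn

class TemporalSelfAttention(nn.Module):
    """Illustrative sketch: attend across the T frames of a (B, T, C, H, W)
    feature volume, independently at each spatial location."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Standard multi-head attention; channels and num_heads are assumed values.
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = x.shape
        # Fold spatial positions into the batch so each pixel's time series
        # becomes one attention sequence of length T.
        seq = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        attn_out, _ = self.attn(seq, seq, seq)
        seq = self.norm(seq + attn_out)  # residual connection + layer norm
        # Restore the (B, T, C, H, W) layout.
        return seq.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)

# Usage (hypothetical shapes): 8 cardiac phases, 64-channel features at 32x32.
feats = torch.randn(2, 8, 64, 32, 32)
out = TemporalSelfAttention(channels=64)(feats)
print(out.shape)  # torch.Size([2, 8, 64, 32, 32])
```

Folding the spatial grid into the batch keeps the attention cost linear in the number of pixels while still letting each location aggregate motion information across all cardiac phases, which matches the abstract's claim that temporal self-attention encodes motion to aid image recovery.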
ISSN: 0278-0062 (print), 1558-254X (electronic)
DOI: 10.1109/TMI.2023.3310933