RegFormer: A Local-Nonlocal Regularization-Based Model for Sparse-View CT Reconstruction
Published in: IEEE Transactions on Radiation and Plasma Medical Sciences, 2024-02, Vol. 8 (2), pp. 184-194
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Sparse-view computed tomography (CT) is one of the primary means of reducing radiation risk. However, sparse-view CT reconstructions produced by classic analytical methods are usually contaminated by severe artifacts. With carefully designed regularization terms, iterative reconstruction (IR) algorithms can produce promising results. Aided by powerful deep learning techniques, regularization terms learned with convolutional neural networks (CNNs) have attracted much attention and can further improve performance. In this article, to further enhance the performance of existing learnable regularization-based networks, we propose a learnable local-nonlocal regularization-based model called RegFormer for sparse-view CT reconstruction. Specifically, we unroll the iterative scheme into a neural network and replace the gradients of handcrafted regularization terms with learnable kernels. Convolution layers are used to learn the gradient of the local regularization, yielding strong denoising performance. In addition, transformer-based encoders and decoders incorporate a learned nonlocal prior into the model, preserving structures and details. To enhance the ability to extract deep features, we propose an iteration transmission (IT) module that further improves the efficiency of each iteration. Experimental results show that the proposed RegFormer outperforms several state-of-the-art methods in artifact reduction and detail preservation.
ISSN: 2469-7311, 2469-7303
DOI: 10.1109/TRPMS.2023.3281148
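The abstract describes the standard unrolled-regularization recipe: each network stage mimics one gradient step on the data-fidelity term plus the gradients of a learned local (convolutional) regularizer and a learned nonlocal (transformer-based) regularizer. The sketch below is a minimal, generic PyTorch illustration of one such stage, not the authors' RegFormer code: the module names (`LocalRegGrad`, `NonlocalRegGrad`, `UnrolledStep`), layer sizes, patch size, and the forward/adjoint projection operators passed in as callables are all assumptions for illustration, and the paper's iteration transmission (IT) module is omitted.

```python
# Hypothetical sketch of one unrolled iteration for regularization-based
# sparse-view CT reconstruction. Not the RegFormer implementation; names and
# hyperparameters are illustrative only.
import torch
import torch.nn as nn


class LocalRegGrad(nn.Module):
    """Learned gradient of a local regularizer, realized with small conv kernels."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


class NonlocalRegGrad(nn.Module):
    """Learned nonlocal prior: image patches attend to each other via self-attention."""
    def __init__(self, patch=8, dim=64, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)      # patchify
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.unembed = nn.ConvTranspose2d(dim, 1, kernel_size=patch, stride=patch)

    def forward(self, x):
        tokens = self.embed(x)                          # (B, D, H/p, W/p)
        b, d, h, w = tokens.shape
        seq = tokens.flatten(2).transpose(1, 2)         # (B, HW/p^2, D)
        seq = self.encoder(seq)                         # nonlocal interactions
        tokens = seq.transpose(1, 2).reshape(b, d, h, w)
        return self.unembed(tokens)                     # back to image space


class UnrolledStep(nn.Module):
    """One unrolled step:
    x_{k+1} = x_k - a * A^T(A x_k - y) - R_local(x_k) - R_nonlocal(x_k),
    where A / A^T (projection and backprojection) are supplied as callables."""
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))     # learnable step size
        self.local = LocalRegGrad()
        self.nonlocal_ = NonlocalRegGrad()

    def forward(self, x, y, A, At):
        data_grad = At(A(x) - y)                        # gradient of 0.5*||Ax - y||^2
        return x - self.step * data_grad - self.local(x) - self.nonlocal_(x)


# Toy shape check with an identity "projector" standing in for the CT operator:
if __name__ == "__main__":
    step = UnrolledStep()
    x0 = torch.zeros(1, 1, 64, 64)                      # initial image estimate
    y = torch.randn(1, 1, 64, 64)                       # measured data (placeholder)
    x1 = step(x0, y, A=lambda img: img, At=lambda s: s)
    print(x1.shape)                                     # torch.Size([1, 1, 64, 64])
```

In an unrolled network of this kind, several such stages are stacked with independent (or shared) weights and trained end-to-end against reference reconstructions; in the paper, the IT module additionally passes deep features between iterations, which the sketch above does not model.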