A Deep Information Sharing Network for Multi-Contrast Compressed Sensing MRI Reconstruction
Published in: | IEEE Transactions on Image Processing, 2019-12, Vol. 28 (12), p. 6141-6153 |
---|---|
Authors: | , , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Compressed sensing (CS) theory can accelerate multi-contrast magnetic resonance imaging (MRI) by sampling fewer measurements within each contrast. However, conventional optimization-based reconstruction models suffer from several limitations, including a strict assumption of shared sparse support, time-consuming optimization, and "shallow" models that struggle to encode the patterns contained in massive MRI data. In this paper, we propose the first deep learning model for multi-contrast CS-MRI reconstruction. We achieve information sharing through feature sharing units, which significantly reduces the number of model parameters. Each feature sharing unit is combined with a data fidelity unit to form an inference block; these blocks are then cascaded with dense connections, allowing efficient information transmission across different depths of the network. Experiments on various multi-contrast MRI datasets show that the proposed model outperforms state-of-the-art single-contrast and multi-contrast MRI methods in both accuracy and efficiency. We demonstrate that improved reconstruction quality can benefit subsequent medical image analysis. Furthermore, the robustness of the proposed model to misregistration shows its potential in real MRI applications. |
---|---|
ISSN: | 1057-7149, 1941-0042 |
DOI: | 10.1109/TIP.2019.2925288 |
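
The abstract describes a network built from cascaded inference blocks, each pairing a feature sharing unit with a data fidelity unit and linked by dense connections. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of that block structure, under assumed layer sizes, a two-contrast input, and a hard k-space data-consistency step. All class and function names here are hypothetical.

```python
import torch
import torch.nn as nn


def data_fidelity(x, y, mask):
    """Hard data-consistency step (one common variant, assumed here):
    keep the network output at unsampled k-space locations and restore
    the measured values y at sampled ones."""
    k = torch.fft.fft2(x)
    k = mask * y + (1 - mask) * k
    return torch.fft.ifft2(k).real


class FeatureSharingUnit(nn.Module):
    """Convolves the concatenated contrasts jointly, so features are
    shared between contrasts rather than learned per contrast."""

    def __init__(self, in_ch=2, feat=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, in_ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual refinement of both contrasts


class CascadedReconstructor(nn.Module):
    """Cascade of inference blocks. Each block fuses the outputs of all
    earlier blocks (dense connections), refines them with a feature
    sharing unit, and applies data fidelity to every contrast."""

    def __init__(self, n_blocks=3, n_contrasts=2):
        super().__init__()
        self.fuse = nn.ModuleList(
            nn.Conv2d((i + 1) * n_contrasts, n_contrasts, kernel_size=1)
            for i in range(n_blocks)
        )
        self.units = nn.ModuleList(
            FeatureSharingUnit(n_contrasts) for _ in range(n_blocks)
        )

    def forward(self, x0, y, mask):
        # x0: zero-filled reconstructions, shape (B, n_contrasts, H, W)
        # y:  measured k-space (complex), same shape; mask: sampling mask (H, W)
        outputs = [x0]
        for fuse, unit in zip(self.fuse, self.units):
            x = fuse(torch.cat(outputs, dim=1))   # dense connection across blocks
            x = unit(x)                            # feature sharing unit
            x = torch.stack(
                [data_fidelity(x[:, c], y[:, c], mask) for c in range(x.shape[1])],
                dim=1,
            )                                      # data fidelity unit
            outputs.append(x)
        return outputs[-1]


# Illustrative usage with random data.
net = CascadedReconstructor()
x0 = torch.randn(1, 2, 64, 64)
y = torch.fft.fft2(torch.randn(1, 2, 64, 64))
mask = (torch.rand(64, 64) < 0.3).float()
print(net(x0, y, mask).shape)  # torch.Size([1, 2, 64, 64])
```

In this sketch the dense connections are realized by concatenating all previous block outputs and reducing them with a 1x1 convolution before each feature sharing unit; the paper's actual layer configuration and parameter-sharing scheme may differ.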