DSFormer: A Dual-domain Self-supervised Transformer for Accelerated Multi-contrast MRI Reconstruction
Saved in:
Main authors: , , , , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Multi-contrast MRI (MC-MRI) captures multiple complementary imaging modalities to aid in radiological decision-making. Given the need for lowering the time cost of multiple acquisitions, current deep accelerated MRI reconstruction networks focus on exploiting the redundancy between multiple contrasts. However, existing works are largely supervised with paired data and/or prohibitively expensive fully-sampled MRI sequences. Further, reconstruction networks typically rely on convolutional architectures, which are limited in their capacity to model long-range interactions and may lead to suboptimal recovery of fine anatomical detail. To these ends, we present a dual-domain self-supervised transformer (DSFormer) for accelerated MC-MRI reconstruction. DSFormer develops a deep conditional cascade transformer (DCCT) consisting of several cascaded Swin transformer reconstruction networks (SwinRN) trained under two deep conditioning strategies to enable MC-MRI information sharing. We further present a dual-domain (image and k-space) self-supervised learning strategy for DCCT to alleviate the costs of acquiring fully sampled training data. DSFormer generates high-fidelity reconstructions that experimentally outperform current fully-supervised baselines. Moreover, we find that DSFormer achieves nearly the same performance when trained either with full supervision or with our proposed dual-domain self-supervision.
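
The dual-domain (image and k-space) self-supervision described in the abstract can be read as follows: the acquired undersampled k-space is partitioned into two disjoint subsets, an image is reconstructed from each, and the training signal comes from cross-consistency with the held-out k-space samples plus agreement between the two reconstructions in the image domain. The sketch below is an assumption-laden illustration of that idea, not the paper's exact training objective; the `recon_net` placeholder, the single-coil setting, and the 50/50 split are hypothetical.

```python
import torch
import torch.fft as fft


def dual_domain_self_supervised_loss(recon_net, kspace_under, mask, split_ratio=0.5):
    """Illustrative dual-domain self-supervised loss (sketch, not the paper's objective).

    recon_net:     maps a zero-filled complex image to a reconstructed complex image
    kspace_under:  undersampled complex k-space (only locations in `mask` are acquired)
    mask:          float sampling mask (1 = acquired, 0 = not acquired)
    """
    # Randomly partition the acquired k-space locations into two disjoint masks.
    rand = torch.rand_like(mask)
    mask_a = mask * (rand < split_ratio).float()
    mask_b = mask - mask_a

    def zero_filled(m):
        # Inverse FFT of the masked k-space gives a zero-filled image.
        return fft.ifft2(kspace_under * m, norm="ortho")

    # Reconstruct an image from each k-space partition independently.
    img_a = recon_net(zero_filled(mask_a))
    img_b = recon_net(zero_filled(mask_b))

    # k-space loss: each reconstruction should predict the samples
    # that were held out from its own input (cross-consistency).
    k_a = fft.fft2(img_a, norm="ortho")
    k_b = fft.fft2(img_b, norm="ortho")
    loss_k = (torch.abs(k_a * mask_b - kspace_under * mask_b).mean()
              + torch.abs(k_b * mask_a - kspace_under * mask_a).mean())

    # Image-domain loss: the two reconstructions should agree with each other.
    loss_img = torch.abs(img_a - img_b).mean()

    return loss_k + loss_img
```

The cross-prediction of held-out k-space samples follows the general pattern of self-supervised MRI reconstruction from undersampled data alone, while the image-domain term couples the two reconstructions; how DSFormer combines these terms and shares information across contrasts is specified in the paper itself.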
DOI: 10.48550/arxiv.2201.10776