Disentangle domain features for cross-modality cardiac image segmentation

Bibliographic details
Published in: Medical Image Analysis, Vol. 71, July 2021, Article 102078
Authors: Pei, Chenhao; Wu, Fuping; Huang, Liqin; Zhuang, Xiahai
Format: Article
Language: English
Online access: Full text
Description
Abstract:
• Disentangling domain features for cross-modality cardiac image segmentation.
• Implementing unsupervised domain adaptation at both the feature and image levels.
• A zero loss to enhance the characteristics of the domain-specific features (DSFs).
• Embedding a self-attention module into the UDA framework.

Unsupervised domain adaptation (UDA) generally learns a mapping that aligns the distributions of the source and target domains. The learned mapping can boost the performance of a model on target data whose labels are unavailable for training. Previous UDA methods mainly focus on domain-invariant features (DIFs) without considering domain-specific features (DSFs), which can serve as complementary information to constrain the model. In this work, we propose a new UDA framework for cross-modality image segmentation. The framework first disentangles each domain into DIFs and DSFs. To enhance the representation of the DIFs, self-attention modules are used in the encoder, allowing attention-driven, long-range dependency modeling for image generation tasks. Furthermore, a zero loss is minimized so that the information of the target (source) DSFs contained in the source (target) images is as close to zero as possible. These features are then iteratively decoded and encoded twice to maintain the consistency of the anatomical structure. To improve the quality of the generated images and segmentation results, several discriminators are introduced for adversarial learning. Finally, with the source data and their DIFs, we train a segmentation network that can be applied to target images. We validated the proposed framework for cross-modality cardiac segmentation on two public datasets; the results showed that our method delivers promising performance and compares favorably with state-of-the-art approaches in terms of segmentation accuracy. The source code of this work will be released via https://zmiclab.github.io/projects.html once this manuscript is accepted for publication.
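To make the two components named in the highlights concrete, below is a minimal PyTorch-style sketch of a disentangling encoder with a SAGAN-style self-attention block and the zero loss applied to the wrong-domain DSFs. This is not the authors' released code: all module names, channel sizes, and the choice of an L1 penalty are illustrative assumptions; see https://zmiclab.github.io/projects.html for the official implementation.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention for long-range dependency modeling.
    A common formulation; the paper's exact variant may differ."""
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)
        self.k = nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # B x HW x C'
        k = self.k(x).flatten(2)                   # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)        # B x HW x HW
        v = self.v(x).flatten(2)                   # B x C x HW
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

class DisentangleEncoder(nn.Module):
    """Toy encoder splitting an image into one DIF map and two DSF maps
    (one per domain). Architecture details are assumptions."""
    def __init__(self, in_ch=1, ch=16):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1),
            nn.ReLU(inplace=True),
            SelfAttention2d(ch),
        )
        self.dif_head = nn.Conv2d(ch, ch, 3, padding=1)      # domain-invariant
        self.dsf_src_head = nn.Conv2d(ch, ch, 3, padding=1)  # source-specific
        self.dsf_tgt_head = nn.Conv2d(ch, ch, 3, padding=1)  # target-specific

    def forward(self, x):
        h = self.shared(x)
        return self.dif_head(h), self.dsf_src_head(h), self.dsf_tgt_head(h)

def zero_loss(dsf_wrong_domain):
    """Push the other domain's DSFs, extracted from this image, toward zero.
    An L1 penalty is one plausible reading of 'information close to zero'."""
    return dsf_wrong_domain.abs().mean()

encoder = DisentangleEncoder()
x_src = torch.randn(2, 1, 32, 32)  # batch of source-domain images (e.g. MR)
x_tgt = torch.randn(2, 1, 32, 32)  # batch of target-domain images (e.g. CT)

_, _, dsf_tgt_in_src = encoder(x_src)  # target DSFs found in source images
_, dsf_src_in_tgt, _ = encoder(x_tgt)  # source DSFs found in target images

loss_zero = zero_loss(dsf_tgt_in_src) + zero_loss(dsf_src_in_tgt)
loss_zero.backward()  # in practice combined with adversarial/seg losses
```

In the full framework the disentangled features would additionally be decoded and re-encoded twice for anatomical consistency and passed through several discriminators for adversarial learning; those stages are omitted here for brevity.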
ISSN: 1361-8415
eISSN: 1361-8423
DOI: 10.1016/j.media.2021.102078