M-GenSeg: Domain Adaptation For Target Modality Tumor Segmentation With Annotation-Efficient Supervision
Format: Article
Language: eng
Abstract: Automated medical image segmentation using deep neural networks typically requires substantial supervised training. However, these models fail to generalize well across different imaging modalities. This shortcoming, amplified by the limited availability of expert-annotated data, has been hampering the deployment of such methods at a larger scale across modalities. To address these issues, we propose M-GenSeg, a new semi-supervised generative training strategy for cross-modality tumor segmentation on unpaired bi-modal datasets. With the addition of known healthy images, an unsupervised objective encourages the model to disentangle tumors from the background, which parallels the segmentation task. Then, by teaching the model to convert images across modalities, we leverage the available pixel-level annotations from the source modality to enable segmentation in the unannotated target modality. We evaluated the performance on a brain tumor segmentation dataset composed of four different contrast sequences from the public BraTS 2020 challenge data. We report consistent improvement in Dice scores over state-of-the-art domain-adaptive baselines on the unannotated target modality. Unlike the prior art, M-GenSeg also introduces the ability to train with a partially annotated source modality.
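The abstract describes the training objective only at a high level. Purely as an illustration, the sketch below shows how a combined objective of this kind (supervised segmentation on the annotated source modality, unpaired cross-modality translation with cycle consistency, and an unsupervised healthy/diseased disentanglement term) could be wired together in PyTorch. All networks, loss weights, tensor shapes, and the simplified L1 stand-ins are hypothetical placeholders chosen to keep the example self-contained and runnable; they are not taken from the paper.

```python
# Minimal sketch of a multi-term objective, assuming hypothetical tiny networks
# and loss weights. Not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_net(in_ch, out_ch):
    # Stand-in for a real encoder-decoder; a single conv keeps the sketch short.
    return nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

seg_head   = tiny_net(1, 1)   # tumor segmentation in the source modality
gen_s2t    = tiny_net(1, 1)   # source -> target modality translation
gen_t2s    = tiny_net(1, 1)   # target -> source modality translation
remove_tum = tiny_net(1, 1)   # diseased -> "pseudo-healthy" image (disentanglement)

# Hypothetical unpaired batches: annotated source images, unannotated target
# images, and known-healthy images.
x_src = torch.rand(2, 1, 64, 64)
y_src = torch.randint(0, 2, (2, 1, 64, 64)).float()
x_tgt = torch.rand(2, 1, 64, 64)
x_healthy = torch.rand(2, 1, 64, 64)

# 1) Supervised segmentation loss on the annotated source modality.
loss_seg = F.binary_cross_entropy_with_logits(seg_head(x_src), y_src)

# 2) Cycle consistency for unpaired cross-modality translation
#    (a full model would also include adversarial terms).
loss_cycle = (F.l1_loss(gen_t2s(gen_s2t(x_src)), x_src) +
              F.l1_loss(gen_s2t(gen_t2s(x_tgt)), x_tgt))

# 3) Unsupervised disentanglement: a pseudo-healthy version of a diseased image
#    should resemble healthy images, so the removed residual parallels the tumor
#    mask. Caricatured here as an L1 term toward unpaired healthy images.
loss_healthy = F.l1_loss(remove_tum(x_src), x_healthy)

# Total objective; the weights are illustrative only.
loss = loss_seg + 10.0 * loss_cycle + 1.0 * loss_healthy
loss.backward()
```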
DOI: 10.48550/arxiv.2212.07276