METGAN: Generative Tumour Inpainting and Modality Synthesis in Light Sheet Microscopy
Format: Article
Language: English
Abstract: Novel multimodal imaging methods can generate extensive, high-resolution datasets for preclinical research. Yet a massive lack of annotations prevents the broad use of deep learning to analyze such data. So far, existing generative models have failed to mitigate this problem because of frequent labeling errors. In this paper, we introduce a novel generative method that leverages real anatomical information to generate realistic image-label pairs of tumours. We construct a dual-pathway generator, for the anatomical image and the label, trained in a cycle-consistent setup and constrained by an independent, pretrained segmentor. The generated images yield significant quantitative improvement over existing methods. To validate the quality of the synthesis, we train segmentation networks on a dataset augmented with the synthetic data, substantially improving segmentation over the baseline.
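The abstract describes a cycle-consistent generator constrained by a frozen, pretrained segmentor. A minimal NumPy sketch of what such a combined objective could look like is given below; all names, weights, and the exact loss form are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def l1_loss(a, b):
    """Mean absolute error, a common choice for cycle-consistency terms."""
    return float(np.mean(np.abs(a - b)))

def combined_loss(real_img, cycled_img, gen_label, segmentor_pred,
                  lambda_cyc=10.0, lambda_seg=1.0):
    """Illustrative combination of two terms from the described setup:
    - cycle term: the image mapped through both generator pathways and back
      should reconstruct the original (cycled_img vs. real_img);
    - segmentor term: the generated label should agree with what an
      independent, pretrained (frozen) segmentor predicts on the
      generated image (gen_label vs. segmentor_pred).
    The weights lambda_cyc and lambda_seg are hypothetical."""
    cycle_term = l1_loss(real_img, cycled_img)
    seg_term = l1_loss(gen_label, segmentor_pred)
    return lambda_cyc * cycle_term + lambda_seg * seg_term
```

In a real training loop, the segmentor's parameters would stay frozen so that it acts as an external consistency check on the generated image-label pairs rather than co-adapting with the generator.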
DOI: 10.48550/arXiv.2104.10993