RadImageGAN -- A Multi-modal Dataset-Scale Generative AI for Medical Imaging
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Deep learning in medical imaging often requires large-scale, high-quality data or initialization with suitably pre-trained weights. However, medical datasets are limited by data availability, domain-specific knowledge, and privacy concerns, and the creation of large and diverse radiologic databases like RadImageNet is highly resource-intensive. To address these limitations, we introduce RadImageGAN, the first multi-modal radiologic data generator, developed by training StyleGAN-XL on the real RadImageNet dataset of 102,774 patients. RadImageGAN can generate high-resolution synthetic medical imaging datasets across 12 anatomical regions and 130 pathological classes in 3 modalities. Furthermore, we demonstrate that RadImageGAN generators can be used with BigDatasetGAN to generate multi-class, pixel-wise annotated paired synthetic images and masks for diverse downstream segmentation tasks with minimal manual annotation. We show that synthetic auto-labeled data from RadImageGAN can significantly improve performance on four diverse downstream segmentation datasets by augmenting real training data and/or providing pre-trained weights for fine-tuning (a sketch of this augmentation step follows the record below). This shows that RadImageGAN combined with BigDatasetGAN can improve model performance and address data scarcity while reducing the annotation resources needed for segmentation tasks.
DOI: 10.48550/arxiv.2312.05953
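
The abstract's augmentation step amounts to training a segmentation model on the union of scarce real annotated scans and GAN-generated, auto-labeled image/mask pairs. A minimal sketch of that data-mixing step is below, assuming PyTorch; the `PairedImageMaskDataset` class and `make_dummy_pairs` helper are hypothetical stand-ins, since the paper's actual RadImageGAN/BigDatasetGAN tooling and data formats are not shown in this record.

```python
# Hedged sketch: combining real and synthetic (image, mask) pairs for
# segmentation training, as described in the RadImageGAN abstract.
# Paths, class counts, and helpers here are illustrative assumptions.
import torch
from torch.utils.data import Dataset, ConcatDataset, DataLoader

class PairedImageMaskDataset(Dataset):
    """Minimal dataset over pre-exported (image, mask) tensor pairs."""
    def __init__(self, pairs):
        self.pairs = pairs  # list of (image [C,H,W], mask [H,W]) tensors

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        return self.pairs[idx]

def make_dummy_pairs(n, channels=1, size=256, num_classes=4):
    # Stand-in for (a) real annotated scans and (b) RadImageGAN +
    # BigDatasetGAN samples; in practice these would be loaded from disk.
    return [(torch.randn(channels, size, size),
             torch.randint(0, num_classes, (size, size)))
            for _ in range(n)]

real_ds = PairedImageMaskDataset(make_dummy_pairs(100))       # scarce real data
synthetic_ds = PairedImageMaskDataset(make_dummy_pairs(400))  # auto-labeled synthetic data

# Augmentation as in the abstract: train on real plus synthetic pairs.
# Alternatively, pre-train on synthetic_ds alone, then fine-tune on real_ds.
train_loader = DataLoader(ConcatDataset([real_ds, synthetic_ds]),
                          batch_size=8, shuffle=True)

for images, masks in train_loader:
    # ... feed into any segmentation model (e.g., a U-Net) ...
    break
```

The same two datasets also support the paper's other strategy: running a pre-training pass over `synthetic_ds` only, then fine-tuning the resulting weights on `real_ds`.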