RadImageGAN -- A Multi-modal Dataset-Scale Generative AI for Medical Imaging

Deep learning in medical imaging often requires large-scale, high-quality data or initiation with suitably pre-trained weights. However, medical datasets are limited by data availability, domain-specific knowledge, and privacy concerns, and the creation of large and diverse radiologic databases like RadImageNet is highly resource-intensive. To address these limitations, we introduce RadImageGAN, the first multi-modal radiologic data generator, which was developed by training StyleGAN-XL on the real RadImageNet dataset of 102,774 patients. RadImageGAN can generate high-resolution synthetic medical imaging datasets across 12 anatomical regions and 130 pathological classes in 3 modalities. Furthermore, we demonstrate that RadImageGAN generators can be utilized with BigDatasetGAN to generate multi-class pixel-wise annotated paired synthetic images and masks for diverse downstream segmentation tasks with minimal manual annotation. We showed that using synthetic auto-labeled data from RadImageGAN can significantly improve performance on four diverse downstream segmentation datasets by augmenting real training data and/or developing pre-trained weights for fine-tuning. This shows that RadImageGAN combined with BigDatasetGAN can improve model performance and address data scarcity while reducing the resources needed for annotations for segmentation tasks.
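The abstract describes augmenting a real segmentation training set with synthetic, auto-labeled (image, mask) pairs produced by RadImageGAN and BigDatasetGAN. The following is a minimal sketch of that mixing step only; the function name, the toy data, and the `synthetic_ratio` parameter are hypothetical placeholders, not the authors' actual pipeline or API.

```python
# Hypothetical sketch of mixing real and synthetic auto-labeled
# (image, mask) pairs into one training list. Toy data stands in for
# real images/masks; no claim is made about RadImageGAN's actual API.
import random

def mix_datasets(real_pairs, synthetic_pairs, synthetic_ratio=0.5, seed=0):
    """Keep all real pairs and add roughly `synthetic_ratio` * len(real)
    sampled synthetic pairs, then shuffle the combined training list."""
    rng = random.Random(seed)
    n_synth = int(len(real_pairs) * synthetic_ratio)
    sampled = rng.sample(synthetic_pairs, min(n_synth, len(synthetic_pairs)))
    mixed = list(real_pairs) + sampled
    rng.shuffle(mixed)
    return mixed

# Toy stand-ins: each "pair" is just a labeled tuple here.
real = [(f"real_img_{i}", f"real_mask_{i}") for i in range(8)]
synthetic = [(f"syn_img_{i}", f"syn_mask_{i}") for i in range(100)]

train_set = mix_datasets(real, synthetic, synthetic_ratio=0.5)
print(len(train_set))  # 8 real + 4 synthetic = 12
```

In practice the paper reports two uses of such synthetic data: direct augmentation of the real training set (as sketched) and pre-training weights that are later fine-tuned on real data.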

Bibliographic Details
Published in: arXiv.org, 2023-12
Main authors: Liu, Zelong; Zhou, Alexander; Yang, Arnold; Yilmaz, Alara; Yoo, Maxwell; Sullivan, Mikey; Zhang, Catherine; Grant, James; Li, Daiqing; Fayad, Zahi A; Huver, Sean; Deyer, Timothy; Mei, Xueyan
Format: Article
Language: English
Online Access: Full text
EISSN: 2331-8422
Subjects: Annotations; Datasets; Generative artificial intelligence; Image resolution; Machine learning; Medical imaging; Performance enhancement; Synthetic data