Self-improving generative foundation model for synthetic medical image generation and clinical applications

In many clinical and research settings, the scarcity of high-quality medical imaging datasets has hampered the potential of artificial intelligence (AI) clinical applications. This issue is particularly pronounced in less common conditions, underrepresented populations and emerging imaging modalities, where the availability of diverse and comprehensive datasets is often inadequate. To address this challenge, we introduce a unified medical image-text generative model called MINIM that is capable of synthesizing medical images of various organs across various imaging modalities based on textual instructions. Clinician evaluations and rigorous objective measurements validate the high quality of MINIM's synthetic images. MINIM exhibits an enhanced generative capability when presented with previously unseen data domains, demonstrating its potential as a generalist medical AI (GMAI). Our findings show that MINIM's synthetic images effectively augment existing datasets, boosting performance across multiple medical applications such as diagnostics, report generation and self-supervised learning. On average, MINIM enhances performance by 12% for ophthalmic, 15% for chest, 13% for brain and 17% for breast-related tasks. Furthermore, we demonstrate MINIM's potential clinical utility in the accurate prediction of HER2-positive breast cancer from MRI images. Using a large retrospective simulation analysis, we demonstrate MINIM's clinical potential by accurately identifying targeted therapy-sensitive EGFR mutations using lung cancer computed tomography images, which could potentially lead to improved 5-year survival rates. Although these results are promising, further validation and refinement in more diverse and prospective settings would greatly enhance the model's generalizability and robustness.

Detailed Description

Bibliographic Details
Published in: Nature Medicine, 2024-12
Main authors: Wang, Jinzhuo; Wang, Kai; Yu, Yunfang; Lu, Yuxing; Xiao, Wenchao; Sun, Zhuo; Liu, Fei; Zou, Zixing; Gao, Yuanxu; Yang, Lei; Zhou, Hong-Yu; Miao, Hanpei; Zhao, Wenting; Huang, Lisha; Zeng, Lingchao; Guo, Rui; Chong, Ieng; Deng, Boyu; Cheng, Linling; Chen, Xiaoniao; Luo, Jing; Zhu, Meng-Hua; Baptista-Hon, Daniel; Monteiro, Olivia; Li, Ming; Ke, Yu; Li, Jiahui; Zeng, Simiao; Guan, Taihua; Zeng, Jin; Xue, Kanmin; Oermann, Eric; Luo, Huiyan; Yin, Yun; Zhang, Kang; Qu, Jia
Format: Article
Language: English
Online access: Full text
DOI: 10.1038/s41591-024-03359-y
PMID: 39663467
Publication date: 2024-12-11
Publisher country: United States
Rights: 2024. The Author(s), under exclusive licence to Springer Nature America, Inc.
Peer reviewed: yes
ISSN: 1546-170X
Source: Nature Journals Online; SpringerLink Journals - AutoHoldings