Multimodal CT Image Synthesis Using Unsupervised Deep Generative Adversarial Networks for Stroke Lesion Segmentation


Published in: Electronics (Basel), 2022-08, Vol.11 (16), p.2612
Main authors: Wang, Suzhe; Zhang, Xueying; Hui, Haisheng; Li, Fenglian; Wu, Zelin
Format: Article
Language: English
Online access: Full text
description Deep learning-based techniques can obtain high precision for multimodal stroke segmentation tasks. However, this performance often requires a large number of training examples, and existing data-extension approaches for segmentation are limited in their ability to create realistic images. To overcome these limitations, an unsupervised adversarial data augmentation mechanism (UTC-GAN) is developed to synthesize multimodal computed tomography (CT) brain scans. In our approach, CT sample generation and cross-modality translation differentiation are accomplished simultaneously by integrating a Siamesed auto-encoder architecture into the generative adversarial network. In addition, a Gaussian mixture translation module is proposed, which incorporates a translation loss to learn an intrinsic mapping between the latent space and the multimodal translation function. Finally, qualitative and quantitative experiments show that UTC-GAN significantly improves generation ability. The stroke dataset enriched by the proposed model also yields a clear improvement in segmentation accuracy compared with current competing unsupervised models.
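The abstract describes the architecture only at a high level. As a purely illustrative aid, the PyTorch sketch below shows one generic way a weight-shared ("Siamese") encoder over two CT modalities and a latent translation penalty could be wired together. It is an assumption-laden toy, not the authors' UTC-GAN: the class names, layer sizes, dummy modalities, and the simplified mean-squared translation_loss are all hypothetical, and the adversarial discriminator and Gaussian mixture modelling are omitted.

import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    # Weight-shared ("Siamese") encoder applied to slices from both CT modalities.
    def __init__(self, in_ch=1, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # Maps a latent code back to a synthetic CT slice.
    def __init__(self, latent_dim=128, out_ch=1, size=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64 * (size // 4) ** 2), nn.ReLU(),
            nn.Unflatten(1, (64, size // 4, size // 4)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def translation_loss(z_src, z_tgt):
    # Toy stand-in for a cross-modality translation penalty: pull the latent
    # codes of corresponding slices from the two modalities together.
    return torch.mean((z_src - z_tgt) ** 2)

# Usage sketch: encode two registered CT modalities with the shared encoder,
# penalise their latent mismatch, and decode one code into a synthetic slice.
enc, dec = SharedEncoder(), Decoder()
cta = torch.randn(4, 1, 64, 64)   # dummy stand-in for one CT modality
ncct = torch.randn(4, 1, 64, 64)  # dummy stand-in for the other modality
z_a, z_n = enc(cta), enc(ncct)
fake = dec(z_a)
loss = translation_loss(z_a, z_n)

In a full GAN setup this reconstruction/translation term would be combined with an adversarial loss from a discriminator; the sketch keeps only the parts needed to show the shared-encoder idea.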
doi 10.3390/electronics11162612
identifier ISSN: 2079-9292
source MDPI - Multidisciplinary Digital Publishing Institute; EZB-FREE-00999 freely available EZB journals
subjects Accuracy
Blood
Coders
Computed tomography
CT imaging
Diagnosis
Generative adversarial networks
Image processing
Image segmentation
Machine learning
Medical imaging
Methods
Neural networks
Semantics
Stroke
Stroke (Disease)
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-22T13%3A41%3A13IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_proqu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Multimodal%20CT%20Image%20Synthesis%20Using%20Unsupervised%20Deep%20Generative%20Adversarial%20Networks%20for%20Stroke%20Lesion%20Segmentation&rft.jtitle=Electronics%20(Basel)&rft.au=Wang,%20Suzhe&rft.date=2022-08-01&rft.volume=11&rft.issue=16&rft.spage=2612&rft.pages=2612-&rft.issn=2079-9292&rft.eissn=2079-9292&rft_id=info:doi/10.3390/electronics11162612&rft_dat=%3Cgale_proqu%3EA745603314%3C/gale_proqu%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2706179851&rft_id=info:pmid/&rft_galeid=A745603314&rfr_iscdi=true