SynSeg-Net: Synthetic Segmentation Without Target Modality Ground Truth

Bibliographic Details
Published in: IEEE transactions on medical imaging, 2019-04, Vol. 38 (4), p. 1016-1025
Main authors: Huo, Yuankai; Xu, Zhoubing; Moon, Hyeonsoo; Bao, Shunxing; Assad, Albert; Moyo, Tamara K.; Savona, Michael R.; Abramson, Richard G.; Landman, Bennett A.
Format: Article
Language: English
Online access: Order full text
container_end_page 1025
container_issue 4
container_start_page 1016
container_title IEEE transactions on medical imaging
container_volume 38
creator Huo, Yuankai
Xu, Zhoubing
Moon, Hyeonsoo
Bao, Shunxing
Assad, Albert
Moyo, Tamara K.
Savona, Michael R.
Abramson, Richard G.
Landman, Bennett A.
description A key limitation of deep convolutional neural network (DCNN)-based image segmentation methods is their lack of generalizability. Manually traced training images are typically required when segmenting organs in a new imaging modality or from a distinct disease cohort. The manual effort can be alleviated if manually traced images in one imaging modality (e.g., MRI) can be used to train a segmentation network for another imaging modality (e.g., CT). In this paper, we propose an end-to-end synthetic segmentation network (SynSeg-Net) to train a segmentation network for a target imaging modality without manual labels in that modality. SynSeg-Net is trained using: 1) unpaired intensity images from the source and target modalities and 2) manual labels from the source modality only. SynSeg-Net is enabled by recent advances in cycle generative adversarial networks (CycleGAN) and DCNNs. We evaluate SynSeg-Net in two experiments: 1) MRI-to-CT synthetic segmentation of splenomegaly in abdominal images and 2) CT-to-MRI synthetic segmentation of total intracranial volume (TICV) in brain images. The proposed end-to-end approach achieved superior performance to two-stage methods. Moreover, SynSeg-Net achieved performance comparable to a traditional segmentation network trained with target-modality labels in certain scenarios. The source code of SynSeg-Net is publicly available.
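The description above outlines an end-to-end scheme: CycleGAN-style unpaired translation from the source to the target modality, with a segmentation network applied to the synthesized target-modality images and supervised by the source-modality labels. The sketch below illustrates that idea in PyTorch. It is a minimal sketch, not the authors' released implementation: the tiny networks (TinyGenerator, TinyDiscriminator, TinySegmenter), the cycle-loss weight of 10.0, and the random stand-in data are all illustrative assumptions; a real setup would use full generator/discriminator architectures and actual unpaired MRI/CT loaders.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # conv + instance norm + ReLU, the common CycleGAN-style building block
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.InstanceNorm2d(cout),
                         nn.ReLU(inplace=True))

class TinyGenerator(nn.Module):        # stand-in for a full ResNet generator
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):    # stand-in for a PatchGAN discriminator
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32),
                                 nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class TinySegmenter(nn.Module):        # stand-in for the segmentation network
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32),
                                 nn.Conv2d(32, n_classes, 1))
    def forward(self, x):
        return self.net(x)

G_src2tgt, G_tgt2src = TinyGenerator(), TinyGenerator()  # e.g., MRI->CT, CT->MRI
D_tgt = TinyDiscriminator()
S = TinySegmenter()
opt_g = torch.optim.Adam(list(G_src2tgt.parameters()) +
                         list(G_tgt2src.parameters()) +
                         list(S.parameters()), lr=2e-4)
gan_loss, cyc_loss, seg_loss = nn.MSELoss(), nn.L1Loss(), nn.CrossEntropyLoss()

# One illustrative generator/segmenter update on random stand-in data
# (the symmetric tgt->src path and the discriminator update are omitted).
src = torch.randn(2, 1, 64, 64)               # source-modality images (labeled)
src_lbl = torch.randint(0, 2, (2, 64, 64))    # manual labels, source modality only
fake_tgt = G_src2tgt(src)                     # synthesized target-modality images
rec_src = G_tgt2src(fake_tgt)                 # cycle reconstruction back to source
d_out = D_tgt(fake_tgt)
loss = (gan_loss(d_out, torch.ones_like(d_out))   # adversarial: fool D_tgt
        + 10.0 * cyc_loss(rec_src, src)           # cycle consistency
        + seg_loss(S(fake_tgt), src_lbl))         # segmentation on synthetic images
opt_g.zero_grad(); loss.backward(); opt_g.step()
# At test time, only S is needed: it segments real target-modality images
# without ever having seen target-modality manual labels.
```

The design point mirrored here is that the segmentation loss backpropagates through the generator, so image synthesis and segmentation are optimized jointly rather than as a two-stage pipeline; at test time only the segmenter is applied to real target-modality images.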
doi_str_mv 10.1109/TMI.2018.2876633
format Article
publisher United States: IEEE
pmid 30334788
coden ITMID4
fulltext fulltext_linktorsrc
identifier ISSN: 0278-0062
ispartof IEEE transactions on medical imaging, 2019-04, Vol.38 (4), p.1016-1025
issn 0278-0062
1558-254X
language eng
recordid cdi_proquest_journals_2203407077
source IEEE Electronic Library (IEL)
subjects adversarial
Artificial neural networks
Brain
Computed tomography
convolutional
DCNN
GAN
Ground truth
Image generation
Image processing
Image segmentation
Labels
Magnetic resonance imaging
Manuals
Medical imaging
Neural networks
Neuroimaging
Organs
Performance evaluation
segmentation
Source code
Splenomegaly
Synthesis
synthetic segmentation
Target recognition
TICV
Training
title SynSeg-Net: Synthetic Segmentation Without Target Modality Ground Truth