Towards Automated Semantic Segmentation in Prenatal Volumetric Ultrasound
Volumetric ultrasound is rapidly emerging as a viable imaging modality for routine prenatal examinations. Biometrics obtained from volumetric segmentation pave the way for precise maternal and fetal health monitoring. However, poor image quality, low contrast, boundary ambiguity, and complex anatomical shapes have left segmentation without efficient tools.
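The biometrics mentioned above are derived from the volumetric segmentation itself: once a structure (fetus, gestational sac, or placenta) is delineated, its volume follows from the voxel count and the scanner's voxel spacing. A minimal numpy sketch of that step (the function name and the spacing values are illustrative assumptions, not the authors' code):

```python
import numpy as np

def structure_volume_ml(mask, spacing_mm=(0.5, 0.5, 0.5)):
    """Volume of a segmented structure in millilitres.

    mask: boolean 3-D array, True inside the structure.
    spacing_mm: voxel spacing along each axis in millimetres.
    """
    voxel_mm3 = float(np.prod(spacing_mm))   # volume of one voxel in mm^3
    return mask.sum() * voxel_mm3 / 1000.0   # mm^3 -> mL

# A toy mask: a 10x10x10-voxel cube at 0.5 mm isotropic spacing.
mask = np.zeros((64, 64, 64), dtype=bool)
mask[20:30, 20:30, 20:30] = True
print(structure_volume_ml(mask))  # 1000 voxels * 0.125 mm^3 = 0.125 mL
```

The same voxel-counting approach extends to any of the segmented structures, which is why segmentation accuracy translates directly into biometric accuracy.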
Published in: | IEEE transactions on medical imaging 2019-01, Vol.38 (1), p.180-193 |
---|---|
Main authors: | Yang, Xin ; Yu, Lequan ; Li, Shengli ; Wen, Huaxuan ; Luo, Dandan ; Bian, Cheng ; Qin, Jing ; Ni, Dong ; Heng, Pheng-Ann |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Order full text |
container_end_page | 193 |
---|---|
container_issue | 1 |
container_start_page | 180 |
container_title | IEEE transactions on medical imaging |
container_volume | 38 |
creator | Yang, Xin ; Yu, Lequan ; Li, Shengli ; Wen, Huaxuan ; Luo, Dandan ; Bian, Cheng ; Qin, Jing ; Ni, Dong ; Heng, Pheng-Ann |
description | Volumetric ultrasound is rapidly emerging as a viable imaging modality for routine prenatal examinations. Biometrics obtained from volumetric segmentation pave the way for precise maternal and fetal health monitoring. However, poor image quality, low contrast, boundary ambiguity, and complex anatomical shapes conspire to leave segmentation without efficient tools. This makes 3-D ultrasound difficult to interpret and hinders its widespread adoption in obstetrics. In this paper, we address the problem of semantic segmentation in prenatal ultrasound volumes. Our contribution is threefold: 1) we propose the first fully automatic framework to simultaneously segment multiple anatomical structures of intense clinical interest, including the fetus, gestational sac, and placenta, which remains a rarely studied and arduous challenge; 2) we propose a composite architecture for dense labeling, in which a customized 3-D fully convolutional network exploits spatial intensity concurrency for initial labeling, while a multi-directional recurrent neural network (RNN) encodes spatial sequentiality to combat boundary ambiguity and significantly refine the result; and 3) we introduce a hierarchical deep supervision mechanism to boost information flow within the RNN, fit the latent sequence hierarchy at fine scales, and further improve the segmentation results. Extensively verified on large in-house data sets, our method demonstrates superior segmentation performance, good agreement with expert measurements, and high reproducibility across scanning variations, and is thus promising for advancing prenatal ultrasound examinations. |
doi_str_mv | 10.1109/TMI.2018.2858779 |
format | Article |
pmid | 30040635 |
publisher | United States: IEEE |
identifier | ISSN: 0278-0062 |
ispartof | IEEE transactions on medical imaging, 2019-01, Vol.38 (1), p.180-193 |
issn | 0278-0062 1558-254X |
language | eng |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms ; Ambiguity ; Artificial neural networks ; Automation ; Biometrics ; Concurrency ; Female ; Fetus ; Fetus - diagnostic imaging ; Fetuses ; fully convolutional networks ; Humans ; Image contrast ; Image processing ; Image quality ; Image segmentation ; Imaging, Three-Dimensional - methods ; Information flow ; Labeling ; Labelling ; Neural networks ; Neural Networks, Computer ; Obstetrics ; Placenta ; Pregnancy ; Prenatal examination ; Recurrent neural networks ; Semantic segmentation ; Semantics ; Shape ; Three-dimensional displays ; Ultrasonic imaging ; Ultrasonography, Prenatal - methods ; Ultrasound ; volumetric ultrasound |
title | Towards Automated Semantic Segmentation in Prenatal Volumetric Ultrasound |
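The composite architecture described in the abstract pairs a 3-D fully convolutional network with a multi-directional RNN, and the RNN stage hinges on serializing the volume into ordered voxel sequences along several spatial directions. A minimal numpy sketch of that serialization idea (the function name and the choice of six axis-aligned sweep directions are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def directional_sequences(volume):
    """Serialize a 3-D volume into six axis-aligned voxel sequences.

    Each sequence sweeps along one axis in either the forward or the
    reverse direction, flattening the remaining two axes at each step --
    the kind of ordering a multi-directional RNN could consume to
    encode spatial sequentiality around ambiguous boundaries.
    """
    seqs = []
    for axis in range(3):
        # Move the sweep axis to the front, flatten the rest per step.
        swept = np.moveaxis(volume, axis, 0)
        flat = swept.reshape(swept.shape[0], -1)  # (steps, voxels_per_step)
        seqs.append(flat)        # forward sweep
        seqs.append(flat[::-1])  # reverse sweep
    return seqs

vol = np.arange(2 * 3 * 4).reshape(2, 3, 4)
seqs = directional_sequences(vol)
print(len(seqs))      # 6 sweep directions
print(seqs[0].shape)  # (2, 12): sweep along axis 0
```

Combining forward and reverse sweeps along each axis lets a recurrent refinement stage see context from all sides of a voxel, which is the intuition behind using a multi-directional rather than a single-direction RNN.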