Boundary Aware U-Net for Medical Image Segmentation

Bibliographic Details

Published in: Arabian journal for science and engineering (2011), 2023-08, Vol. 48 (8), pp. 9929-9940
Author: Alahmadi, Mohammad D.
Format: Article
Language: English
Publisher: Springer Berlin Heidelberg (Berlin/Heidelberg)
DOI: 10.1007/s13369-022-07431-y
ISSN: 2193-567X, 1319-8025
EISSN: 2191-4281
Source: SpringerLink Journals - AutoHoldings
Online access: Full text

Description:

Automatic medical image segmentation plays an integral role in the health care system, as it facilitates the cancer detection process and provides a basis for analyzing and monitoring cancer progression. Convolutional neural networks have proven to be an effective approach for automating medical image segmentation tasks. These networks apply a series of convolutional layers followed by activation and pooling operations to represent the object of interest in terms of texture and semantic information. Although texture information can reveal disorders in medical images, it pays less attention to the anatomical structure of human tissue and is consequently less precise in the boundary area. To compensate for the boundary representation, we propose to incorporate a Vision Transformer (ViT) module on top of the bottleneck layer. In our design, we model the distribution of the boundary area using the global contextual representation derived from the ViT module. In addition, by fusing the boundary representation generated by the ViT module into each decoding block, we preserve the anatomical structure for boundary-aware segmentation. Through a comprehensive evaluation on several medical image segmentation tasks, we demonstrate the effectiveness of our model. In particular, our method achieved Dice scores of 0.905 on ISIC2017, 0.898 on ISIC2018, 0.944 on PH2, and 0.990 on the lung segmentation task.
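
The abstract describes the architecture only at a high level. The following PyTorch sketch is one plausible reading of that design, not the paper's implementation: the channel widths, the two-layer Transformer encoder standing in for the ViT module, the 1x1 projection of its output, and fusion by concatenation into each decoder block are all assumptions made for this illustration.

    # Minimal sketch of a boundary-aware U-Net: a Transformer encoder acts as a
    # stand-in ViT module on the bottleneck, and its global "boundary" features
    # are resized and concatenated into every decoder stage.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def conv_block(in_ch, out_ch):
        # Two 3x3 convolutions with BatchNorm and ReLU, as in a standard U-Net stage.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    class BoundaryAwareUNet(nn.Module):
        def __init__(self, in_ch=3, n_classes=1, base=32):
            super().__init__()
            self.enc1 = conv_block(in_ch, base)
            self.enc2 = conv_block(base, base * 2)
            self.enc3 = conv_block(base * 2, base * 4)
            self.pool = nn.MaxPool2d(2)
            self.bottleneck = conv_block(base * 4, base * 8)
            # Transformer encoder over bottleneck tokens: global context for boundaries
            # (an assumed stand-in for the ViT module described in the abstract).
            layer = nn.TransformerEncoderLayer(d_model=base * 8, nhead=8, batch_first=True)
            self.vit = nn.TransformerEncoder(layer, num_layers=2)
            # Project the global boundary representation to a small channel budget
            # before fusing it into each decoder stage.
            self.boundary_proj = nn.Conv2d(base * 8, base, 1)
            self.up3 = nn.ConvTranspose2d(base * 8, base * 4, 2, stride=2)
            self.dec3 = conv_block(base * 4 + base * 4 + base, base * 4)  # up + skip + boundary
            self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
            self.dec2 = conv_block(base * 2 + base * 2 + base, base * 2)
            self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
            self.dec1 = conv_block(base + base + base, base)
            self.head = nn.Conv2d(base, n_classes, 1)

        def forward(self, x):
            s1 = self.enc1(x)
            s2 = self.enc2(self.pool(s1))
            s3 = self.enc3(self.pool(s2))
            b = self.bottleneck(self.pool(s3))
            # ViT-style global attention on the bottleneck: (B, C, H, W) -> tokens.
            B, C, H, W = b.shape
            tokens = b.flatten(2).transpose(1, 2)                      # (B, H*W, C)
            boundary = self.vit(tokens).transpose(1, 2).reshape(B, C, H, W)
            boundary = self.boundary_proj(boundary)                    # (B, base, H, W)

            def fuse(up, skip):
                # Resize the boundary map to the decoder resolution and concatenate.
                bmap = F.interpolate(boundary, size=skip.shape[-2:],
                                     mode="bilinear", align_corners=False)
                return torch.cat([up, skip, bmap], dim=1)

            d3 = self.dec3(fuse(self.up3(b), s3))
            d2 = self.dec2(fuse(self.up2(d3), s2))
            d1 = self.dec1(fuse(self.up1(d2), s1))
            return self.head(d1)

    # Example: a 256x256 RGB dermoscopy image -> a one-channel segmentation logit map.
    if __name__ == "__main__":
        model = BoundaryAwareUNet()
        print(model(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 1, 256, 256])

Instantiating the model on a 256x256 input confirms the shapes line up; in practice the whole network would be trained end to end with a segmentation loss, and any pretrained ViT variant could replace the plain Transformer encoder used here.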

Subjects:
Artificial neural networks
Boundary representation
Cancer
Engineering
Human tissues
Humanities and Social Sciences
Image segmentation
Medical imaging
Modules
Multidisciplinary
Research Article--Computer Engineering and Computer Science
Science
Texture