Redefining Contextual and Boundary Synergy: A Boundary-Guided Fusion Network for Medical Image Segmentation
Medical image segmentation plays a crucial role in medical image processing, focusing on the automated extraction of regions of interest (such as organs, lesions, etc.) from medical images. This process supports various clinical applications, including diagnosis, surgical planning, and treatment. In...
Saved in:
Published in: | Electronics (Basel) 2024-12, Vol.13 (24), p.4986 |
---|---|
Main authors: | Chen, Yu; Wu, Yun; Wu, Jiahua; Zhang, Xinxin; Wang, Dahan; Zhu, Shunzhi |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
creator | Chen, Yu; Wu, Yun; Wu, Jiahua; Zhang, Xinxin; Wang, Dahan; Zhu, Shunzhi |
description | Medical image segmentation plays a crucial role in medical image processing, focusing on the automated extraction of regions of interest (such as organs and lesions) from medical images. This process supports various clinical applications, including diagnosis, surgical planning, and treatment. In this paper, we introduce the Boundary-guided Context Fusion U-Net (BCF-UNet), a novel approach designed to tackle a critical shortcoming of current methods: the inability to effectively integrate boundary information with semantic context. BCF-UNet introduces an Adaptive Multi-Frequency Encoder (AMFE), which uses multi-frequency analysis inspired by the Wavelet Transform (WT) to capture both local and global features efficiently. The AMFE decomposes images into different frequency components and adapts to boundary texture information through a learnable activation function. Additionally, we introduce a new multi-scale feature fusion module, the Atten-kernel Adaptive Fusion Module (AKAFM), designed to integrate deep semantic information with shallow texture details, bridging the gap between features at different scales. Furthermore, each layer of the encoder sub-network integrates a Boundary-aware Pyramid Module (BAPM), which combines a simple, effective extraction method with a priori knowledge to obtain multi-scale edge features and improve the accuracy of boundary segmentation. In BCF-UNet, semantic context guides edge information extraction, enabling the model to more effectively comprehend and identify relationships among various anatomical structures. Comprehensive experimental evaluations on two datasets demonstrate that the proposed BCF-UNet outperforms existing state-of-the-art methods. |
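The abstract's AMFE builds on wavelet-style multi-frequency decomposition: a low-frequency band carries coarse semantic context while high-frequency bands carry the edge/boundary texture. As a rough illustration only (not the paper's implementation; the function name, the averaging normalization, and the omission of the learnable activation are my own choices), a single level of 2D Haar decomposition can be sketched as:

```python
import numpy as np

def haar_decompose(img):
    """One level of 2D Haar wavelet decomposition of a grayscale image.

    Returns a low-frequency approximation band (LL) and three
    high-frequency detail bands (LH, HL, HH); the detail bands are
    the kind of boundary-texture signal a multi-frequency encoder
    could feed to later stages. Illustrative sketch: uses simple
    averaging normalization, no learnable components.
    """
    # Split the image into the four pixels of each 2x2 block.
    a = img[0::2, 0::2]  # top-left pixel of each block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 4.0  # block average: low-frequency context
    lh = (a - b + c - d) / 4.0  # horizontal detail (vertical edges)
    hl = (a + b - c - d) / 4.0  # vertical detail (horizontal edges)
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh

# A 4x4 ramp image: smooth gradient, so detail bands are constant.
img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_decompose(img)
```

In a real encoder this split would typically be applied per channel and per level; libraries such as PyWavelets provide the standard-normalized version.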
doi_str_mv | 10.3390/electronics13244986 |
publisher | Basel: MDPI AG |
rights | © 2024 by the authors. Licensee MDPI, Basel, Switzerland. Open access under the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
orcidid | https://orcid.org/0000-0002-5901-0778 |
identifier | ISSN: 2079-9292 |
issn | 2079-9292 (print); 2079-9292 (electronic) |
recordid | cdi_proquest_journals_3149599753 |
source | MDPI - Multidisciplinary Digital Publishing Institute; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals |
subjects | Accuracy; Coders; Computer vision; Context; Decomposition; Deep learning; Feature extraction; Image processing; Image segmentation; Information retrieval; Liu, Timothy; Medical imaging; Medical imaging equipment; Modules; Neural networks; Organizational structure; Semantics; Texture; Wavelet analysis; Wavelet transforms |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-02T20%3A40%3A02IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_proqu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Redefining%20Contextual%20and%20Boundary%20Synergy:%20A%20Boundary-Guided%20Fusion%20Network%20for%20Medical%20Image%20Segmentation&rft.jtitle=Electronics%20(Basel)&rft.au=Chen,%20Yu&rft.date=2024-12-01&rft.volume=13&rft.issue=24&rft.spage=4986&rft.pages=4986-&rft.issn=2079-9292&rft.eissn=2079-9292&rft_id=info:doi/10.3390/electronics13244986&rft_dat=%3Cgale_proqu%3EA821763353%3C/gale_proqu%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3149599753&rft_id=info:pmid/&rft_galeid=A821763353&rfr_iscdi=true |