Class key feature extraction and fusion for 2D medical image segmentation
Published in: | Medical physics (Lancaster) 2024-02, Vol.51 (2), p.1263-1276 |
---|---|
Main authors: | Zhang, Dezhi; Fan, Xin; Kang, Xiaojing; Tian, Shengwei; Xiao, Guangli; Yu, Long; Wu, Weidong |
Format: | Article |
Language: | eng |
Subjects: | feature extraction; fusion; medical images; ranking channels; semantic segmentation |
Online access: | Full text |
creator | Zhang, Dezhi; Fan, Xin; Kang, Xiaojing; Tian, Shengwei; Xiao, Guangli; Yu, Long; Wu, Weidong |
description | Background
The size variation, complex semantic environment, and high similarity among regions in medical images often prevent deep learning models from achieving good performance.
Purpose
To overcome these problems and improve the segmentation performance and generalizability of the model.
Methods
We propose the key class feature reconstruction module (KCRM), which ranks channel weights and selects the key features (KFs) that contribute most to the segmentation result for each class. KCRM also reconstructs all local features to establish a dependence relationship from local features to the KFs. In addition, we propose the spatial gating module (SGM), which uses the KFs to generate two spatial maps that suppress irrelevant regions, strengthening the model's ability to locate semantic objects. Finally, we enable the model to adapt to size variations by diversifying its receptive fields.
Results
We integrate these modules into the class key feature extraction and fusion network (CKFFNet) and validate it on three public medical datasets: CHAOS, UW‐Madison, and ISIC2017. The experimental results show that our method achieves better segmentation accuracy and generalizability than mainstream methods.
Conclusion
Through quantitative and qualitative experiments, the proposed modules improve segmentation results and enhance model generalizability, making the approach suitable for practical application and extension. |
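The channel-ranking and spatial-gating ideas described in the Methods can be sketched in a few lines. This is a minimal NumPy illustration of the general technique, not the paper's KCRM/SGM implementation: the function names, the global-average-pooled channel weights, and the sigmoid gate derived from the mean key-feature map are all assumptions made for the sketch.

```python
import numpy as np

def select_key_features(feats, k):
    """Rank channels by a global-average-pooled weight and keep the top-k.

    feats: (C, H, W) feature map; k: number of key channels to keep.
    Returns the (k, H, W) key-feature stack and the chosen channel indices.
    """
    weights = feats.mean(axis=(1, 2))        # one scalar weight per channel
    top = np.argsort(weights)[::-1][:k]      # indices of the k largest weights
    return feats[top], top

def spatial_gate(key_feats):
    """Build a sigmoid spatial map from the key features and gate them with it,
    damping low-response (irrelevant) regions."""
    gate = 1.0 / (1.0 + np.exp(-key_feats.mean(axis=0)))  # (H, W) map in (0, 1)
    return key_feats * gate                   # broadcast the map over channels

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))        # toy 8-channel feature map
kf, idx = select_key_features(feats, k=3)
gated = spatial_gate(kf)
print(kf.shape, gated.shape)                  # (3, 4, 4) (3, 4, 4)
```

A learned version would replace the average-pooled weights with trained per-class channel scores and the hand-rolled sigmoid with a small convolutional gating branch, but the select-then-gate flow is the same.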
doi_str_mv | 10.1002/mp.16636 |
format | Article |
identifier | ISSN: 0094-2405 |
issn | 0094-2405 2473-4209 |
source | Access via Wiley Online Library |
subjects | feature extraction; fusion; medical images; ranking channels; semantic segmentation |
title | Class key feature extraction and fusion for 2D medical image segmentation |