CNN-Transformer Rectified Collaborative Learning for Medical Image Segmentation
Published in: | IEEE transactions on circuits and systems for video technology 2024-12, p.1-1 |
---|---|
Main authors: | Wu, Lanhu ; Zhang, Miao ; Piao, Yongri ; Yao, Zhenyan ; Sun, Weibing ; Tian, Feng ; Lu, Huchuan |
Format: | Article |
Language: | eng |
container_end_page | 1 |
---|---|
container_issue | |
container_start_page | 1 |
container_title | IEEE transactions on circuits and systems for video technology |
container_volume | |
creator | Wu, Lanhu ; Zhang, Miao ; Piao, Yongri ; Yao, Zhenyan ; Sun, Weibing ; Tian, Feng ; Lu, Huchuan |
description | Automatic and precise medical image segmentation (MIS) is of vital importance for clinical diagnosis and analysis. Current MIS methods mainly rely on the convolutional neural network (CNN) or the self-attention mechanism (Transformer) for feature modeling. However, CNN-based methods suffer from inaccurate localization owing to their limited global dependency, while Transformer-based methods often produce coarse boundaries owing to their lack of local emphasis. Although some CNN-Transformer hybrid methods are designed to synthesize the complementary local and global information for better performance, combining CNN and Transformer introduces numerous parameters and increases the computation cost. To this end, this paper proposes a CNN-Transformer rectified collaborative learning (CTRCL) framework that learns stronger CNN-based and Transformer-based models for MIS tasks via bi-directional knowledge transfer between them. Specifically, we propose a rectified logit-wise collaborative learning (RLCL) strategy that introduces the ground truth to adaptively select and rectify the wrong regions in student soft labels for accurate knowledge transfer in the logit space. We also propose a class-aware feature-wise collaborative learning (CFCL) strategy to achieve effective knowledge transfer between CNN-based and Transformer-based models in the feature space by granting their intermediate features a similar capability of category perception. Extensive experiments on three popular MIS benchmarks demonstrate that our CTRCL outperforms most state-of-the-art collaborative learning methods under different evaluation metrics. |
doi_str_mv | 10.1109/TCSVT.2024.3523316 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1051-8215 |
ispartof | IEEE transactions on circuits and systems for video technology, 2024-12, p.1-1 |
issn | 1051-8215 ; 1558-2205 |
language | eng |
recordid | cdi_ieee_primary_10816601 |
source | IEEE Electronic Library (IEL) |
subjects | Accuracy ; CNN ; collaborative learning ; Convolutional neural networks ; Decoding ; Feature extraction ; Federated learning ; Image segmentation ; Knowledge transfer ; Location awareness ; Medical image segmentation ; Semantics ; Transformer ; Transformers |
title | CNN-Transformer Rectified Collaborative Learning for Medical Image Segmentation |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-15T00%3A16%3A56IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-crossref_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=CNN-Transformer%20Rectified%20Collaborative%20Learning%20for%20Medical%20Image%20Segmentation&rft.jtitle=IEEE%20transactions%20on%20circuits%20and%20systems%20for%20video%20technology&rft.au=Wu,%20Lanhu&rft.date=2024-12-26&rft.spage=1&rft.epage=1&rft.pages=1-1&rft.issn=1051-8215&rft.eissn=1558-2205&rft.coden=ITCTEM&rft_id=info:doi/10.1109/TCSVT.2024.3523316&rft_dat=%3Ccrossref_RIE%3E10_1109_TCSVT_2024_3523316%3C/crossref_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=10816601&rfr_iscdi=true |
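The rectified logit-wise collaborative learning (RLCL) strategy described in the abstract can be illustrated with a minimal, per-pixel sketch. This is an assumption-laden toy version, not the paper's implementation: the function names (`rectify_soft_label`, `kl_div`), the swap-based rectification heuristic, and the plain KL transfer loss are illustrative stand-ins for the adaptive selection and rectification of wrong regions that the abstract describes.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def rectify_soft_label(peer_logits, gt_class, temperature=1.0):
    """Toy 'rectify' step (hypothetical): if the peer student's soft label
    puts its highest probability on the wrong class, swap the probability
    mass of the predicted class and the ground-truth class, so the soft
    label becomes correct while retaining its overall shape
    ('dark knowledge' over the remaining classes)."""
    probs = softmax([z / temperature for z in peer_logits])
    pred = max(range(len(probs)), key=probs.__getitem__)
    if pred != gt_class:
        probs = probs[:]  # copy before modifying
        probs[pred], probs[gt_class] = probs[gt_class], probs[pred]
    return probs

def kl_div(p, q, eps=1e-12):
    """KL(p || q): the logit-space transfer loss each student would
    minimize toward the peer's rectified soft label."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
```

In the full framework this would run per pixel over the segmentation maps, in both directions (CNN learns from the rectified Transformer soft labels and vice versa); the abstract's CFCL strategy adds an analogous class-aware alignment in the feature space, which this sketch does not cover.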