MatchSeg: Towards Better Segmentation via Reference Image Matching

Recently, automated medical image segmentation methods based on deep learning have achieved great success. However, they heavily rely on large annotated datasets, which are costly and time-consuming to acquire. Few-shot learning aims to overcome the need for annotated data by using a small labeled dataset, known as a support set, to guide predicting labels for new, unlabeled images, known as the query set. Inspired by this paradigm, we introduce MatchSeg, a novel framework that enhances medical image segmentation through strategic reference image matching. We leverage contrastive language-image pre-training (CLIP) to select highly relevant samples when defining the support set. Additionally, we design a joint attention module to strengthen the interaction between support and query features, facilitating a more effective knowledge transfer between support and query sets. We validated our method across four public datasets. Experimental results demonstrate superior segmentation performance and powerful domain generalization ability of MatchSeg against existing methods for domain-specific and cross-domain segmentation tasks. Our code is made available at https://github.com/keeplearning-again/MatchSeg
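
The abstract states that CLIP embeddings are used to pick the most relevant labeled reference images for a given query. As a rough, illustrative sketch only (not the released MatchSeg code), the snippet below shows one way such reference matching could work: embed the query and the candidate labeled images with a pretrained CLIP image encoder and keep the top-k most similar candidates as the support set. The model checkpoint, the cosine-similarity/top-k criterion, and the helper names are assumptions made for this example.

```python
# Hypothetical illustration of CLIP-based support-set selection; not the
# authors' implementation (see https://github.com/keeplearning-again/MatchSeg).
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_NAME = "openai/clip-vit-base-patch32"  # assumed checkpoint
model = CLIPModel.from_pretrained(MODEL_NAME).eval()
processor = CLIPProcessor.from_pretrained(MODEL_NAME)

@torch.no_grad()
def clip_image_embeddings(paths):
    """Return L2-normalized CLIP image embeddings for a list of image paths."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    feats = model.get_image_features(**inputs)
    return F.normalize(feats, dim=-1)

@torch.no_grad()
def select_support_set(query_path, candidate_paths, k=3):
    """Rank labeled candidates by cosine similarity to the query; keep top-k."""
    q = clip_image_embeddings([query_path])      # (1, d)
    c = clip_image_embeddings(candidate_paths)   # (n, d)
    sims = (c @ q.T).squeeze(-1)                 # cosine similarity (embeddings are normalized)
    top = sims.topk(min(k, len(candidate_paths))).indices.tolist()
    return [candidate_paths[i] for i in top]

# Example usage (paths are placeholders):
# support = select_support_set("query.png", ["ref1.png", "ref2.png", "ref3.png"], k=2)
```

The selected support images and their masks would then condition segmentation of the query; the joint attention module mentioned in the abstract operates on support and query features inside the segmentation network and is beyond the scope of this sketch.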

Bibliographic Details
Published in: arXiv.org 2024-08
Main authors: Huo, Jiayu, Xiao, Ruiqiang, Zheng, Haotian, Liu, Yang, Ourselin, Sebastien, Sparks, Rachel
Format: Article
Language: eng
Subjects: Datasets, Deep learning, Image segmentation, Knowledge management, Matching, Medical imaging, Queries
Online access: Full text
creator Huo, Jiayu
Xiao, Ruiqiang
Zheng, Haotian
Liu, Yang
Ourselin, Sebastien
Sparks, Rachel
format Article
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-08
issn 2331-8422
language eng
recordid cdi_proquest_journals_2986604300
source Free E-Journals
subjects Datasets
Deep learning
Image segmentation
Knowledge management
Matching
Medical imaging
Queries
title MatchSeg: Towards Better Segmentation via Reference Image Matching