Robust Saliency-Aware Distillation for Few-Shot Fine-Grained Visual Recognition

Recognizing novel sub-categories with scarce samples is an essential and challenging research topic in computer vision. Existing literature addresses this challenge by employing local-based representation approaches, which may not sufficiently facilitate meaningful object-specific semantic understanding, leading to a reliance on apparent background correlations. Moreover, they primarily rely on high-dimensional local descriptors to construct complex embedding space, potentially limiting the generalization. To address the above challenges, this article proposes a novel model, Robust Saliency-aware Distillation (RSaD), for few-shot fine-grained visual recognition. RSaD introduces additional saliency-aware supervision via saliency detection to guide the model toward focusing on the intrinsic discriminative regions. Specifically, RSaD utilizes the saliency detection model to emphasize the critical regions of each sub-category, providing additional object-specific information for fine-grained prediction. RSaD transfers such information with two symmetric branches in a mutual learning paradigm. Furthermore, RSaD exploits inter-regional relationships to enhance the informativeness of the representation and subsequently summarize the highlighted details into contextual embeddings to facilitate the effective transfer, enabling quick generalization to novel sub-categories. The proposed approach is empirically evaluated on three widely used benchmarks, demonstrating its superior performance.
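The two mechanisms named in the abstract — weighting encoder features by a saliency map so the model attends to intrinsic object regions, and coupling two symmetric branches with a mutual learning loss — can be illustrated with a minimal PyTorch-style sketch. Everything here (the function names, the temperature T, the bilinear resizing of the saliency map) is a hypothetical reconstruction from the abstract, not the authors' released implementation.

```python
# Hypothetical sketch of saliency-aware mutual distillation; not the authors' code.
import torch
import torch.nn.functional as F

def mutual_kl_loss(logits_a, logits_b, T=4.0):
    """Symmetric KL between two branches' softened predictions (deep mutual learning)."""
    log_p_a = F.log_softmax(logits_a / T, dim=1)
    log_p_b = F.log_softmax(logits_b / T, dim=1)
    kl_b_a = F.kl_div(log_p_a, log_p_b.exp(), reduction="batchmean")  # KL(p_b || p_a)
    kl_a_b = F.kl_div(log_p_b, log_p_a.exp(), reduction="batchmean")  # KL(p_a || p_b)
    # T^2 scaling keeps gradient magnitudes comparable, as in standard distillation.
    return (kl_b_a + kl_a_b) * T * T

def saliency_masked_forward(encoder, images, saliency_maps):
    """Suppress background by weighting feature maps with a precomputed saliency map."""
    feats = encoder(images)                              # (B, C, H, W) feature maps
    sal = F.interpolate(saliency_maps, size=feats.shape[-2:],
                        mode="bilinear", align_corners=False)
    return feats * sal                                   # emphasize object regions
```

In this reading, one branch sees the raw image and the other the saliency-masked view, and the mutual loss couples their predictions so that object-specific evidence is distilled into the branch used at test time; whether RSaD uses exactly this loss or masking scheme is not specified in the record.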

Detailed Description

Saved in:
Bibliographic Details
Published in: IEEE transactions on multimedia 2024, Vol.26, p.7529-7542
Main authors: Liu, Haiqi, Chen, C. L. Philip, Gong, Xinrong, Zhang, Tong
Format: Article
Language: English
Subjects:
Online access: Order full text
container_end_page 7542
container_issue
container_start_page 7529
container_title IEEE transactions on multimedia
container_volume 26
creator Liu, Haiqi
Chen, C. L. Philip
Gong, Xinrong
Zhang, Tong
description Recognizing novel sub-categories with scarce samples is an essential and challenging research topic in computer vision. Existing literature addresses this challenge by employing local-based representation approaches, which may not sufficiently facilitate meaningful object-specific semantic understanding, leading to a reliance on apparent background correlations. Moreover, they primarily rely on high-dimensional local descriptors to construct complex embedding space, potentially limiting the generalization. To address the above challenges, this article proposes a novel model, Robust Saliency-aware Distillation (RSaD), for few-shot fine-grained visual recognition. RSaD introduces additional saliency-aware supervision via saliency detection to guide the model toward focusing on the intrinsic discriminative regions. Specifically, RSaD utilizes the saliency detection model to emphasize the critical regions of each sub-category, providing additional object-specific information for fine-grained prediction. RSaD transfers such information with two symmetric branches in a mutual learning paradigm. Furthermore, RSaD exploits inter-regional relationships to enhance the informativeness of the representation and subsequently summarize the highlighted details into contextual embeddings to facilitate the effective transfer, enabling quick generalization to novel sub-categories. The proposed approach is empirically evaluated on three widely used benchmarks, demonstrating its superior performance.
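The description field above also outlines how RSaD generalizes within an episode: local regions are related to one another, summarized into a contextual embedding, and novel sub-categories are then recognized from a few labeled shots. A speculative sketch of that pipeline follows, using generic few-shot components (scaled dot-product attention over regions and a nearest-prototype classifier) that stand in for details the record does not give.

```python
# Speculative sketch: contextual embedding via inter-regional attention,
# then prototype-based N-way K-shot classification. Generic components only;
# not RSaD's published design.
import torch

def contextual_embedding(region_feats):
    """region_feats: (B, R, D) local descriptors for R regions.
    Relate regions with scaled dot-product attention, then mean-pool."""
    d = region_feats.size(-1)
    scores = region_feats @ region_feats.transpose(1, 2) / d ** 0.5
    attn = torch.softmax(scores, dim=-1)
    related = attn @ region_feats        # each region mixed with its related regions
    return related.mean(dim=1)           # (B, D) contextual embedding

def episode_logits(support, support_labels, query, n_way):
    """Nearest-prototype classification: negative distances as logits.
    support: (N*K, D), query: (Q, D), labels in [0, n_way)."""
    protos = torch.stack([support[support_labels == c].mean(0) for c in range(n_way)])
    return -torch.cdist(query, protos)   # (Q, n_way)
```

Pooling relations into a single embedding, rather than matching high-dimensional local descriptors directly, matches the abstract's stated aim of a simpler embedding space that transfers quickly to novel sub-categories.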
doi_str_mv 10.1109/TMM.2024.3369870
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 1520-9210
ispartof IEEE transactions on multimedia, 2024, Vol.26, p.7529-7542
issn 1520-9210
eissn 1941-0077
language eng
recordid cdi_ieee_primary_10445009
source IEEE Electronic Library (IEL)
subjects Computational modeling
Computer vision
Distillation
Few-shot fine-grained visual recognition
few-shot learning
mutual learning
Probability distribution
Recognition
Representations
Robustness
Salience
Saliency detection
Semantics
Task analysis
Training
Visualization
title Robust Saliency-Aware Distillation for Few-Shot Fine-Grained Visual Recognition
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-11T05%3A18%3A35IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Robust%20Saliency-Aware%20Distillation%20for%20Few-Shot%20Fine-Grained%20Visual%20Recognition&rft.jtitle=IEEE%20transactions%20on%20multimedia&rft.au=Liu,%20Haiqi&rft.date=2024&rft.volume=26&rft.spage=7529&rft.epage=7542&rft.pages=7529-7542&rft.issn=1520-9210&rft.eissn=1941-0077&rft.coden=ITMUF8&rft_id=info:doi/10.1109/TMM.2024.3369870&rft_dat=%3Cproquest_RIE%3E3044650196%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3044650196&rft_id=info:pmid/&rft_ieee_id=10445009&rfr_iscdi=true