Boosting Few-shot Fine-grained Recognition with Background Suppression and Foreground Alignment
Saved in:
Published in: | IEEE transactions on circuits and systems for video technology 2023-08, Vol.33 (8), p.1-1 |
---|---|
Main Authors: | Zha, Zican ; Tang, Hao ; Sun, Yunlian ; Tang, Jinhui |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
container_end_page | 1 |
---|---|
container_issue | 8 |
container_start_page | 1 |
container_title | IEEE transactions on circuits and systems for video technology |
container_volume | 33 |
creator | Zha, Zican ; Tang, Hao ; Sun, Yunlian ; Tang, Jinhui |
description | Few-shot fine-grained recognition (FS-FGR) aims to recognize novel fine-grained categories with the help of limited available samples. Undoubtedly, this task inherits the main challenges from both few-shot learning and fine-grained recognition. First, the lack of labeled samples makes the learned model prone to overfitting. Second, it also suffers from high intra-class variance and low inter-class differences in the datasets. To address this challenging task, we propose a two-stage background suppression and foreground alignment framework, which is composed of a background activation suppression (BAS) module, a foreground object alignment (FOA) module, and a local-to-local (L2L) similarity metric. Specifically, the BAS is introduced to generate a foreground mask for localization, weakening background disturbance and enhancing the dominant foreground objects. The FOA then reconstructs the feature map of each support sample according to its correlation to the query ones, which addresses the problem of misalignment between support-query image pairs. To enable the proposed method to capture subtle differences in easily confused samples, we present a novel L2L similarity metric that further measures the local similarity between a pair of aligned spatial features in the embedding space. Moreover, since background interference harms robustness, we infer the pairwise similarity of feature maps using both the raw image and the refined image. Extensive experiments conducted on multiple popular fine-grained benchmarks demonstrate that our method outperforms the existing state of the art by a large margin. The source code is available at: https://github.com/CSer-Tang-hao/BSFA-FSFG. |
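The abstract outlines a two-step comparison: suppress low-activation (background) locations with a mask, then score a support-query pair by a local-to-local similarity over the remaining spatial features. The sketch below is only a rough illustration of that idea, not the authors' released implementation (see their GitHub repository for that); the channel-mean activation heuristic, the quantile threshold, the cosine measure, and all function names are assumptions made here for clarity.

```python
import numpy as np

def foreground_mask(feat, quantile=0.5):
    """Crude background suppression in the spirit of a BAS module:
    average the channel activations and keep spatial locations whose
    activation is above a quantile. feat: (C, H, W) -> mask: (H, W)."""
    act = feat.mean(axis=0)                       # (H, W) activation map
    thresh = np.quantile(act, quantile)
    return (act >= thresh).astype(feat.dtype)     # binary 0/1 mask

def l2l_similarity(query_feat, support_feat):
    """Local-to-local similarity: for each query location, take the best
    cosine similarity to any support location, then average.
    Both inputs have shape (C, H, W); returns a scalar."""
    C = query_feat.shape[0]
    q = query_feat.reshape(C, -1).T               # (HW, C) local descriptors
    s = support_feat.reshape(C, -1).T             # (HW, C)
    # L2-normalise each descriptor so dot products are cosine similarities
    q = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    s = s / (np.linalg.norm(s, axis=1, keepdims=True) + 1e-8)
    return float((q @ s.T).max(axis=1).mean())

def masked_l2l(query_feat, support_feat):
    """Zero out background locations in both maps, then compare locally."""
    return l2l_similarity(query_feat * foreground_mask(query_feat),
                          support_feat * foreground_mask(support_feat))
```

In this toy version, masked-out locations become zero vectors and contribute zero similarity; the paper additionally aligns the support features to the query before comparison (the FOA step), which this sketch omits.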
doi_str_mv | 10.1109/TCSVT.2023.3236636 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1051-8215 |
ispartof | IEEE transactions on circuits and systems for video technology, 2023-08, Vol.33 (8), p.1-1 |
issn | 1051-8215 ; 1558-2205 |
language | eng |
recordid | cdi_crossref_primary_10_1109_TCSVT_2023_3236636 |
source | IEEE Xplore |
subjects | Alignment ; Annotations ; Background suppression ; Birds ; Feature extraction ; Feature maps ; Few-shot learning ; Fine-grained recognition ; Foreground alignment ; Measurement ; Misalignment ; Modules ; Recognition ; Similarity ; Sun ; Task analysis ; Training |
title | Boosting Few-shot Fine-grained Recognition with Background Suppression and Foreground Alignment |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-14T14%3A48%3A06IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Boosting%20Few-shot%20Fine-grained%20Recognition%20with%20Background%20Suppression%20and%20Foreground%20Alignment&rft.jtitle=IEEE%20transactions%20on%20circuits%20and%20systems%20for%20video%20technology&rft.au=Zha,%20Zican&rft.date=2023-08-01&rft.volume=33&rft.issue=8&rft.spage=1&rft.epage=1&rft.pages=1-1&rft.issn=1051-8215&rft.eissn=1558-2205&rft.coden=ITCTEM&rft_id=info:doi/10.1109/TCSVT.2023.3236636&rft_dat=%3Cproquest_RIE%3E2845761464%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2845761464&rft_id=info:pmid/&rft_ieee_id=10018260&rfr_iscdi=true |