Counterfactual Generation Framework for Few-Shot Learning
Few-shot learning (FSL), which aims to recognize novel classes from only a few labeled samples, is hampered by data scarcity. Although recent works tackle FSL with data-augmentation-based methods, these models fail to maintain the discrimination and diversity of the generated samples because of the distribution shift and intra-class bias caused by the data scarcity, which greatly undermines performance. To this end, we use causal mechanisms, which remain constant among independent variables across data distributions, to alleviate such effects. In this sense, we decompose the image information into two independent components, sample-specific and class-agnostic information, and further propose a novel Counterfactual Generation Framework (CGF) that learns the underlying causal mechanisms to synthesize faithful samples for FSL. Specifically, based on counterfactual inference, we design a class-agnostic feature extractor to capture the sample-specific information, together with a counterfactual generation network that simulates the data generation process from a causal perspective. Moreover, to leverage the power of CGF in counterfactual inference, we further develop a novel classifier that classifies samples based on the distributions of their counterfactual generations. Extensive experiments demonstrate the effectiveness of CGF on four FSL benchmarks, e.g., 80.12%/86.13% accuracy on 5-way 1-shot/5-shot miniImageNet FSL tasks, significantly improving performance. Our code and models are available at https://github.com/eric-hang/CGF.
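The official implementation is the repository linked above. As a rough, hypothetical sketch of the idea the abstract describes (not the authors' code), the fragment below combines a sample-specific feature with a class prototype to generate a counterfactual feature for that class, then scores the query by how well each counterfactual matches its prototype. All module names, shapes, and the distance-based scoring rule are illustrative assumptions.

```python
# Minimal sketch, NOT the authors' CGF implementation
# (see https://github.com/eric-hang/CGF for the official code).
# Names, dimensions, and the scoring rule are assumptions for illustration.
import torch
import torch.nn as nn


class CounterfactualGenerator(nn.Module):
    """Maps (sample-specific feature, class prototype) -> counterfactual feature."""

    def __init__(self, feat_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, feat_dim),
        )

    def forward(self, sample_specific: torch.Tensor, prototype: torch.Tensor) -> torch.Tensor:
        # Concatenate the two independent components and synthesize a feature
        # "as if" this sample belonged to the prototype's class.
        return self.net(torch.cat([sample_specific, prototype], dim=-1))


def classify_by_counterfactuals(query_specific: torch.Tensor,
                                prototypes: torch.Tensor,
                                generator: CounterfactualGenerator) -> torch.Tensor:
    """Score each query against each class via its counterfactual generation.

    query_specific: (Q, D) sample-specific features of the query images.
    prototypes:     (N, D) class prototypes computed from the support set.
    Returns:        (Q, N) class probabilities.
    """
    Q, D = query_specific.shape
    N = prototypes.shape[0]
    q = query_specific.unsqueeze(1).expand(Q, N, D)   # (Q, N, D)
    p = prototypes.unsqueeze(0).expand(Q, N, D)       # (Q, N, D)
    counterfactuals = generator(q, p)                  # (Q, N, D)
    # Simple proxy for classifying by the distribution of counterfactual
    # generations: the closer a counterfactual stays to its class prototype,
    # the more plausible that class assignment.
    logits = -((counterfactuals - p) ** 2).sum(dim=-1)
    return logits.softmax(dim=-1)
```

In a 5-way task, for example, `prototypes` would be the mean support-set embeddings of the five classes, and the argmax over the returned probabilities gives the predicted class.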
| Published in: | IEEE Transactions on Circuits and Systems for Video Technology, 2023-08, Vol. 33 (8), p. 1-1 |
|---|---|
| Main authors: | Dang, Zhuohang; Luo, Minnan; Jia, Chengyou; Yan, Caixia; Chang, Xiaojun; Zheng, Qinghua |
| Format: | Article |
| Language: | English |
| Subjects: | Counterfactual inference; data augmentation; data mining; data models; feature extraction; few-shot learning; generators; independent variables; inference; learning; prototype learning; prototypes; semantics; task analysis |
| DOI: | 10.1109/TCSVT.2023.3241651 |
| ISSN: | 1051-8215 (print); 1558-2205 (electronic) |
| Publisher: | IEEE, New York |
| Online access: | Order full text |