Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning

Modern deep learning requires large-scale extensively labelled datasets for training. Few-shot learning aims to alleviate this issue by learning effectively from few labelled examples. In previously proposed few-shot visual classifiers, it is assumed that the feature manifold, where classifier decisions are made, has uncorrelated feature dimensions and uniform feature variance. In this work, we focus on addressing the limitations arising from this assumption by proposing a variance-sensitive class of models that operates in a low-label regime. The first method, Simple CNAPS, employs a hierarchically regularized Mahalanobis-distance based classifier combined with a state of the art neural adaptive feature extractor to achieve strong performance on Meta-Dataset, mini-ImageNet and tiered-ImageNet benchmarks. We further extend this approach to a transductive learning setting, proposing Transductive CNAPS. This transductive method combines a soft k-means parameter refinement procedure with a two-step task encoder to achieve improved test-time classification accuracy using unlabelled data. Transductive CNAPS achieves state of the art performance on Meta-Dataset. Finally, we explore the use of our methods (Simple and Transductive) for "out of the box" continual and active learning. Extensive experiments on large scale benchmarks illustrate robustness and versatility of this, relatively speaking, simple class of models. All trained model checkpoints and corresponding source codes have been made publicly available.

Detailed Description

Bibliographic Details
Main Authors: Bateni, Peyman, Barber, Jarred, Goyal, Raghav, Masrani, Vaden, van de Meent, Jan-Willem, Sigal, Leonid, Wood, Frank
Format: Article
Language: eng
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Order full text
creator Bateni, Peyman
Barber, Jarred
Goyal, Raghav
Masrani, Vaden
van de Meent, Jan-Willem
Sigal, Leonid
Wood, Frank
description Modern deep learning requires large-scale extensively labelled datasets for training. Few-shot learning aims to alleviate this issue by learning effectively from few labelled examples. In previously proposed few-shot visual classifiers, it is assumed that the feature manifold, where classifier decisions are made, has uncorrelated feature dimensions and uniform feature variance. In this work, we focus on addressing the limitations arising from this assumption by proposing a variance-sensitive class of models that operates in a low-label regime. The first method, Simple CNAPS, employs a hierarchically regularized Mahalanobis-distance based classifier combined with a state of the art neural adaptive feature extractor to achieve strong performance on Meta-Dataset, mini-ImageNet and tiered-ImageNet benchmarks. We further extend this approach to a transductive learning setting, proposing Transductive CNAPS. This transductive method combines a soft k-means parameter refinement procedure with a two-step task encoder to achieve improved test-time classification accuracy using unlabelled data. Transductive CNAPS achieves state of the art performance on Meta-Dataset. Finally, we explore the use of our methods (Simple and Transductive) for "out of the box" continual and active learning. Extensive experiments on large scale benchmarks illustrate robustness and versatility of this, relatively speaking, simple class of models. All trained model checkpoints and corresponding source codes have been made publicly available.
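The abstract above describes two key ingredients: a Mahalanobis-distance classifier built from regularized per-class covariance estimates (Simple CNAPS), and a soft k-means refinement of class parameters using unlabelled query features (Transductive CNAPS). The sketch below is a minimal NumPy illustration of these two ideas only; all function names are hypothetical, the regularization scheme is a simplified stand-in for the paper's hierarchical regularization, and no adaptive feature extractor is modelled.

```python
# Hypothetical sketch (NOT the authors' implementation): Mahalanobis-distance
# classification with shrinkage-regularized covariances, plus a soft k-means
# style refinement of class means using unlabelled query features.
import numpy as np

def fit_class_stats(feats, labels, n_classes, reg=1.0):
    """Per-class mean, with the class covariance blended toward a
    task-level covariance when the class has few support examples."""
    dim = feats.shape[1]
    task_cov = np.cov(feats, rowvar=False) + np.eye(dim)  # task-level estimate
    means, precisions = [], []
    for c in range(n_classes):
        x = feats[labels == c]
        mu = x.mean(axis=0)
        # More shots -> trust the class covariance; fewer -> fall back to task
        lam = x.shape[0] / (x.shape[0] + reg)
        cov = lam * np.cov(x, rowvar=False, bias=True) + (1 - lam) * task_cov
        means.append(mu)
        precisions.append(np.linalg.inv(cov + 1e-6 * np.eye(dim)))
    return np.stack(means), np.stack(precisions)

def mahalanobis_logits(queries, means, precisions):
    """Negative squared Mahalanobis distance to each class mean."""
    diffs = queries[:, None, :] - means[None, :, :]           # (Q, C, D)
    d2 = np.einsum('qcd,cde,qce->qc', diffs, precisions, diffs)
    return -d2

def soft_kmeans_refine(support, labels, queries, n_classes, n_iters=2):
    """Refine class means with soft assignments of unlabelled queries
    (precisions are kept fixed here for simplicity)."""
    means, precisions = fit_class_stats(support, labels, n_classes)
    for _ in range(n_iters):
        logits = mahalanobis_logits(queries, means, precisions)
        w = np.exp(logits - logits.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)                     # soft assignments
        for c in range(n_classes):
            sup = support[labels == c]
            num = sup.sum(axis=0) + w[:, c] @ queries
            den = sup.shape[0] + w[:, c].sum()
            means[c] = num / den
    return mahalanobis_logits(queries, means, precisions).argmax(axis=1)
```

On a toy two-class task with well-separated clusters, `soft_kmeans_refine` recovers the query labels; the shrinkage weight `lam` is the simplified analogue of the hierarchical regularization the abstract refers to.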
doi_str_mv 10.48550/arxiv.2201.05151
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2201.05151
language eng
recordid cdi_arxiv_primary_2201_05151
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-19T00%3A24%3A14IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Beyond%20Simple%20Meta-Learning:%20Multi-Purpose%20Models%20for%20Multi-Domain,%20Active%20and%20Continual%20Few-Shot%20Learning&rft.au=Bateni,%20Peyman&rft.date=2022-01-13&rft_id=info:doi/10.48550/arxiv.2201.05151&rft_dat=%3Carxiv_GOX%3E2201_05151%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true