Few-Shot Scene Classification Using Auxiliary Objectives and Transductive Inference

Detailed Description

Few-shot learning is the capability of generalizing from very few examples. To realize few-shot scene classification of optical remote sensing images, we propose a two-stage framework that first learns a general-purpose representation and then propagates knowledge in a transductive paradigm. Concretely, the first stage jointly learns a semantic class prediction task together with two auxiliary objectives in a multitask model: rotation prediction estimates the 2-D transformation applied to an input, and contrastive prediction pulls positive pairs together while pushing negative pairs apart. The second stage seeks an expected prototype with minimal distance to all samples of the same class. In particular, label propagation (LP) is applied to make a joint prediction over both labeled and unlabeled data; the labeled set is then expanded with the pseudo-labeled samples, forming a rectified prototype that improves nearest-neighbor classification. Extensive experiments on standard benchmarks, including the 45-class remote sensing scene classification dataset published by Northwestern Polytechnical University (NWPU-RESISC45), the Aerial Image Dataset (AID), and the 19-class remote sensing scene classification dataset published by Wuhan University (WHU-RS19), demonstrate that our method is effective and significantly outperforms many state-of-the-art approaches.
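The first stage described above combines semantic classification with the two auxiliary objectives. The letter itself provides no code; the following PyTorch-style sketch only illustrates how such a combined objective could be composed. The module names (`backbone`, `cls_head`, `rot_head`, `proj_head`), the equal loss weighting, the `temperature` value, and the placeholder `augment` function are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def augment(x):
    # Placeholder augmentation (random horizontal flip); a real pipeline
    # would use crops, color jitter, etc. This helper is hypothetical.
    flip = torch.rand(x.size(0), 1, 1, 1, device=x.device) < 0.5
    return torch.where(flip, x.flip(-1), x)


def multitask_loss(backbone, cls_head, rot_head, proj_head,
                   images, labels, temperature=0.5):
    """Stage-one objective sketch: semantic class prediction plus the two
    auxiliary objectives (rotation prediction and contrastive prediction).
    Equal loss weighting is an assumption."""
    # Semantic class prediction on the original images.
    loss_cls = F.cross_entropy(cls_head(backbone(images)), labels)

    # Rotation prediction: rotate each (square) image by 0/90/180/270
    # degrees and predict which 2-D transformation was applied.
    rot = torch.randint(0, 4, (images.size(0),), device=images.device)
    rotated = torch.stack([torch.rot90(img, int(k), dims=(-2, -1))
                           for img, k in zip(images, rot)])
    loss_rot = F.cross_entropy(rot_head(backbone(rotated)), rot)

    # Contrastive prediction: two augmented views of the same image form
    # a positive pair (the diagonal of the similarity matrix); all other
    # pairings are negatives that get pushed apart (NT-Xent style).
    z1 = F.normalize(proj_head(backbone(augment(images))), dim=1)
    z2 = F.normalize(proj_head(backbone(augment(images))), dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=images.device)
    loss_con = F.cross_entropy(logits, targets)

    return loss_cls + loss_rot + loss_con
```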

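The second stage, as described, propagates labels jointly over labeled and unlabeled samples, expands the labeled set with pseudo-labels, and classifies queries against the rectified prototypes. Below is a minimal NumPy sketch of that flow under stated assumptions: the Gaussian affinity graph, the propagation coefficient `alpha`, and the bandwidth `sigma` are generic label-propagation choices assumed for illustration, not values taken from the letter.

```python
import numpy as np


def transductive_predict(support, support_y, query, n_classes,
                         alpha=0.99, sigma=1.0):
    """Stage-two sketch: label propagation (LP) over all features, then
    rectified-prototype nearest-neighbor classification. Hyperparameters
    here are assumptions, not the authors' settings."""
    X = np.vstack([support, query])            # (n, d) support + query
    n, n_s = len(X), len(support)

    # Gaussian affinity graph with symmetric normalization.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    inv_sqrt_deg = 1.0 / np.sqrt(W.sum(1) + 1e-12)
    S = W * inv_sqrt_deg[:, None] * inv_sqrt_deg[None, :]

    # Closed-form label propagation, F = (I - alpha * S)^{-1} Y:
    # a joint prediction for labeled and unlabeled samples alike.
    Y = np.zeros((n, n_classes))
    Y[np.arange(n_s), support_y] = 1.0
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)
    pseudo_y = F.argmax(1)
    pseudo_y[:n_s] = support_y                 # keep ground-truth support labels

    # Rectified prototypes: class means over the expanded labeled set
    # (support plus pseudo-labeled queries).
    protos = np.stack([X[pseudo_y == c].mean(0) if (pseudo_y == c).any()
                       else support[support_y == c].mean(0)
                       for c in range(n_classes)])

    # Nearest-neighbor classification against the rectified prototypes.
    dists = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return dists.argmin(1)
```

In a few-shot episode, `support`/`support_y` would be the embedded labeled shots and `query` the embedded unlabeled samples of the same episode.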
Bibliographic Details
Published in: IEEE Geoscience and Remote Sensing Letters, 2022, Vol. 19, p. 1-5
Main Authors: Ji, Hong; Yang, Hong; Gao, Zhi; Li, Can; Wan, Yu; Cui, Jinqiang
Format: Article
Language: English
Subjects: Benchmarks; Classification; Datasets; Feature extraction; Few-shot scene classification; Image classification; Knowledge representation; Label propagation (LP); Nearest-neighbor; Optical remote sensing image; Optical sensors; Predictions; Pretext task; Prototypes; Rectified prototype; Remote sensing; Semantics; Task analysis; Training; Transductive inference
Online Access: Order full text
DOI: 10.1109/LGRS.2022.3190925
ISSN: 1545-598X
EISSN: 1558-0571
Source: IEEE/IET Electronic Library
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-07T19%3A31%3A22IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Few-Shot%20Scene%20Classification%20Using%20Auxiliary%20Objectives%20and%20Transductive%20Inference&rft.jtitle=IEEE%20geoscience%20and%20remote%20sensing%20letters&rft.au=Ji,%20Hong&rft.date=2022&rft.volume=19&rft.spage=1&rft.epage=5&rft.pages=1-5&rft.issn=1545-598X&rft.eissn=1558-0571&rft.coden=IGRSBY&rft_id=info:doi/10.1109/LGRS.2022.3190925&rft_dat=%3Cproquest_RIE%3E2692814410%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2692814410&rft_id=info:pmid/&rft_ieee_id=9829871&rfr_iscdi=true