Few-Shot Scene Classification Using Auxiliary Objectives and Transductive Inference
Saved in:
Published in: | IEEE Geoscience and Remote Sensing Letters 2022, Vol.19, p.1-5 |
Main Authors: | , , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Abstract: | Few-shot learning features the capability of generalizing from very few examples. To realize few-shot scene classification of optical remote sensing images, we propose a two-stage framework that first learns a general-purpose representation and then propagates knowledge in a transductive paradigm. Concretely, the first stage jointly learns a semantic class prediction task and two auxiliary objectives in a multitask model: rotation prediction estimates the 2-D transformation applied to an input, and contrastive prediction pulls positive pairs together while pushing negative pairs apart. The second stage aims to find an expected prototype with minimal distance to all samples of the same class. In particular, label propagation (LP) is applied to make a joint prediction for both labeled and unlabeled data. The labeled set is then expanded with the pseudo-labeled samples, forming a rectified prototype that supports better nearest-neighbor classification. Extensive experiments on standard benchmarks, including the remote sensing image scene classification dataset with 45 classes published by Northwestern Polytechnical University (NWPU-RESISC45), the Aerial Image Dataset (AID), and the remote sensing image scene classification dataset with 19 classes published by Wuhan University (WHU-RS19), demonstrate that our method is effective and achieves the best performance, significantly outperforming many state-of-the-art approaches. |
ISSN: | 1545-598X, 1558-0571 |
DOI: | 10.1109/LGRS.2022.3190925 |
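
The abstract above outlines a two-stage pipeline: multitask representation learning with rotation and contrastive auxiliary objectives, followed by transductive inference with label propagation and prototype rectification. The sketch below is one plausible reading of that pipeline in PyTorch; the backbone, the heads (`cls_head`, `rot_head`, `proj_head`), the function names, the hyperparameters (`tau`, `alpha`, `sigma`, `k_pseudo`), the rotated-view positive pairing, and the Gaussian-kernel LP graph are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two-stage recipe summarized in the abstract.
# Backbone, heads, and hyperparameters are illustrative assumptions,
# not the authors' released implementation.
import torch
import torch.nn.functional as F


def stage1_losses(backbone, cls_head, rot_head, proj_head, images, labels, tau=0.1):
    """First stage: joint semantic classification, rotation prediction,
    and contrastive prediction in one multitask loss."""
    B = images.size(0)
    # Rotation prediction: classify each image's 2-D rotation (0/90/180/270 deg).
    rot_labels = torch.randint(0, 4, (B,), device=images.device)
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, rot_labels)])
    feats = backbone(images)        # (B, D) embeddings of the original views
    rot_feats = backbone(rotated)   # (B, D) embeddings of the rotated views
    loss_cls = F.cross_entropy(cls_head(feats), labels)
    loss_rot = F.cross_entropy(rot_head(rot_feats), rot_labels)

    # Contrastive prediction: pull an image and its rotated view together,
    # push all other pairs apart (an assumed NT-Xent-style formulation).
    z = F.normalize(proj_head(torch.cat([feats, rot_feats])), dim=1)   # (2B, D)
    sim = z @ z.t() / tau
    sim = sim.masked_fill(torch.eye(2 * B, dtype=torch.bool, device=z.device),
                          float('-inf'))
    pos = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)]).to(z.device)
    loss_con = F.cross_entropy(sim, pos)
    return loss_cls + loss_rot + loss_con


@torch.no_grad()
def stage2_predict(support, support_labels, query, n_way,
                   alpha=0.99, sigma=1.0, k_pseudo=5):
    """Second stage: label propagation over support + query, pseudo-label
    expansion, rectified prototypes, and nearest-prototype classification."""
    n_s = support.size(0)
    feats = F.normalize(torch.cat([support, query]), dim=1)            # (N, D)
    # Gaussian-kernel affinity graph, symmetrically normalized for LP.
    W = torch.exp(-torch.cdist(feats, feats) ** 2 / (2 * sigma ** 2))
    W.fill_diagonal_(0)
    d = W.sum(1).clamp_min(1e-12).rsqrt()
    S = d[:, None] * W * d[None, :]
    # One-hot seeds for the labeled support set, zeros for the queries;
    # closed-form propagation F* = (I - alpha * S)^{-1} Y.
    Y = torch.zeros(feats.size(0), n_way, device=feats.device)
    Y[torch.arange(n_s), support_labels] = 1.0
    eye = torch.eye(feats.size(0), device=feats.device)
    F_star = torch.linalg.solve(eye - alpha * S, Y)
    query_scores = F_star[n_s:]
    pseudo = query_scores.argmax(1)

    # Rectify each prototype with the most confident pseudo-labeled queries.
    protos = []
    for c in range(n_way):
        idx = query_scores[:, c].topk(min(k_pseudo, query.size(0))).indices
        extra = query[idx[pseudo[idx] == c]]
        protos.append(torch.cat([support[support_labels == c], extra]).mean(0))
    protos = F.normalize(torch.stack(protos), dim=1)
    # Nearest-prototype (cosine) classification of the query set.
    return (F.normalize(query, dim=1) @ protos.t()).argmax(1)
```

In an episodic evaluation, `support` and `query` would be the backbone embeddings of an episode's few labeled and many unlabeled images, with `support_labels` taking values in `0..n_way-1`.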