Towards Zero-Shot Learning: A Brief Review and an Attention-Based Embedding Network



Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2023-03, Vol. 33 (3), pp. 1181-1197
Authors: Xie, Guo-Sen; Zhang, Zheng; Xiong, Huan; Shao, Ling; Li, Xuelong
Format: Article
Language: English
Abstract: Zero-shot learning (ZSL), an emerging topic in recent years, aims to recognize images of unseen classes by training a classifier only on images from seen classes. Existing works often build embeddings between the global feature space and the attribute space, which, however, neglect the discriminative information carried by image parts. Such information is usually contained in local parts; e.g., the black-and-white striped area of a zebra is the key difference from a horse. As such, image parts can facilitate knowledge transfer between seen and unseen categories. In this paper, we first conduct a brief review of ZSL with detailed descriptions of representative methods. Next, to discover meaningful parts, we propose an end-to-end attention-based embedding network for ZSL, which contains two sub-streams: the attention part embedding (APE) stream and the attention second-order embedding (ASE) stream. APE discovers multiple image parts via attention. ASE ensures stable knowledge transfer through second-order collaboration. Furthermore, an adaptive thresholding strategy is proposed to suppress noisy and redundant parts. Finally, a global branch is incorporated to make full use of global information. Experiments on four benchmarks demonstrate that our models achieve superior results under both ZSL and generalized ZSL (GZSL) settings.
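To make the APE idea concrete, the following is a minimal NumPy sketch of attention-based part discovery with adaptive thresholding, under assumptions: each part is scored by a learned per-part weight vector (a 1x1 convolution), attention is a softmax over spatial locations, and the "adaptive threshold" is taken to be the per-part mean attention. The function name, signature, and thresholding rule are illustrative, not the paper's exact formulation.

```python
import numpy as np

def attention_part_embedding(feat, part_weights, tau=None):
    """Sketch of APE-style part discovery.

    feat:         (C, H, W) convolutional feature map
    part_weights: (K, C)    one learned 1x1-conv weight vector per part
    tau:          optional fixed threshold; default is the adaptive
                  per-part mean attention (an assumption, see lead-in)
    Returns:      (K, C) attention-pooled part features
    """
    C, H, W = feat.shape
    flat = feat.reshape(C, H * W)                      # (C, HW)
    logits = part_weights @ flat                       # (K, HW) part scores
    # Softmax over spatial locations gives each part an attention map.
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    # Adaptive thresholding: suppress locations whose attention falls
    # below the per-part mean, then renormalize the surviving weights.
    thresh = attn.mean(axis=1, keepdims=True) if tau is None else tau
    attn = np.where(attn >= thresh, attn, 0.0)
    attn /= attn.sum(axis=1, keepdims=True) + 1e-12
    # Weighted pooling of the feature map yields one vector per part.
    return attn @ flat.T                               # (K, C)
```

In a full model, the resulting part features would be projected into the attribute space and compared against class attribute vectors; the thresholding step is what filters out noisy or redundant spatial responses before pooling.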
ISSN:1051-8215
1558-2205
DOI:10.1109/TCSVT.2022.3208071