SDM-Net: A Simple and Effective Model for Generalized Zero-Shot Learning
Format: Article
Language: English
Abstract: Zero-Shot Learning (ZSL) is a classification task in which not even a single labeled training example is available for a set of unseen classes. Instead, we only have prior information (a description) about the seen and unseen classes, often in the form of physically realizable or descriptive attributes. The lack of any training example from the unseen classes prohibits the use of standard classification techniques and losses, including the popular cross-entropy loss. Current state-of-the-art approaches encode the prior class information into dense vectors and optimize a distance between the learned projection of the input vector and the corresponding class vector (collectively known as embedding models). In this paper, we propose a novel architecture that casts zero-shot learning as training a standard neural network with cross-entropy loss. During training, our approach performs soft-labeling by combining the observed training data for the seen classes with similarity information derived from the attributes of the unseen classes, for which no training data are available. To the best of our knowledge, such similarity-based soft-labeling has not been explored in deep learning. We evaluate the proposed model on four benchmark datasets for zero-shot learning (AwA, aPY, SUN, and CUB) and show that it consistently achieves significant improvements over state-of-the-art methods in both the Generalized-ZSL and ZSL settings on all of these datasets.
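The soft-labeling idea mentioned in the abstract can be illustrated with a small sketch. The following is a minimal NumPy example of one plausible reading: for a training image of a seen class, most of the target probability mass stays on that class, and the remainder is spread over the unseen classes in proportion to attribute similarity, so the resulting soft target can be used with an ordinary cross-entropy loss over all (seen + unseen) classes. The function name, the cosine-similarity choice, and the `seen_mass` / `temperature` hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_target(attrs_seen, attrs_unseen, seen_idx, seen_mass=0.8, temperature=1.0):
    """Build a soft target over all (seen + unseen) classes for one training
    example whose ground-truth seen class is `seen_idx`.

    Most of the probability mass (`seen_mass`) stays on the observed seen
    class; the rest is distributed over the unseen classes via a softmax over
    the cosine similarity between their attribute vectors and the attribute
    vector of the true seen class. (Illustrative sketch, not the paper's
    exact recipe.)
    """
    a = attrs_seen[seen_idx]                          # attribute vector of the true class
    sims = attrs_unseen @ a / (
        np.linalg.norm(attrs_unseen, axis=1) * np.linalg.norm(a) + 1e-12
    )                                                 # cosine similarity to each unseen class
    weights = np.exp(sims / temperature)
    weights /= weights.sum()

    n_seen, n_unseen = len(attrs_seen), len(attrs_unseen)
    target = np.zeros(n_seen + n_unseen)
    target[seen_idx] = seen_mass                      # hard part: the observed label
    target[n_seen:] = (1.0 - seen_mass) * weights     # soft part: attribute similarity
    return target

# Toy usage: 3 seen and 2 unseen classes with 4-dimensional attributes.
rng = np.random.default_rng(0)
attrs_seen = rng.random((3, 4))
attrs_unseen = rng.random((2, 4))
t = soft_target(attrs_seen, attrs_unseen, seen_idx=1)
print(t, t.sum())  # a valid distribution over 5 classes, summing to 1
```

Training would then minimize cross-entropy between the network's softmax output over all seen and unseen classes and such a soft target, which is what allows the standard loss to be used despite the unseen classes having no labeled examples.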
DOI: 10.48550/arxiv.1909.04790