Improving Zero-Shot Learning Baselines with Commonsense Knowledge

Bibliographic Details
Published in: Cognitive Computation 2022-11, Vol. 14 (6), p. 2212-2222
Authors: Roy, Abhinaba; Ghosal, Deepanway; Cambria, Erik; Majumder, Navonil; Mihalcea, Rada; Poria, Soujanya
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Zero-shot learning — the problem of training and testing on completely disjoint sets of classes — depends heavily on the ability to transfer knowledge from training classes to test classes. Traditionally, semantic embeddings consisting of human-defined attributes or distributed word embeddings are used to facilitate this transfer by improving the association between visual and semantic embeddings. In this paper, we take advantage of explicit relations between nodes defined in ConceptNet, a commonsense knowledge graph, to generate commonsense embeddings of the class labels by using a graph convolution network-based autoencoder. Our experiments on three standard benchmark datasets surpass strong baselines when we fuse our commonsense embeddings with existing semantic embeddings, i.e., human-defined attributes and distributed word embeddings. This work paves the path to more brain-inspired approaches to zero-shot learning.
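The core idea in the abstract — running a graph convolution network (GCN) autoencoder over a knowledge graph such as ConceptNet to obtain class-label embeddings, then fusing them with existing semantic embeddings — can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' actual model: the toy graph, one-hot features, layer dimensions, random weights, and the simple concatenation-based fusion are all assumptions made for clarity.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric GCN normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_autoencoder_embed(A, X, dims=(16, 8), seed=0):
    # Encoder: two GCN layers, H = ReLU(Â X W1), Z = Â H W2.
    # Z holds one embedding per graph node (here, per class label).
    rng = np.random.default_rng(seed)
    A_norm = normalize_adj(A)
    W1 = rng.standard_normal((X.shape[1], dims[0])) * 0.1
    W2 = rng.standard_normal((dims[0], dims[1])) * 0.1
    H = np.maximum(A_norm @ X @ W1, 0.0)
    Z = A_norm @ H @ W2
    # Decoder: inner-product reconstruction of edge probabilities,
    # sigmoid(Z Z^T); training would push this toward the true adjacency.
    A_rec = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))
    return Z, A_rec

# Toy stand-in for a ConceptNet subgraph: 4 class-label nodes in a ring,
# with one-hot node features (both are illustrative assumptions).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = np.eye(4)
Z, A_rec = gcn_autoencoder_embed(A, X)

# Fusion with an existing semantic embedding (e.g. attributes or word
# vectors), here simulated by a random matrix: simple concatenation.
semantic = np.random.default_rng(1).standard_normal((4, 5))
fused = np.concatenate([Z, semantic], axis=1)  # shape (4, 8 + 5)
```

In a trained version, the decoder's reconstruction loss against the real adjacency would be minimized by gradient descent; the sketch only shows the forward pass and the shape of the fused representation.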
ISSN: 1866-9956, 1866-9964
DOI: 10.1007/s12559-022-10044-0