Consistent representation joint adaptive adjustment for incremental zero-shot learning


Bibliographic Details
Published in: Neurocomputing (Amsterdam), 2024-11, Vol. 606, p. 128385, Article 128385
Authors: Niu, Chang; Shang, Junyuan; Zhou, Zhiheng; Yang, Junmei
Format: Article
Language: English
Online access: Full text
Description
Abstract: Zero-shot learning aims to recognize objects from novel concepts using a model trained on seen-class data, assisted by semantic descriptions. Although this relaxes the heavy reliance on training data, it still cannot handle sequential streaming data in the open world. In this paper, we focus on incremental zero-shot learning (IZSL), where seen data arrives as a sequence of tasks. In each incremental task, only the seen data of the current task is accessible. IZSL methods aim to capture the characteristics of the current seen classes while avoiding forgetting of previous ones, and at the same time learn to generalize to unseen classes. We summarize the challenges in IZSL as semantic drift, which we further divide into task-recency bias and seen-class bias. To address these issues, we propose a novel IZSL method termed CRAA. Specifically, CRAA constructs consistent representations with satisfactory discrimination and generalization for all seen and unseen classes. Based on these representations, CRAA learns a prototype classifier with a novel adaptive adjustment strategy to alleviate the task-recency bias and seen-class bias. Note that CRAA needs only a limited memory footprint to store a fixed-scale model, meeting the demands of both memory restriction and data security in industry. We have conducted extensive experiments on three widely used datasets; the results show that our method outperforms all compared methods with significant improvements. Code is available at: https://github.com/changniu54/CRAA-Master.
•We summarize the IZSL challenges as task-recency bias and seen-class bias.
•We propose to learn consistent representations jointly with an adaptive adjustment strategy.
•The method needs only a limited memory footprint to store a fixed-scale model.
•The method meets the demands of both memory restriction and data security.
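The abstract mentions that CRAA classifies via a prototype classifier. As context for readers unfamiliar with the idea, the following is a minimal, generic sketch of a nearest-class-mean prototype classifier; it is NOT the CRAA method (which adds consistent representations and an adaptive adjustment strategy). The feature vectors, class names, and cosine-similarity decision rule here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: a generic prototype (nearest-class-mean)
# classifier, not the CRAA method itself.

def build_prototypes(features, labels):
    """Average the feature vectors of each class into a single prototype."""
    labels = np.array(labels)
    return {c: features[labels == c].mean(axis=0) for c in set(labels)}

def classify(x, prototypes):
    """Assign x to the class whose prototype has the highest cosine similarity."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(prototypes, key=lambda c: cos(x, prototypes[c]))

# Toy example: two seen classes in a 2-D feature space.
feats = np.array([[1.0, 0.1], [0.9, 0.0], [0.0, 1.0], [0.1, 0.9]])
labels = ["cat", "cat", "dog", "dog"]
protos = build_prototypes(feats, labels)
print(classify(np.array([0.95, 0.05]), protos))  # prints "cat"
```

In an incremental setting, such prototypes can be extended task by task (one new prototype per new class) without storing old raw data, which is consistent with the paper's stated memory and data-security constraints.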
ISSN:0925-2312
DOI:10.1016/j.neucom.2024.128385