Logarithmic Continual Learning

Bibliographic Details
Published in: IEEE Access, 2022, Vol. 10, p. 1-1
Main authors: Masarczyk, Wojciech; Wawrzynski, Pawel; Marczak, Daniel; Deja, Kamil; Trzcinski, Tomasz
Format: Article
Language: English
Online access: Full text
Description
Abstract: We introduce a neural network architecture that logarithmically reduces the number of self-rehearsal steps in the generative rehearsal of continually learned models. In continual learning (CL), training samples arrive in subsequent tasks, and the trained model can access only the current task. Contemporary CL methods employ generative models to replay previous samples and train them recursively with a combination of current and regenerated past data. This recurrence leads to superfluous computation, as the same past samples are regenerated after each task and their reconstruction quality successively degrades. In this work, we address these limitations and propose a new generative rehearsal architecture that requires at most a logarithmic number of retraining sessions for each sample. Our approach allocates past data across a set of generative models such that most of them do not require retraining after a task. The experimental evaluation of our logarithmic continual learning approach shows the superiority of our method over state-of-the-art generative rehearsal methods.
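
The abstract describes the allocation scheme only at a high level, so the sketch below is an illustrative assumption rather than the authors' implementation: one simple way to keep the per-sample rehearsal count logarithmic across a set of generative models is a binary-counter-style merge schedule, in which a fresh model is trained on each new task and two models of the same size class are merged (their data replayed into one larger model) before moving up a level. The `simulate` helper and its bookkeeping names are hypothetical and stand in for the actual generative models.

```python
# Illustrative sketch (assumed scheme, not the paper's implementation):
# a binary-counter / merge-style schedule over generative models.
# Each "model" is represented only by the set of task ids it covers;
# retrain_counts tracks how many times each task's data is replayed
# into a newly trained model.

from collections import defaultdict

def simulate(num_tasks):
    levels = {}                       # level -> task ids covered by the model at that level
    retrain_counts = defaultdict(int)

    for task in range(num_tasks):
        carry = {task}                # a fresh model trained only on the current task
        level = 0
        # Merge models of equal size class, like carry propagation in binary addition.
        while level in levels:
            carry |= levels.pop(level)     # replay both models' data into one larger model
            for t in carry:
                retrain_counts[t] += 1     # every covered task is rehearsed once per merge
            level += 1
        levels[level] = carry

    return retrain_counts

counts = simulate(64)
print(max(counts.values()))           # 6 == log2(64): no task is rehearsed more than log2(T) times
```

Under this merge rule, any single task's data is replayed into a new model at most about log2(T) times for T tasks, and most models remain untouched after a given task, which is consistent with the logarithmic retraining bound described in the abstract.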
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3218907