Continual Learning: Applications and the Road Forward
Transactions on Machine Learning Research (TMLR), 2024
| Format | Article |
| --- | --- |
| Language | English |
| Online access | Order full text |
Abstract: Continual learning is a subfield of machine learning that aims to let models learn continually from new data, accumulating knowledge without forgetting what was learned in the past. In this work, we take a step back and ask: "Why should one care about continual learning in the first place?". We set the stage by examining recent continual learning papers published at four major machine learning conferences, and show that memory-constrained settings dominate the field. We then discuss five open problems in machine learning and show that, even though they might seem unrelated to continual learning at first sight, continual learning will inevitably be part of their solution. These problems are model editing, personalization and specialization, on-device learning, faster (re-)training, and reinforcement learning. Finally, by comparing the desiderata of these unsolved problems with the current assumptions in continual learning, we highlight and discuss four future directions for continual learning research. We hope this work offers an interesting perspective on the future of continual learning, showing its potential value and the paths we must pursue to make it successful. This work is the result of the many discussions the authors had at the Dagstuhl seminar on Deep Continual Learning in March 2023.
DOI: 10.48550/arxiv.2311.11908