Recent Advances of Foundation Language Models-based Continual Learning: A Survey

Bibliographic Details
Published in: ACM Computing Surveys, May 2025, Vol. 57 (5), pp. 1-38
Authors: Yang, Yutao; Zhou, Jie; Ding, Xuanwen; Huai, Tianyu; Liu, Shunyu; Chen, Qin; Xie, Yuan; He, Liang
Format: Article
Language: English
Abstract
Recently, foundation language models (LMs) have achieved significant advances in the domains of natural language processing (NLP) and computer vision (CV). Unlike traditional neural network models, foundation LMs acquire strong transfer-learning ability by absorbing rich commonsense knowledge during pre-training on extensive unsupervised datasets with a vast number of parameters. Despite these capabilities, LMs still suffer from catastrophic forgetting, which hinders their ability to learn continuously as humans do. To address this, continual learning (CL) methodologies have been introduced, allowing LMs to adapt to new tasks while retaining previously learned knowledge. However, a systematic taxonomy of existing approaches and a comparison of their performance are still lacking. In this paper, we present a comprehensive review, summarization, and classification of the existing literature on CL-based approaches applied to foundation language models, including pre-trained language models (PLMs), large language models (LLMs), and vision-language models (VLMs). We divide these studies into offline and online CL, which in turn comprise traditional methods, parameter-efficient methods, instruction tuning-based methods, and continual pre-training methods. Additionally, we outline the typical datasets and metrics employed in CL research and provide a detailed analysis of the challenges and future directions for LM-based continual learning.
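To make the catastrophic-forgetting problem and the "traditional methods" category concrete, the following is a minimal sketch of experience replay, one standard traditional CL technique, assuming a PyTorch classification setting. It is a hypothetical illustration, not code from the survey; the names ReplayBuffer and train_task are invented for this example.

import random
import torch
import torch.nn as nn

class ReplayBuffer:
    # Reservoir-style buffer holding a bounded, uniform sample of past examples.
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling: every example seen so far survives with equal probability.
            i = random.randrange(self.seen)
            if i < self.capacity:
                self.data[i] = (x, y)

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

def train_task(model, loader, buffer, optimizer, replay_k=8):
    # Train on one task while rehearsing examples stored from earlier tasks.
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for x, y in loader:
        xs, ys = [x], [y]
        for rx, ry in buffer.sample(replay_k):
            xs.append(rx)
            ys.append(ry)
        inputs, targets = torch.cat(xs), torch.cat(ys)
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
        # Store the current batch, one example at a time, for future rehearsal.
        for i in range(x.size(0)):
            buffer.add(x[i:i+1], y[i:i+1])

Reservoir sampling keeps the buffer an unbiased sample of the whole training stream, so rehearsal does not over-weight any single past task; replay-based methods of this kind are one branch of the taxonomy the survey reviews, alongside regularization- and architecture-based approaches.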
ISSN: 0360-0300, 1557-7341
DOI: 10.1145/3705725