Mitigating Catastrophic Forgetting in Language Transfer via Model Merging
Format: Article
Language: English
Abstract: As open-weight large language models (LLMs) achieve ever more impressive performance across a wide range of tasks in English, practitioners aim to adapt these models to other languages. However, such language adaptation is often accompanied by catastrophic forgetting of the base model's capabilities, severely limiting the usefulness of the resulting model. We address this issue by proposing Branch-and-Merge (BaM), a new adaptation method based on iteratively merging multiple models, each fine-tuned on a subset of the available training data. BaM builds on the insight that this yields lower-magnitude but higher-quality weight changes, reducing forgetting of the source domain while maintaining learning on the target domain. In an extensive empirical study on Bulgarian and German, we demonstrate that BaM can significantly reduce forgetting while matching or even improving target-domain performance compared to both standard continued pretraining and instruction finetuning, across different model architectures.
DOI: 10.48550/arxiv.2407.08699
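The iterative branch-and-merge loop described in the abstract can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: `fine_tune` is a hypothetical stand-in (real BaM fine-tunes an LLM on each data slice), weights are plain Python lists, and the merge is a simple element-wise average of the branch weights.

```python
def merge_weights(branches):
    """Element-wise average of the weight vectors of several branches."""
    n = len(branches)
    return [sum(ws) / n for ws in zip(*branches)]

def fine_tune(weights, data_slice, lr=0.1):
    """Toy stand-in for fine-tuning: nudge each weight toward the slice mean.
    In BaM this would be full continued pretraining on one data subset."""
    target = sum(data_slice) / len(data_slice)
    return [w + lr * (target - w) for w in weights]

def branch_and_merge(weights, data, n_branches=2, n_iters=3):
    """One BaM-style loop: split the data, fine-tune one branch per slice
    from the current weights, then merge the branches back into one model."""
    for _ in range(n_iters):
        k = len(data) // n_branches
        slices = [data[i * k:(i + 1) * k] for i in range(n_branches)]
        branches = [fine_tune(weights, s) for s in slices]
        weights = merge_weights(branches)
    return weights
```

The intuition from the abstract maps onto the sketch directly: each branch sees only a subset of the data, so its weight delta from the shared starting point is small, and averaging the branches keeps the high-quality common direction while canceling branch-specific noise.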