RE-Adapt: Reverse Engineered Adaptation of Large Language Models
Format: Article
Language: English
Abstract: We introduce RE-Adapt, an approach to fine-tuning large language models on new domains without degrading any pre-existing instruction-tuning. We reverse engineer an adapter which isolates what an instruction-tuned model has learned beyond its corresponding pretrained base model. Importantly, this requires no additional data or training. We can then fine-tune the base model on a new domain and readapt it to instruction following with the reverse-engineered adapter. RE-Adapt and our low-rank variant LoRE-Adapt both outperform other methods of fine-tuning, across multiple popular LLMs and datasets, even when the models are used in conjunction with retrieval-augmented generation.
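The core idea in the abstract can be sketched in a few lines: treat the instruction-tuning "adapter" as the elementwise difference between the instruction-tuned and base weights, fine-tune the base on a new domain, then add the recovered adapter back. The sketch below uses small NumPy arrays as stand-in weight matrices; all variable names are illustrative assumptions, not identifiers from the paper's actual code, and a real implementation would operate over full model state dicts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in weights for a single layer (hypothetical, for illustration only).
base_W = rng.standard_normal((4, 4))                    # pretrained base model
instruct_W = base_W + 0.1 * rng.standard_normal((4, 4))  # instruction-tuned model

# Reverse-engineered adapter: what instruction tuning added beyond the base.
# Requires no data or training, only the two checkpoints.
adapter = instruct_W - base_W

# Fine-tune the base on a new domain (simulated here as another small update).
domain_W = base_W + 0.05 * rng.standard_normal((4, 4))

# Re-adapt: apply the recovered instruction adapter to the domain model,
# restoring instruction-following behavior on top of the new-domain knowledge.
readapted_W = domain_W + adapter
```

The low-rank variant, LoRE-Adapt, would replace `adapter` with a low-rank approximation of the same weight difference (e.g. via truncated SVD) before re-applying it.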
DOI: 10.48550/arxiv.2405.15007