Rapid Switching and Multi-Adapter Fusion via Sparse High Rank Adapters
| Main authors: | |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
| Abstract: | In this paper, we propose Sparse High Rank Adapters (SHiRA) that directly finetune 1-2% of the base model weights while leaving the others unchanged, thus resulting in a highly sparse adapter. This high sparsity incurs no inference overhead, enables rapid switching directly in the fused mode, and significantly reduces concept loss during multi-adapter fusion. Our extensive experiments on LVMs and LLMs demonstrate that finetuning merely 1-2% of the parameters in the base model is sufficient for many adapter tasks and significantly outperforms Low Rank Adaptation (LoRA). We also show that SHiRA is orthogonal to advanced LoRA methods such as DoRA and can be easily combined with existing techniques. |
| DOI: | 10.48550/arxiv.2407.16712 |
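
The abstract describes fine-tuning only 1-2% of the base-model weights so that the adapter is a sparse weight delta that can be fused into, and removed from, the base model quickly. Below is a minimal PyTorch sketch of that general idea; it is not the paper's implementation, and the random mask selection, the `SPARSITY` value, and all function names are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): fine-tune a fixed, sparse subset of
# base-model weights by masking gradients, then store the sparse weight delta
# as the adapter.
import torch
import torch.nn as nn

SPARSITY = 0.01  # assumption: train ~1% of entries, within the abstract's 1-2% range


def attach_sparse_masks(model: nn.Module, sparsity: float = SPARSITY):
    """Pick a random subset of weight entries to train and zero the gradients
    of all other entries via tensor hooks. Returns the masks so the sparse
    delta can be extracted after fine-tuning."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:            # leave biases / norm parameters frozen in this sketch
            p.requires_grad_(False)
            continue
        mask = (torch.rand_like(p) < sparsity).float()
        masks[name] = mask
        # gradients outside the mask are zeroed, so only the masked entries update
        p.register_hook(lambda grad, m=mask: grad * m)
    return masks


def extract_sparse_adapter(model: nn.Module, base_state: dict, masks: dict):
    """After fine-tuning, the adapter is just the masked (sparse) weight delta."""
    adapter = {}
    for name, p in model.named_parameters():
        if name in masks:
            delta = (p.detach() - base_state[name]) * masks[name]
            adapter[name] = delta.to_sparse()   # keep only the nonzero entries
    return adapter


def apply_adapter(model: nn.Module, adapter: dict, sign: float = 1.0):
    """Fuse (sign=+1) or un-fuse (sign=-1) the sparse delta in place.
    No extra layers are added, so inference cost is unchanged."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in adapter:
                p.add_(sign * adapter[name].to_dense())
```

In this sketch, fine-tuning proceeds as usual after `attach_sparse_masks`; a snapshot of the base weights taken beforehand (e.g. `{k: v.clone() for k, v in model.state_dict().items()}`) is what `extract_sparse_adapter` diffs against, and calling `apply_adapter` with `sign=-1.0` removes the adapter again, illustrating the rapid switching in fused mode that the abstract refers to.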