Computational Limits of Low-Rank Adaptation (LoRA) for Transformer-Based Models
Format: Article
Language: English
Online access: Order full text
Abstract: We study the computational limits of the Low-Rank Adaptation (LoRA) update for
finetuning transformer-based models using fine-grained complexity theory. Our
key observation is that the existence of low-rank decompositions within the
gradient computation of LoRA adaptation leads to possible algorithmic speedup.
This allows us to (i) identify a phase transition behavior and (ii) prove the
existence of nearly linear algorithms by controlling the LoRA update
computation term by term, assuming the Strong Exponential Time Hypothesis
(SETH). For the former, we identify a sharp transition in the efficiency of all
possible rank-$r$ LoRA update algorithms for transformers, based on specific
norms resulting from the multiplications of the input sequence $\mathbf{X}$,
pretrained weights $\mathbf{W^\star}$, and adapter matrices $\alpha \mathbf{B}
\mathbf{A} / r$. Specifically, we derive a shared upper bound threshold for
such norms and show that efficient (sub-quadratic) approximation algorithms of
LoRA exist only below this threshold. For the latter, we prove the existence of
nearly linear approximation algorithms for LoRA adaptation by utilizing the
hierarchical low-rank structures of LoRA gradients and approximating the
gradients with a series of chained low-rank approximations. To showcase our
theory, we consider two practical scenarios: partial (e.g., only $\mathbf{W}_V$
and $\mathbf{W}_Q$) and full adaptations (e.g., $\mathbf{W}_Q$, $\mathbf{W}_V$,
and $\mathbf{W}_K$) of weights in attention heads.
DOI: 10.48550/arxiv.2406.03136
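To make the adapter form referenced in the abstract concrete, below is a minimal NumPy sketch (not taken from the paper) of a rank-$r$ LoRA update applied to attention weights, where each adapted weight has the form $\mathbf{W}^\star + \alpha \mathbf{B}\mathbf{A}/r$. The dimensions, the scaling $\alpha$, the initialisation, and all helper names are illustrative assumptions; the printed entry-wise magnitude of $\mathbf{X}\mathbf{W}$ only loosely mirrors the kind of norm quantity whose threshold the paper's phase-transition result is stated in terms of.

```python
# Illustrative sketch only: rank-r LoRA adapters on attention weights.
# All shapes, values, and function names here are assumptions for demonstration.
import numpy as np

def softmax(Z):
    # Row-wise softmax for the attention matrix.
    Z = Z - Z.max(axis=-1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=-1, keepdims=True)

def lora_adapted_weight(W_star, B, A, alpha, r):
    # W* + alpha * B A / r : frozen pretrained weight plus the rank-r adapter.
    return W_star + (alpha / r) * (B @ A)

def attention_with_lora(X, W_Q, W_K, W_V, adapters, alpha=8.0, r=4):
    # Partial adaptation: only the weights named in `adapters` receive a LoRA
    # update (e.g. {"Q": (B_Q, A_Q), "V": (B_V, A_V)}); the rest stay frozen.
    W = {"Q": W_Q, "K": W_K, "V": W_V}
    for name, (B, A) in adapters.items():
        W[name] = lora_adapted_weight(W[name], B, A, alpha, r)
    Q, K, V = X @ W["Q"], X @ W["K"], X @ W["V"]
    return softmax(Q @ K.T / np.sqrt(X.shape[1])) @ V

# Toy usage with assumed sizes: n tokens, width d, adapter rank r.
n, d, r = 64, 32, 4
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d)) / np.sqrt(d)
W_Q, W_K, W_V = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
B_Q, A_Q = rng.normal(size=(d, r)), np.zeros((r, d))  # A starts at zero, as in LoRA
out = attention_with_lora(X, W_Q, W_K, W_V, {"Q": (B_Q, A_Q)}, alpha=8.0, r=r)
print(out.shape, np.abs(X @ W_Q).max())
```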