Parameter-Efficient Fine-Tuning Design Spaces
Saved in:

Main authors:
Format: Article
Language: English
Subject headings:
Online access: Order full text
Abstract: Parameter-efficient fine-tuning aims to achieve performance comparable to full fine-tuning while training far fewer parameters. Several strategies (e.g., Adapters, prefix tuning, BitFit, and LoRA) have been proposed, but their designs are hand-crafted separately, and it remains unclear whether certain design patterns exist for parameter-efficient fine-tuning. We therefore present a parameter-efficient fine-tuning design paradigm and discover design patterns that apply across different experimental settings. Instead of designing yet another individual tuning strategy, we introduce parameter-efficient fine-tuning design spaces that parameterize tuning structures and tuning strategies. Any design space is characterized by four components: layer grouping, trainable-parameter allocation, tunable groups, and strategy assignment. Starting from an initial design space, we progressively refine the space based on the model quality of each design choice, making a greedy selection over these four components at each stage. We discover the following design patterns: (i) group layers in a spindle pattern; (ii) allocate trainable parameters to layers uniformly; (iii) tune all the groups; (iv) assign proper tuning strategies to different groups. These design patterns yield new parameter-efficient fine-tuning methods, which we show experimentally to consistently and significantly outperform the investigated parameter-efficient fine-tuning strategies across different backbone models and different natural language processing tasks.
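To make the four design-space components concrete, below is a minimal Python sketch of how one such configuration could be represented: layers grouped in a spindle pattern, a uniform per-layer trainable-parameter budget, all groups tuned, and a tuning strategy assigned to each group. The group sizes, parameter budget, and strategy assignment are illustrative assumptions, not the paper's exact choices.

```python
# Illustrative sketch of a parameter-efficient fine-tuning design-space
# configuration: (1) spindle layer grouping, (2) uniform trainable-parameter
# allocation per layer, (3) all groups tuned, (4) per-group strategy assignment.
# Group sizes, the budget, and the strategy names are assumed, not from the paper.

from dataclasses import dataclass
from typing import List


@dataclass
class GroupConfig:
    layers: List[int]       # indices of backbone layers in this group
    params_per_layer: int   # trainable-parameter budget for each layer
    strategy: str           # tuning strategy applied to every layer in the group


def spindle_groups(num_layers: int, num_groups: int = 4) -> List[List[int]]:
    """Split layer indices into a spindle pattern: smaller outer groups,
    larger inner groups (one assumed instantiation of spindle grouping)."""
    # Weight inner groups twice as heavily as the first and last group.
    weights = [1 if g in (0, num_groups - 1) else 2 for g in range(num_groups)]
    total = sum(weights)
    sizes = [max(1, round(num_layers * w / total)) for w in weights]
    sizes[-1] += num_layers - sum(sizes)  # absorb rounding error
    groups, start = [], 0
    for size in sizes:
        groups.append(list(range(start, start + size)))
        start += size
    return groups


def build_design(num_layers: int, total_budget: int) -> List[GroupConfig]:
    """Assemble a design: uniform allocation, all groups tuned, one strategy per group."""
    groups = spindle_groups(num_layers)
    per_layer = total_budget // num_layers                 # uniform allocation
    strategies = ["adapter", "lora", "prefix", "bitfit"]   # hypothetical assignment
    return [
        GroupConfig(layers=g,
                    params_per_layer=per_layer,
                    strategy=strategies[i % len(strategies)])
        for i, g in enumerate(groups)                      # every group is tuned
    ]


if __name__ == "__main__":
    # Example: a 24-layer backbone with an assumed budget of 240k trainable parameters.
    for cfg in build_design(num_layers=24, total_budget=240_000):
        print(cfg)
```

For a 24-layer backbone this produces groups of 4, 8, 8, and 4 layers (thin at the ends, thick in the middle), each layer receiving the same parameter budget and each group its own strategy; the paper's actual group boundaries and strategy choices are discovered by its greedy refinement procedure rather than fixed as here.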
DOI: 10.48550/arxiv.2301.01821