CombLM: Adapting Black-Box Language Models through Small Fine-Tuned Models
Abstract: Methods for adapting language models (LMs) to new tasks and domains have traditionally assumed white-box access to the model, and work by modifying its parameters. However, this is incompatible with a recent trend in the field, where the highest quality models are only available as black-boxes through inference APIs. Even when the model weights are available, the computational cost of fine-tuning large LMs can be prohibitive for most practitioners. In this work, we present a lightweight method for adapting large LMs to new domains and tasks, assuming no access to their weights or intermediate activations. Our approach fine-tunes a small white-box LM and combines it with the large black-box LM at the probability level through a small network, learned on a small validation set. We validate our approach by adapting a large LM (OPT-30B) to several domains and a downstream task (machine translation), observing improved performance in all cases, of up to 9%, while using a domain expert 23x smaller.
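
The abstract describes combining the two models at the probability level through a small network fitted on a validation set. Below is a minimal illustrative sketch of that general idea, assuming a learned per-token interpolation weight between the two next-token distributions; the specific input features, network shape, and training loop are assumptions for illustration, not the paper's exact combination function.

```python
# Sketch: mix the next-token distributions of a small fine-tuned LM and a
# large black-box LM with a tiny learned network (assumed design).
import torch
import torch.nn as nn

class ProbCombiner(nn.Module):
    """Tiny network producing a mixing weight for two probability distributions."""
    def __init__(self, hidden: int = 16):
        super().__init__()
        # Inputs are simple summary features of both distributions (assumed choice):
        # the entropy and the max probability of each.
        self.net = nn.Sequential(nn.Linear(4, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, p_large: torch.Tensor, p_small: torch.Tensor) -> torch.Tensor:
        # p_large, p_small: (batch, vocab) next-token probabilities from the
        # black-box LM and the fine-tuned small LM, respectively.
        eps = 1e-9
        feats = torch.stack([
            -(p_large * (p_large + eps).log()).sum(-1),  # entropy of large LM
            -(p_small * (p_small + eps).log()).sum(-1),  # entropy of small LM
            p_large.max(-1).values,                      # confidence of large LM
            p_small.max(-1).values,                      # confidence of small LM
        ], dim=-1)
        lam = torch.sigmoid(self.net(feats))             # (batch, 1) mixing weight
        return lam * p_small + (1.0 - lam) * p_large     # combined distribution

def fit(combiner: ProbCombiner, loader, epochs: int = 5, lr: float = 1e-3) -> None:
    """Fit the combiner on a small validation set of (p_large, p_small, target) batches."""
    opt = torch.optim.Adam(combiner.parameters(), lr=lr)
    for _ in range(epochs):
        for p_large, p_small, target in loader:
            p = combiner(p_large, p_small)
            loss = nn.functional.nll_loss((p + 1e-9).log(), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Only the combiner's few parameters are trained here; both LMs are used purely as probability providers, which is what allows the large model to remain a black box behind an API.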
DOI: 10.48550/arxiv.2305.16876