ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning
Format: Article
Language: English
Abstract: We propose a new paradigm to continually evolve pretrained models, denoted ColD Fusion. It provides the benefits of multitask learning but leverages distributed computation with limited communication and eliminates the need for shared data. Consequently, ColD Fusion can give rise to a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based upon. We show that ColD Fusion yields comparable benefits to multitask training by producing a model that (a) attains strong performance on all of the datasets it was trained on; and (b) is a better starting point for finetuning on unseen datasets. We show that ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, a ColD Fusion-based model outperforms RoBERTa by 2.33 points on average without any changes to the architecture.
DOI: 10.48550/arxiv.2212.01378
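The abstract describes an iterative collaborative loop: contributors finetune the current shared model on their own private data, and the resulting finetuned models are fused back into a new shared model that seeds the next round. The sketch below illustrates that loop under assumptions not stated in this record: fusion is taken to be simple parameter averaging, finetuning is stood in for by a few SGD steps on toy linear-regression tasks, and the names `finetune` and `fuse` are illustrative only. Note that only weights cross the contributor boundary, never data, which is the "limited communication, no shared data" property the abstract emphasizes.

```python
# Minimal sketch of the collaborative loop described in the abstract.
# Assumptions (not confirmed by this record): the shared model is refreshed by
# averaging the contributors' finetuned weights, and "finetuning" is a few SGD
# steps on a toy per-contributor regression task.
import numpy as np

rng = np.random.default_rng(0)

def finetune(weights, X, y, lr=0.05, steps=50):
    """Locally finetune a copy of the shared weights on one contributor's data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fuse(finetuned_models):
    """Fuse contributions by parameter averaging (assumed fusion operator)."""
    return np.mean(finetuned_models, axis=0)

# Each contributor holds private data for a different task; data is never shared.
contributors = []
for _ in range(5):
    X = rng.normal(size=(100, 10))
    true_w = rng.normal(size=10)
    y = X @ true_w + 0.1 * rng.normal(size=100)
    contributors.append((X, y))

shared = np.zeros(10)                 # stands in for the pretrained model
for round_ in range(10):              # iterative "collaborative descent"
    local = [finetune(shared, X, y) for X, y in contributors]
    shared = fuse(local)              # only weights are communicated
    avg_loss = np.mean([np.mean((X @ shared - y) ** 2) for X, y in contributors])
    print(f"round {round_:2d}  mean loss across tasks: {avg_loss:.4f}")
```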