Fusing finetuned models for better pretraining
| | |
|---|---|
| Main Authors: | , , , |
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online Access: | Order full text |
| Summary: | Pretrained models are the standard starting point for training. This approach consistently outperforms the use of a random initialization. However, pretraining is a costly endeavour that few can undertake. In this paper, we create better base models at hardly any cost by fusing multiple existing finetuned models into one. Specifically, we fuse by averaging the weights of these models. We show that the results of the fused model surpass those of the pretrained model. We also show that fusing is often better than intertraining. We find that fusing is less dependent on the target task. Furthermore, weight decay nullifies the effects of intertraining, but not those of fusing. |
| DOI: | 10.48550/arxiv.2204.03044 |
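
The abstract describes fusing as averaging the weights of several finetuned models of the same architecture. The sketch below illustrates that idea under stated assumptions: PyTorch checkpoints with identical parameter names, with all function names, paths, and loading code chosen for illustration rather than taken from the paper.

```python
# A minimal sketch of weight-space fusion by averaging, assuming PyTorch
# checkpoints that share one architecture. Names here are illustrative,
# not taken from the paper's code.
import torch


def fuse_by_averaging(state_dicts):
    """Return a state dict whose tensors are the elementwise mean of the inputs."""
    fused = {}
    for name in state_dicts[0]:
        # Stack the same parameter from every checkpoint and average it.
        # Casting to float avoids errors on integer buffers such as
        # BatchNorm's num_batches_tracked.
        fused[name] = torch.stack(
            [sd[name].float() for sd in state_dicts]
        ).mean(dim=0)
    return fused


# Hypothetical usage: load several finetuned checkpoints of one architecture,
# fuse them, and treat the result as the base model for further finetuning.
# checkpoints = [torch.load(p, map_location="cpu") for p in checkpoint_paths]
# base_model.load_state_dict(fuse_by_averaging(checkpoints))
```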