PLeaS -- Merging Models with Permutations and Least Squares
Main authors: | , , , |
Format: | Article |
Language: | eng |
Abstract: | The democratization of machine learning systems has made the process of
fine-tuning accessible to a large number of practitioners, leading to a wide
range of open-source models fine-tuned on specialized tasks and datasets.
Recent work has proposed to merge such models to combine their functionalities.
However, prior approaches are restricted to models that are fine-tuned from the
same base model. Furthermore, the final merged model is typically restricted to
be of the same size as the original models. In this work, we propose a new
two-step algorithm to merge models, termed PLeaS, which relaxes these
constraints. First, leveraging the Permutation symmetries inherent in the two
models, PLeaS partially matches nodes in each layer by maximizing alignment.
Next, PLeaS computes the weights of the merged model as a layer-wise Least
Squares solution to minimize the approximation error between the features of
the merged model and the permuted features of the original models. This lets
PLeaS merge two models into a single model of a desired size, even when the two
original models are fine-tuned from different base models. We also present a
variant of our method which can merge models without using data from the
fine-tuning domains. We demonstrate our method by merging ResNet models trained
with shared and different label spaces, and show that it outperforms
state-of-the-art merging methods by 8 to 15 percentage points for the same
target compute when merging models trained on DomainNet and on fine-grained
classification tasks. |
DOI: | 10.48550/arxiv.2407.02447 |
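
Below is a minimal, illustrative sketch of the two-step idea described in the abstract, written for fully connected layers and using full (rather than partial) permutation matching. The function names, the simple averaging of the two models' features, and the use of SciPy's linear assignment solver are assumptions made for this sketch, not the authors' implementation.

```python
# Hypothetical sketch only: names and simplifications are not from the paper's code.
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_units(feats_a: np.ndarray, feats_b: np.ndarray) -> np.ndarray:
    """Step 1 (permutation): align model B's units with model A's at one layer.

    feats_a, feats_b: (n_samples, n_units) activations computed on the same inputs.
    Returns perm such that feats_b[:, perm] is unit-wise aligned with feats_a.
    """
    sim = feats_a.T @ feats_b                 # unit-to-unit similarity, (n_units, n_units)
    row, col = linear_sum_assignment(-sim)    # assignment maximizing total similarity
    perm = np.empty_like(col)
    perm[row] = col
    return perm


def merge_layer(x_merged: np.ndarray,
                feats_a: np.ndarray,
                feats_b: np.ndarray,
                perm: np.ndarray) -> np.ndarray:
    """Step 2 (least squares): fit one merged layer's weights.

    x_merged: (n_samples, d_in) inputs to this layer of the merged model
              (outputs of the previously merged layer).
    feats_a, feats_b: (n_samples, d_out) this layer's outputs in the original models.
    Returns W of shape (d_out, d_in) minimizing ||x_merged @ W.T - target||^2,
    where the target here is a simple average of A's features and B's permuted
    features (an assumption made for this sketch).
    """
    target = 0.5 * (feats_a + feats_b[:, perm])
    w_t, *_ = np.linalg.lstsq(x_merged, target, rcond=None)
    return w_t.T


# Toy usage with random activations standing in for features collected on real data.
rng = np.random.default_rng(0)
x = rng.normal(size=(256, 64))       # merged model's inputs to the current layer
za = rng.normal(size=(256, 128))     # model A's features at this layer
zb = rng.normal(size=(256, 128))     # model B's features at this layer
w_merged = merge_layer(x, za, zb, match_units(za, zb))   # (128, 64) merged weights
```

In this simplified view, each layer reduces to a linear assignment problem over unit-similarity scores followed by an ordinary least-squares fit. The partial matching that lets PLeaS target an arbitrary merged-model size, and the data-free variant mentioned in the abstract, are omitted from this sketch.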