Budgeted Online Model Selection and Fine-Tuning via Federated Learning
Saved in:
Main authors: | , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Online model selection involves selecting a model from a set of
candidate models 'on the fly' to perform prediction on a stream of data. The
choice of candidate models therefore has a crucial impact on performance.
Although employing a larger set of candidate models naturally leads to more
flexibility in model selection, this may be infeasible when prediction tasks
are performed on edge devices with limited memory. Faced with this challenge,
the present paper proposes an online federated model selection framework in
which a group of learners (clients) interacts with a server that has
sufficient memory to store all candidate models. Each client stores only a
subset of models that fits in its memory and performs its own prediction task
using one of the stored models. Furthermore, employing the proposed algorithm,
clients and the server collaborate to fine-tune the models, adapting them to a
non-stationary environment. Theoretical analysis proves that the proposed
algorithm enjoys sub-linear regret with respect to the best model in
hindsight. Experiments on real datasets demonstrate the effectiveness of the
proposed algorithm. |
DOI: | 10.48550/arxiv.2401.10478 |
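The summary describes each client running online model selection over the subset of models it stores, with sublinear regret against the best model in hindsight. The paper's exact update rule is not given here; as a hypothetical illustration, the following sketch uses the classic Hedge (exponential-weights) strategy, which achieves that kind of regret guarantee. The class name `HedgeModelSelector` and the learning rate `eta` are assumptions for this sketch, not the paper's algorithm.

```python
import math
import random


class HedgeModelSelector:
    """Hedge (exponential-weights) selection over a client's stored models.

    Hypothetical sketch: a client keeps one weight per stored model,
    samples a model proportionally to the weights for each prediction,
    then downweights models according to their observed losses.
    """

    def __init__(self, n_models, eta=0.1, seed=0):
        self.weights = [1.0] * n_models  # one weight per stored model
        self.eta = eta                   # learning rate (assumed value)
        self.rng = random.Random(seed)

    def select(self):
        # Sample a model index with probability proportional to its weight.
        total = sum(self.weights)
        r = self.rng.random() * total
        acc = 0.0
        for i, w in enumerate(self.weights):
            acc += w
            if r <= acc:
                return i
        return len(self.weights) - 1

    def update(self, losses):
        # Multiplicative update from per-model losses in [0, 1]:
        # models with smaller loss keep larger weight.
        self.weights = [w * math.exp(-self.eta * loss)
                        for w, loss in zip(self.weights, losses)]
```

After repeated rounds in which one stored model consistently incurs the smallest loss, its weight dominates and it is selected almost always, which is the behavior behind the sublinear-regret guarantee mentioned in the summary.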