Online Federated Learning via Non-Stationary Detection and Adaptation amidst Concept Drift
Saved in:
Format: Article
Language: English
Online access: Order full text
Abstract: Federated Learning (FL) is an emerging domain in the broader context of
artificial intelligence research. Methodologies in FL assume distributed model
training, with a collection of clients and a server, whose main goal is to
learn an optimal global model under restrictions on data sharing due to privacy
concerns. It is worth highlighting that the diverse existing literature in FL
mostly assumes stationary data-generation processes; such an assumption is
unrealistic in real-world conditions, where concept drift occurs due to, for
instance, seasonal or periodic observations or faults in sensor measurements.
In this paper, we introduce a multiscale algorithmic framework that combines
the theoretical guarantees of \textit{FedAvg} and \textit{FedOMD} in
near-stationary settings with a non-stationarity detection and adaptation
technique to improve FL generalization performance in the presence of concept
drifts. The framework achieves $\Tilde{\mathcal{O}} ( \min \{ \sqrt{LT} ,
\Delta^{\frac{1}{3}}T^{\frac{2}{3}} + \sqrt{T} \})$ \textit{dynamic regret}
over $T$ rounds for a general convex loss function, where $L$ is the number of
non-stationary drifts and $\Delta$ is the cumulative magnitude of drift
experienced within the $T$ rounds.
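The client-server training the abstract refers to follows the standard FedAvg pattern: each client trains locally and the server aggregates the client models into a global one, weighted by each client's sample count. A minimal sketch of that aggregation step, with toy two-dimensional models and illustrative names (`fedavg_aggregate`, `n_samples` are not from the paper):

```python
def fedavg_aggregate(client_weights, n_samples):
    """Sample-size-weighted average of client parameter vectors,
    as in the standard FedAvg aggregation step."""
    total = sum(n_samples)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, n_samples):
        for i in range(dim):
            global_w[i] += (n / total) * w[i]
    return global_w

# Two toy clients with 2-dimensional models; client 2 holds 3x the data,
# so the global model sits closer to its parameters.
clients = [[1.0, 3.0], [3.0, 1.0]]
samples = [10, 30]
print(fedavg_aggregate(clients, samples))  # -> [2.5, 1.5]
```

Only model parameters (never raw data) cross the network, which is what makes the scheme compatible with the privacy restrictions the abstract mentions.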
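The two branches of the regret bound can be evaluated directly to see which regime dominates. A small numeric sketch (ignoring the constants and logarithmic factors hidden by the tilde-O notation; the function name and sample values are illustrative):

```python
import math

def dynamic_regret_bound(T, L, Delta):
    """Evaluate min{ sqrt(L*T), Delta^(1/3) * T^(2/3) + sqrt(T) },
    the dynamic-regret rate from the abstract, up to tilde-O factors."""
    drift_count_term = math.sqrt(L * T)
    drift_magnitude_term = Delta ** (1 / 3) * T ** (2 / 3) + math.sqrt(T)
    return min(drift_count_term, drift_magnitude_term)

# Few but possibly large drifts: the sqrt(LT) branch is tighter.
print(dynamic_regret_bound(T=10_000, L=4, Delta=100.0))    # -> 200.0
# Many drifts of small cumulative magnitude: the Delta branch is tighter.
print(dynamic_regret_bound(T=10_000, L=400, Delta=0.1))
```

Taking the minimum of the two branches is what lets the framework adapt to whichever characterization of non-stationarity, drift count $L$ or drift magnitude $\Delta$, is more favorable in a given environment.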
DOI: 10.48550/arxiv.2211.12578