Using Diffusion Models as Generative Replay in Continual Federated Learning -- What will Happen?
Format: Article
Language: English
Online access: Order full text
Abstract: Federated learning (FL) has become a cornerstone of decentralized
learning, where in many scenarios the incoming data distribution changes
dynamically over time, introducing continual learning (CL) problems. This
continual federated learning (CFL) task presents unique challenges,
particularly catastrophic forgetting and non-IID input data. Existing
solutions store historical data in a replay buffer or leverage generative
adversarial networks. Motivated by recent advances in diffusion models for
generative tasks, this paper introduces DCFL, a novel framework tailored to
the challenges of CFL in dynamic distributed learning environments. Our
approach harnesses the power of a conditional diffusion model to generate
synthetic historical data at each local device during communication,
effectively mitigating latent shifts in the dynamic input distribution. We
provide a convergence bound for the proposed CFL framework and demonstrate
its promising performance across multiple datasets, showcasing its
effectiveness in tackling the complexities of CFL tasks.
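To make the generative-replay idea concrete, the sketch below shows one
client's local update that mixes fresh task data with samples drawn from a
conditional diffusion model of previously seen classes. This is a minimal
illustration of the general technique, not the paper's actual DCFL
implementation: the `local_update` signature, the `diffusion.sample(labels=...)`
interface, and the joint cross-entropy objective are all assumptions made for
the example.

```python
# Illustrative sketch of diffusion-based generative replay for one client in a
# continual-federated-learning round. The diffusion model's `sample` API is a
# hypothetical placeholder; DCFL's real procedure may differ.

import torch
import torch.nn.functional as F

def local_update(model, diffusion, new_loader, old_classes, optimizer,
                 replay_batch=32, device="cpu"):
    """One local pass mixing current-task data with diffusion-generated
    replay of earlier classes to reduce catastrophic forgetting."""
    model.train()
    for x_new, y_new in new_loader:
        x_new, y_new = x_new.to(device), y_new.to(device)

        # Replay: draw random labels from previously seen classes and ask the
        # frozen conditional diffusion model for matching synthetic inputs
        # (assumed to have the same shape as the real inputs).
        idx = torch.randint(len(old_classes), (replay_batch,))
        y_old = old_classes[idx].to(device)
        with torch.no_grad():
            x_old = diffusion.sample(labels=y_old)  # hypothetical API

        # Train jointly on current and replayed data.
        x = torch.cat([x_new, x_old])
        y = torch.cat([y_new, y_old])
        loss = F.cross_entropy(model(x), y)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # The updated weights would then be sent to the server for
    # FedAvg-style aggregation across clients.
    return model.state_dict()
```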
DOI: 10.48550/arxiv.2411.06618