Overcoming Forgetting in Federated Learning on Non-IID Data
Format: Article
Language: English
Abstract: We tackle the problem of Federated Learning in the non-i.i.d. case, in which local models drift apart, inhibiting learning. Building on an analogy with Lifelong Learning, we adapt a solution for catastrophic forgetting to Federated Learning. We add a penalty term to the loss function, compelling all local models to converge to a shared optimum. We show that this can be done efficiently with respect to communication (adding no further privacy risks) and that it scales with the number of nodes in the distributed setting. Our experiments show that this method outperforms competing ones for image recognition on the MNIST dataset.
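The penalty described in the abstract is in the spirit of Elastic Weight Consolidation (EWC) from the Lifelong Learning literature: a quadratic term, weighted by a diagonal Fisher-information estimate, that pulls each local model toward shared parameters. The sketch below illustrates that general idea; it is not the paper's exact algorithm, and the names `diagonal_fisher`, `local_loss`, `anchor_params`, and the weight `lam` are all illustrative.

```python
import torch
import torch.nn.functional as F

def diagonal_fisher(model, data_loader, n_batches=10):
    # Estimate the diagonal of the empirical Fisher information from
    # squared gradients of the loss (a common approximation; the paper
    # may use a different estimator).
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    seen = 0
    for x, y in data_loader:
        if seen >= n_batches:
            break
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for n, p in model.named_parameters():
            fisher[n] += p.grad.detach() ** 2
        seen += 1
    return {n: f / max(seen, 1) for n, f in fisher.items()}

def local_loss(model, x, y, anchor_params, fisher, lam=0.1):
    # Task loss plus a quadratic penalty that pulls the local model
    # toward the shared parameters, weighted by Fisher importance.
    task = F.cross_entropy(model(x), y)
    penalty = sum(
        (fisher[n] * (p - anchor_params[n]) ** 2).sum()
        for n, p in model.named_parameters()
    )
    return task + lam * penalty

# Illustrative usage with a toy model and random data:
model = torch.nn.Linear(10, 3)
x, y = torch.randn(32, 10), torch.randint(0, 3, (32,))
anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
fisher = diagonal_fisher(model, [(x, y)], n_batches=1)
model.zero_grad()
local_loss(model, x, y, anchor, fisher, lam=0.1).backward()
```

In a federated round, one would expect the server to broadcast the shared parameters (here `anchor_params`), and each client to minimize `local_loss` on its own data before sending updates back; since only parameter-sized quantities travel over the wire, this is consistent with the abstract's claim of communication efficiency without added privacy risk.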
DOI: 10.48550/arxiv.1910.07796