The Effect of Training Parameters and Mechanisms on Decentralized Federated Learning based on MNIST Dataset
Abstract: Federated Learning is an algorithm suited for training models on
decentralized data, but the requirement of a central "server" node is a
bottleneck. In this document, we first introduce the notion of Decentralized
Federated Learning (DFL). We then perform various experiments on different
setups, such as changing the model aggregation frequency, switching from
independent and identically distributed (IID) dataset partitioning to non-IID
partitioning with partial global sharing, using different optimization methods
across clients, and breaking models into segments with partial sharing. All
experiments are run on the MNIST handwritten digits dataset. We observe that
these altered training procedures are generally robust, albeit suboptimal. We
also observe failures in training when the variance between model weights is
too large. The open-source experiment code is accessible on GitHub at
https://github.com/zhzhang2018/DecentralizedFL.
DOI: 10.48550/arxiv.2108.03508
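To make the decentralized aggregation idea in the abstract concrete, below is a minimal, hypothetical Python/NumPy sketch of serverless federated averaging over a ring of clients. The ring topology, the multinomial logistic-regression model, the synthetic stand-in data, and all names and hyperparameters (e.g. `local_sgd`, `LOCAL_STEPS`) are illustrative assumptions, not code from the paper's repository.

```python
# Minimal sketch of Decentralized Federated Learning (DFL) with ring-neighbor
# averaging; all model, data, and parameter choices are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

NUM_CLIENTS = 5
DIM = 784          # flattened MNIST image size (28 * 28)
NUM_CLASSES = 10
LOCAL_STEPS = 10   # local SGD steps between aggregations ("aggregation frequency")
ROUNDS = 20
LR = 0.1

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def local_sgd(W, X, y, steps, lr):
    """Run a few steps of local SGD on one client's data (logistic regression)."""
    for _ in range(steps):
        probs = softmax(X @ W)                    # (n, NUM_CLASSES)
        probs[np.arange(len(y)), y] -= 1.0        # gradient of cross-entropy loss
        grad = X.T @ probs / len(y)
        W = W - lr * grad
    return W

# Synthetic stand-in for per-client MNIST shards (the paper uses real MNIST).
client_data = [
    (rng.normal(size=(64, DIM)), rng.integers(0, NUM_CLASSES, size=64))
    for _ in range(NUM_CLIENTS)
]

# Every client starts from the same initialization.
weights = [np.zeros((DIM, NUM_CLASSES)) for _ in range(NUM_CLIENTS)]

for r in range(ROUNDS):
    # Local training phase on each client's own shard.
    weights = [
        local_sgd(W, X, y, LOCAL_STEPS, LR)
        for W, (X, y) in zip(weights, client_data)
    ]
    # Decentralized aggregation: each client averages its weights with its
    # ring neighbors, so no central "server" node is needed.
    weights = [
        (weights[(i - 1) % NUM_CLIENTS]
         + weights[i]
         + weights[(i + 1) % NUM_CLIENTS]) / 3.0
        for i in range(NUM_CLIENTS)
    ]
```

In this sketch, varying `LOCAL_STEPS` plays the role of the aggregation-frequency experiments mentioned in the abstract: larger values mean clients drift further apart between averaging rounds, which is where divergence between model weights can become a problem.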