FedFNN: Faster Training Convergence Through Update Predictions in Federated Recommender Systems
Main Authors: , , , ,
Format: Article
Language: English
Keywords:
Online Access: Order full text
Summary: Federated Learning (FL) has emerged as a key approach for distributed machine learning, enhancing online personalization while ensuring user data privacy. Instead of sending private data to a central server as in traditional approaches, FL decentralizes computations: devices train locally and share updates with a global server. A primary challenge in this setting is achieving fast and accurate model training, which is vital for recommendation systems where delays can compromise user engagement. This paper introduces FedFNN, an algorithm that accelerates decentralized model training. In FL, only a subset of users is involved in each training epoch. FedFNN employs supervised learning to predict the weight updates of unsampled users from the updates of the sampled set. Our evaluations, using real and synthetic data, show that: 1. FedFNN achieves training speeds 5x faster than leading methods while maintaining or improving accuracy; 2. the algorithm's performance is consistent regardless of client cluster variations; 3. FedFNN outperforms other methods in scenarios with limited client availability, converging more quickly.
DOI: 10.48550/arxiv.2309.08635
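The summary above describes the core mechanism: a supervised model predicts the weight updates of clients that were not sampled in a round, using the updates reported by the sampled clients. The sketch below is a minimal illustration of that idea only, not the paper's actual method; the client feature vectors, network sizes, MSE objective, and mean aggregation are all assumptions made for the example.

```python
# Illustrative sketch (assumed details, not FedFNN as published): train a small
# feedforward network on the sampled clients' observed updates, then use it to
# predict updates for the unsampled clients before aggregating.
import torch
import torch.nn as nn

n_clients, feat_dim, update_dim = 100, 16, 32

# Hypothetical per-client feature vectors (e.g., embeddings of client metadata).
client_features = torch.randn(n_clients, feat_dim)

# Suppose only a subset of clients participated this round and reported their
# local weight updates (difference between local and global weights).
sampled_idx = torch.randperm(n_clients)[:20]
sampled_updates = torch.randn(len(sampled_idx), update_dim)  # stand-in values

# Small feedforward predictor: client features -> predicted weight update.
predictor = nn.Sequential(
    nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, update_dim)
)
opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Supervised fit on the sampled clients' observed updates.
for _ in range(200):
    opt.zero_grad()
    pred = predictor(client_features[sampled_idx])
    loss = loss_fn(pred, sampled_updates)
    loss.backward()
    opt.step()

# Predict updates for the unsampled clients and combine observed and predicted
# updates into a single global update (simple mean here, as an assumption).
mask = torch.ones(n_clients, dtype=torch.bool)
mask[sampled_idx] = False
with torch.no_grad():
    predicted_updates = predictor(client_features[mask])
global_update = torch.cat([sampled_updates, predicted_updates]).mean(dim=0)
print(global_update.shape)  # torch.Size([32])
```

In this toy setup the server applies `global_update` to the global model as if every client had participated, which is the intuition behind the faster convergence claims in the summary; the actual inputs, predictor architecture, and aggregation rule used by FedFNN are described in the paper itself.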