Demystifying Impact of Key Hyper-Parameters in Federated Learning: A Case Study on CIFAR-10 and FashionMNIST

Bibliographic Details
Published in: IEEE Access, 2024, Vol. 12, pp. 120570-120583
Main Authors: Kundroo, Majid; Kim, Taehong
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Federated Learning (FL) has emerged as a promising paradigm for privacy-preserving distributed Machine Learning (ML), enabling model training across distributed devices without compromising data privacy. However, the impact of hyper-parameters on FL model performance remains understudied, and most existing FL studies rely on default or out-of-the-box hyper-parameters, often leading to suboptimal convergence. This study investigates the intricate relationship between four key hyper-parameters (learning rate, epochs per round, batch size, and client participation ratio, or CPR) and the performance of FL models on two distinct datasets: CIFAR-10 using ResNet-18 and FashionMNIST using a simple CNN model. Through systematic exploration on these datasets, employing a centralized server and 200 clients, we elucidate the significant impact of varying each hyper-parameter. Our findings underscore the importance of dataset-specific hyper-parameter optimization, revealing contrasting optimal configurations for the complex CIFAR-10 dataset and the simpler FashionMNIST dataset. Additionally, a correlation analysis offers a deeper understanding of hyper-parameter inter-dependencies, essential for effective optimization. This study provides valuable insights for practitioners to customize hyper-parameter configurations, ensuring optimal performance for FL models trained on different types of datasets, and lays a foundation for future exploration of hyper-parameter optimization within the FL domain.
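To make the four studied knobs concrete, the sketch below shows where each one enters a single federated training round. It assumes a standard FedAvg-style protocol (the record does not name the aggregation algorithm) and substitutes a toy linear model for the paper's ResNet-18 / CNN; all function names and default values are illustrative, not the paper's reported optima.

import random
import numpy as np

# The four hyper-parameters studied in the paper; values are illustrative.
LEARNING_RATE = 0.01   # client-side SGD step size
EPOCHS_PER_ROUND = 5   # local epochs each client runs per round
BATCH_SIZE = 32        # local mini-batch size
CPR = 0.1              # client participation ratio
NUM_CLIENTS = 200      # matches the paper's federation size

def local_update(weights, data, labels):
    # One client's local training: plain SGD on a linear regression model
    # (a toy stand-in for the ResNet-18 / CNN models used in the paper).
    w = weights.copy()
    n = len(data)
    for _ in range(EPOCHS_PER_ROUND):
        for start in range(0, n, BATCH_SIZE):
            xb = data[start:start + BATCH_SIZE]
            yb = labels[start:start + BATCH_SIZE]
            preds = xb @ w
            grad = xb.T @ (preds - yb) / len(xb)  # MSE gradient
            w -= LEARNING_RATE * grad
    return w, n

def federated_round(global_w, client_datasets):
    # One FedAvg-style round: sample CPR * NUM_CLIENTS clients, train
    # locally, then average the returned weights by local sample count.
    k = max(1, int(CPR * NUM_CLIENTS))
    sampled = random.sample(range(NUM_CLIENTS), k)
    updates, sizes = [], []
    for cid in sampled:
        x, y = client_datasets[cid]
        w, n = local_update(global_w, x, y)
        updates.append(w)
        sizes.append(n)
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Toy usage: 200 clients with synthetic linear data, 20 rounds.
rng = np.random.default_rng(0)
dim = 10
true_w = rng.normal(size=dim)
clients = []
for _ in range(NUM_CLIENTS):
    x = rng.normal(size=(100, dim))
    clients.append((x, x @ true_w))
global_w = np.zeros(dim)
for _ in range(20):
    global_w = federated_round(global_w, clients)

Note that the learning rate, epoch count, and batch size act inside each client's local loop, while CPR acts at the server, which is why the paper treats their inter-dependencies (e.g., via correlation analysis) rather than tuning each in isolation.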
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3450894