Adaptive Federated Learning with Auto-Tuned Clients
Federated learning (FL) is a distributed machine learning framework where the global model of a central server is trained via multiple collaborative steps by participating clients without sharing their data. While being a flexible framework, where the distribution of local data, participation rate, and computing power of each client can greatly vary, such flexibility gives rise to many new challenges, especially in hyperparameter tuning on the client side. We propose $\Delta$-SGD, a simple step size rule for SGD that enables each client to use its own step size by adapting to the local smoothness of the function each client is optimizing. We provide theoretical and empirical results showing the benefit of client adaptivity in various FL scenarios.
Saved in:
| Main Authors: | Kim, Junhyung Lyle; Toghani, Mohammad Taha; Uribe, César A; Kyrillidis, Anastasios |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | Computer Science - Distributed, Parallel, and Cluster Computing; Computer Science - Learning; Mathematics - Optimization and Control |
| Online Access: | Order full text |
creator | Kim, Junhyung Lyle; Toghani, Mohammad Taha; Uribe, César A; Kyrillidis, Anastasios |
description | Federated learning (FL) is a distributed machine learning framework where the global model of a central server is trained via multiple collaborative steps by participating clients without sharing their data. While being a flexible framework, where the distribution of local data, participation rate, and computing power of each client can greatly vary, such flexibility gives rise to many new challenges, especially in hyperparameter tuning on the client side. We propose $\Delta$-SGD, a simple step size rule for SGD that enables each client to use its own step size by adapting to the local smoothness of the function each client is optimizing. We provide theoretical and empirical results showing the benefit of client adaptivity in various FL scenarios. |
doi_str_mv | 10.48550/arxiv.2306.11201 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2306.11201 |
language | eng |
recordid | cdi_arxiv_primary_2306_11201 |
source | arXiv.org |
subjects | Computer Science - Distributed, Parallel, and Cluster Computing; Computer Science - Learning; Mathematics - Optimization and Control |
title | Adaptive Federated Learning with Auto-Tuned Clients |
url | https://arxiv.org/abs/2306.11201 |
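
For readers who want a concrete picture of the auto-tuned client step size the abstract describes, below is a minimal sketch, not the paper's exact method. It caps each step size by an inverse local-smoothness estimate computed from successive iterates and gradients, in the style of Malitsky and Mishchenko's adaptive gradient descent, which rules like $\Delta$-SGD build on; the function `auto_tuned_sgd`, its default parameters, and the quadratic example are illustrative assumptions, and the precise $\Delta$-SGD update with its guarantees is given in the paper.

```python
import numpy as np

def auto_tuned_sgd(grad_fn, x0, n_steps=100, eta0=1e-3):
    """Local SGD loop with an auto-tuned step size (illustrative sketch).

    The step size is capped by an inverse local-smoothness estimate
    ||x_t - x_{t-1}|| / (2 ||g_t - g_{t-1}||), so a client adapts to the
    curvature of its own objective instead of tuning a learning rate.
    """
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad_fn(x_prev)
    eta_prev, theta_prev = eta0, 0.0        # conservative initialization
    x = x_prev - eta_prev * g_prev          # first step uses the initial step size
    for _ in range(n_steps - 1):
        g = grad_fn(x)                      # (stochastic) gradient at the current iterate
        diff_g = np.linalg.norm(g - g_prev)
        # Inverse local-smoothness estimate; infinite cap if gradients coincide.
        cap = np.linalg.norm(x - x_prev) / (2.0 * diff_g) if diff_g > 0 else np.inf
        # Let the step size grow slowly, but never beyond the smoothness cap.
        eta = min(np.sqrt(1.0 + theta_prev) * eta_prev, cap)
        theta_prev = eta / eta_prev
        x_prev, g_prev, eta_prev = x, g, eta
        x = x - eta * g
    return x

# Example: an ill-conditioned quadratic 0.5 * x^T A x, whose gradient is A @ x.
A = np.diag([1.0, 10.0, 100.0])
x_final = auto_tuned_sgd(lambda x: A @ x, x0=np.ones(3), n_steps=200)
print(np.linalg.norm(x_final))              # distance to the minimizer (the origin) shrinks
```

The point of the design is that the cap tracks local rather than global smoothness, which is what lets heterogeneous clients each run with their own effective learning rate without any per-client tuning.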