Towards Efficient Hierarchical Federated Learning Design Over Multi-Hop Wireless Communications Networks
Federated learning (FL) has recently received considerable attention and is becoming a popular machine learning (ML) framework that allows clients to train ML models in a decentralized fashion without sharing any private dataset. In the FL framework, data for learning tasks are acquired and processed locally at the edge nodes, and only the updated ML parameters are transmitted to the central server for aggregation. However, because local FL parameters and the global FL model are transmitted over wireless links, wireless network performance affects FL training performance. In particular, the number of resource blocks is limited, which in turn limits the number of devices that can participate in FL. Furthermore, edge nodes often have substantial resource constraints (memory, computation power, communication, and energy) that severely limit their ability to train large models locally. This paper proposes a two-hop communication protocol with a dynamic resource allocation strategy that allocates bandwidth from a limited network resource to the maximum possible number of clients participating in FL. In particular, we employ a hierarchical FL scheme with an adaptive grouping mechanism that selects participating clients and elects a leader for each group based on its capability to upload the aggregated parameters to the central server. Our experimental results demonstrate that the proposed solution outperforms the baseline algorithm in terms of communication cost and model accuracy.
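The two-hop aggregation idea described in the abstract (group clients, elect a per-group leader by upload capability, average within each group, then average the leaders' models at the server) can be illustrated with a minimal sketch. The Python below is an assumed illustration, not the authors' actual implementation: the `Client` fields, `elect_leader`, `fed_avg`, and `hierarchical_round` names are hypothetical, and sample-weighted FedAvg stands in for the paper's precise grouping and aggregation rules.

```python
# Minimal sketch of hierarchical (two-hop) FL aggregation.
# Hop 1: clients -> group leader; Hop 2: leaders -> central server.
# All names and the capability metric (uplink_mbps) are illustrative
# assumptions, not the paper's actual API.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Client:
    client_id: int
    uplink_mbps: float                     # capability used for leader election
    weights: Dict[str, float] = field(default_factory=dict)  # local model
    num_samples: int = 1                   # used for sample-weighted averaging

def fed_avg(models: List[Dict[str, float]], sizes: List[int]) -> Dict[str, float]:
    """Sample-weighted FedAvg over a list of parameter dictionaries."""
    total = sum(sizes)
    return {k: sum(m[k] * n for m, n in zip(models, sizes)) / total
            for k in models[0]}

def elect_leader(group: List[Client]) -> Client:
    """Pick the client best able to upload the group's aggregated parameters."""
    return max(group, key=lambda c: c.uplink_mbps)

def hierarchical_round(groups: List[List[Client]]) -> Dict[str, float]:
    """One two-hop round: aggregate per group, then aggregate leader uploads."""
    leader_models, leader_sizes = [], []
    for group in groups:
        leader = elect_leader(group)       # hop-1 endpoint for this group
        group_model = fed_avg([c.weights for c in group],
                              [c.num_samples for c in group])
        leader_models.append(group_model)  # leader forwards this on hop 2
        leader_sizes.append(sum(c.num_samples for c in group))
    return fed_avg(leader_models, leader_sizes)  # global aggregation at server

# Example: two groups of two clients, single-parameter model "w".
groups = [
    [Client(0, 5.0, {"w": 1.0}, 10), Client(1, 20.0, {"w": 3.0}, 30)],
    [Client(2, 8.0, {"w": 2.0}, 20), Client(3, 2.0, {"w": 4.0}, 40)],
]
print(hierarchical_round(groups))  # -> {'w': 3.0}
```

The sketch only models the aggregation path; the paper's dynamic bandwidth allocation and client selection would sit around `elect_leader` and the choice of which clients enter each group.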
Saved in:
Published in: | IEEE Access, 2022, pp. 1-1 |
---|---|
Main authors: | Nguyen, Tu Viet; Ho, Nhan Duc; Hoang, Hieu Thien; Do, Cuong Danh; Wong, Kok-Seng |
Format: | Article |
Language: | English |
Subjects: | Bandwidth Optimization; Communication-Efficiency; Computational modeling; Data models; Distributed Machine Learning; Federated learning; Multi-Hop Wireless Networks; Optimization; Servers; Task analysis; Training |
Online access: | Full text |
DOI: | 10.1109/ACCESS.2022.3215758 |
EISSN: | 2169-3536 |