Decentralized machine learning training: a survey on synchronization, consolidation, and topologies

Federated Learning (FL) has emerged as a promising methodology for collaboratively training machine learning models on decentralized devices. However, the effective synchronization and consolidation of model updates from diverse devices, together with the appropriate configuration of network topologies, remain crucial obstacles. This paper provides a comprehensive analysis of current techniques and methodologies used for synchronization, consolidation, and network topologies in Federated Learning. The study explores diverse synchronization strategies for coordinating model updates from geographically distributed cross-silo edge nodes, taking into account factors such as communication efficiency and privacy preservation. It examines model consolidation techniques, such as weighted and personalized aggregation methods, evaluating their efficacy in consolidating local model updates into a global model under statistical heterogeneity and resource constraints. In addition, it examines the importance of network topologies in FL, considering their influence on communication efficiency, confidentiality, scalability, resilience, and resource allocation. The survey assesses and contrasts the strengths and limitations of existing methodologies, identifies gaps in current research, and offers insights for future work. Its objective is to provide a thorough examination of FL synchronization, consolidation, and network topologies as a reference for researchers, practitioners, and stakeholders engaged in Federated Learning, and to support the development of more effective and resilient FL systems.
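The weighted aggregation methods the abstract surveys can be illustrated with a minimal FedAvg-style sketch, in which each client's parameters are weighted by its local sample count when merged into the global model. This is a generic illustration, not code from the paper; the function and variable names are hypothetical.

```python
def aggregate_weighted(client_weights, client_sizes):
    """Merge per-client parameter vectors into one global model.

    client_weights: one flat list of parameter values per client
    client_sizes:   number of local training samples per client,
                    used as the aggregation weight (FedAvg-style)
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_model = [0.0] * n_params
    for params, size in zip(client_weights, client_sizes):
        # Each client contributes proportionally to its data size.
        for i, p in enumerate(params):
            global_model[i] += (size / total) * p
    return global_model

# Two clients: the second holds 3x the data, so its parameters
# dominate the weighted average.
merged = aggregate_weighted([[1.0, 2.0], [3.0, 4.0]], [1, 3])
# merged == [2.5, 3.5]
```

Personalized aggregation schemes discussed in the survey replace the uniform size-proportional weights with per-client or per-layer weightings; the merging loop itself stays the same.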

Bibliographic details

Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Authors: Khan, Qazi Waqas; Khan, Anam Nawaz; Rizwan, Atif; Ahmad, Rashid; Khan, Salabat; Kim, Do Hyeun
Format: Article
Language: English
Online access: Full text
DOI: 10.1109/ACCESS.2023.3284976
ISSN: 2169-3536; eISSN: 2169-3536
Source: IEEE Open Access Journals; DOAJ Directory of Open Access Journals; EZB freely available journals
Subjects: Asynchronous; Consolidation; Federated learning; Geographical distribution; Heterogeneity; Machine learning; Network topologies; Progressions; Resilience; Resource allocation; Semi-asynchronous weight aggregation; Synchronism; Synchronous; System effectiveness; Training