Resource-Constrained Federated Edge Learning With Heterogeneous Data: Formulation and Analysis


Detailed Description

Saved in:
Bibliographic Details
Published in: IEEE Transactions on Network Science and Engineering, 2022-09, Vol. 9 (5), p. 3166-3178
Main Authors: Liu, Yi; Zhu, Yuanshao; Yu, James J.Q.
Format: Article
Language: English
description Efficient collaboration between machine learning and wireless communication technology, in the form of Federated Edge Learning (FEEL), has spawned a series of next-generation intelligent applications. However, because network connections are open, a FEEL framework typically involves hundreds of remote devices (or clients), incurring communication costs that are prohibitive for resource-constrained FEEL. To address this issue, we propose a distributed approximate Newton-type algorithm with fast convergence to alleviate FEEL's communication-resource constraints. Specifically, the proposed algorithm builds on the distributed L-BFGS algorithm and lets each client approximate the expensive Hessian matrix by computing the cheap Fisher matrix in a distributed manner, finding a "better" descent direction and thereby speeding up convergence. We further prove that the proposed algorithm converges linearly in both the strongly convex and non-convex cases, and we analyze its computational and communication complexity. Moreover, owing to the heterogeneity of the connected remote devices, FEEL faces the challenge of heterogeneous, non-IID (not Independent and Identically Distributed) data. To this end, we design a simple but effective training scheme, FedOVA (Federated One-vs-All), to address the statistical challenge posed by heterogeneous data. FedOVA first decomposes a multi-class classification problem into simpler binary classification problems and then combines their respective outputs using ensemble learning. In particular, the scheme integrates well with our communication-efficient algorithm to serve FEEL. Numerical results verify the effectiveness and superiority of the proposed algorithm.
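The Fisher-based Newton-type step described in the abstract can be illustrated with a small sketch. The code below is a minimal, single-machine illustration, not the paper's algorithm (which combines the Fisher approximation with distributed L-BFGS and client-side aggregation): it forms the empirical Fisher matrix from per-sample gradients and uses it in place of the Hessian to compute a damped descent direction. The function names and the `damping` parameter are assumptions for illustration.

```python
import numpy as np

def empirical_fisher(per_sample_grads):
    # Empirical Fisher: average outer product of per-sample gradients
    # (rows), a cheap stand-in for the Hessian of the loss.
    n = per_sample_grads.shape[0]
    return per_sample_grads.T @ per_sample_grads / n

def newton_type_direction(per_sample_grads, damping=1e-2):
    # Damped Newton-type step: d = -(F + damping * I)^{-1} g.
    # Since F + damping * I is positive definite, d is guaranteed
    # to be a descent direction for any nonzero gradient g.
    g = per_sample_grads.mean(axis=0)
    F = empirical_fisher(per_sample_grads)
    return -np.linalg.solve(F + damping * np.eye(F.shape[0]), g)
```

Because the Fisher matrix is built purely from first-order (gradient) information, each client can compute its contribution locally without ever forming second derivatives, which is the source of the communication and computation savings the abstract claims.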
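The FedOVA scheme can likewise be sketched. The following is a hypothetical single-machine illustration of the one-vs-all decomposition and the ensemble step, not the paper's federated implementation: each class gets its own binary logistic model (a stand-in for whatever binary learner the clients would train), and prediction picks the class whose binary model is most confident. All names here are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_binary(X, y, steps=500, lr=0.1):
    # Plain logistic regression by gradient descent -- a simple
    # stand-in for a client-side binary learner.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def fedova_fit(X, y, num_classes):
    # One-vs-all decomposition: one binary problem per class,
    # labeling class k as 1 and all other classes as 0.
    return [train_binary(X, (y == k).astype(float)) for k in range(num_classes)]

def fedova_predict(models, X):
    # Ensemble step: score each sample under every binary model and
    # pick the class with the highest confidence.
    scores = np.stack([sigmoid(X @ w) for w in models], axis=1)
    return scores.argmax(axis=1)
```

Decomposing the multi-class problem this way means a client whose local data contains only a subset of the classes can still contribute useful binary models, which is how the scheme copes with non-IID data.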
doi_str_mv 10.1109/TNSE.2021.3126021
identifier ISSN: 2327-4697
issn 2327-4697
2334-329X
source IEEE Electronic Library (IEL)
subjects Algorithms
Approximation algorithms
Classification
Collaboration
Communication
Computational modeling
Constraints
Convergence
Distributed databases
Federated Edge Learning
Hessian matrices
Heterogeneity
Machine learning
Newton-type Methods
Non-IID Data
One-vs-All Methods
Optimization
Resource-constrained
Task analysis
Training
Wireless communications