Communication-Efficient Federated Learning With Binary Neural Networks
Published in: | IEEE journal on selected areas in communications 2021-12, Vol.39 (12), p.3836-3850 |
---|---|
Main authors: | Yang, Yuzhi; Zhang, Zhaoyang; Yang, Qianqian |
Format: | Article |
Language: | English |
Online access: | Order full text |
creator | Yang, Yuzhi Zhang, Zhaoyang Yang, Qianqian |
description | Federated learning (FL) is a privacy-preserving machine learning setting that enables many devices to jointly train a shared global model without the need to reveal their data to a central server. However, FL involves a frequent exchange of parameters between all the clients and the server that coordinates the training. This introduces extensive communication overhead, which can be a major bottleneck in FL with limited communication links. In this paper, we consider training binary neural networks (BNNs) in the FL setting instead of the typical real-valued neural networks to fulfill the stringent delay and efficiency requirements in wireless edge networks. We introduce a novel FL framework for training BNNs, where the clients only upload the binary parameters to the server. We also propose a novel parameter updating scheme based on Maximum Likelihood (ML) estimation that preserves the performance of the BNN even without the availability of aggregated real-valued auxiliary parameters that are usually needed during the training of the BNN. Moreover, for the first time in the literature, we theoretically derive the conditions under which the training of the BNN converges. Numerical results show that the proposed FL framework significantly reduces the communication cost compared to conventional neural networks with typical real-valued parameters, and the performance loss incurred by the binarization can be further compensated by a hybrid method. |
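The abstract does not spell out the ML-estimation update rule, but its flavor can be sketched under one simple assumption: if each client's uploaded bit is treated as an independent noisy observation of a shared latent sign, the server's ML estimate of that sign reduces to a per-weight majority vote. The function name `aggregate_binary` and the tie-breaking convention below are illustrative assumptions, not the paper's actual scheme.

```python
def aggregate_binary(client_weights):
    """Hypothetical server-side aggregation of binary weights.

    Assumes each client's bit w_k[i] in {-1, +1} is an independent
    noisy observation of a latent global sign; the ML estimate of
    that sign then reduces to a per-weight majority vote.
    """
    d = len(client_weights[0])
    votes = [sum(w[i] for w in client_weights) for i in range(d)]
    # Break exact ties toward +1 (an arbitrary convention).
    return [1 if v >= 0 else -1 for v in votes]

# Three clients, four weights each; uploading 1 bit per weight
# instead of a 32-bit float cuts upload size by roughly 32x.
clients = [[1, -1, 1, 1],
           [1, -1, -1, 1],
           [1, 1, -1, 1]]
print(aggregate_binary(clients))  # -> [1, -1, -1, 1]
```

Note the communication saving is independent of the aggregation rule: each weight costs 1 bit uplink instead of 32, regardless of how the server combines the votes.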
doi_str_mv | 10.1109/JSAC.2021.3118415 |
publisher | New York: IEEE |
identifier | ISSN: 0733-8716 |
ispartof | IEEE journal on selected areas in communications, 2021-12, Vol.39 (12), p.3836-3850 |
issn | 0733-8716 (ISSN); 1558-0008 (EISSN) |
language | eng |
source | IEEE Electronic Library (IEL) |
subjects | binary neural networks (BNN); Clients; Collaborative work; Communication; Costs; Data models; distributed learning; Federated learning; Machine learning; maximum likelihood (ML) estimation; Maximum likelihood estimation; Neural networks; Parameters; Servers; Training data; Wireless networks |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-26T18%3A36%3A40IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Communication-Efficient%20Federated%20Learning%20With%20Binary%20Neural%20Networks&rft.jtitle=IEEE%20journal%20on%20selected%20areas%20in%20communications&rft.au=Yang,%20Yuzhi&rft.date=2021-12-01&rft.volume=39&rft.issue=12&rft.spage=3836&rft.epage=3850&rft.pages=3836-3850&rft.issn=0733-8716&rft.eissn=1558-0008&rft.coden=ISACEM&rft_id=info:doi/10.1109/JSAC.2021.3118415&rft_dat=%3Cproquest_RIE%3E2599209531%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2599209531&rft_id=info:pmid/&rft_ieee_id=9562478&rfr_iscdi=true |