Data-Free Knowledge Distillation for Heterogeneous Federated Learning

Federated Learning (FL) is a decentralized machine-learning paradigm in which a global server iteratively averages the model parameters of local users without accessing their data. User heterogeneity has imposed significant challenges on FL, as it can incur drifted global models that are slow to converge. Knowledge Distillation has recently emerged to tackle this issue by refining the server model with aggregated knowledge from heterogeneous users, rather than directly averaging their model parameters. This approach, however, depends on a proxy dataset, making it impractical unless such a prerequisite is satisfied. Moreover, the ensemble knowledge is not fully utilized to guide local model learning, which may in turn affect the quality of the aggregated model. Inspired by the prior art, we propose a data-free knowledge distillation approach to address heterogeneous FL, in which the server learns a lightweight generator to ensemble user information in a data-free manner; the generator is then broadcast to users, regulating local training by using the learned knowledge as an inductive bias. Empirical studies supported by theoretical implications show that our approach facilitates FL with better generalization performance using fewer communication rounds, compared with the state of the art.
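The abstract describes the method only at a high level. As an illustration of the general idea, and not the authors' released implementation, below is a minimal PyTorch-style sketch: the server fits a label-conditioned generator against the ensemble of user prediction heads (no proxy data), then broadcasts it, and each user adds the generator's output as a regularizing inductive bias during local training. All class names, dimensions, and the weight lam are assumptions made for this sketch.

# Hypothetical sketch of data-free knowledge distillation for federated learning,
# loosely following the idea in the abstract. Names, shapes, and loss weights
# are illustrative assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UserModel(nn.Module):
    """A user model split into a feature extractor and a prediction head."""
    def __init__(self, in_dim=32, feat_dim=16, num_classes=10):
        super().__init__()
        self.extractor = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        return self.head(self.extractor(x))


class Generator(nn.Module):
    """Maps (noise, label) to a synthetic feature vector."""
    def __init__(self, noise_dim=8, feat_dim=16, num_classes=10):
        super().__init__()
        self.embed = nn.Embedding(num_classes, noise_dim)
        self.net = nn.Sequential(nn.Linear(noise_dim * 2, 32), nn.ReLU(),
                                 nn.Linear(32, feat_dim))

    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed(y)], dim=1))


def train_generator(generator, user_models, label_weights, num_classes=10,
                    steps=100, batch=32, noise_dim=8):
    """Server step: fit the generator so that the (weighted) ensemble of user
    prediction heads classifies generated features as the sampled labels."""
    opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    for _ in range(steps):
        y = torch.randint(0, num_classes, (batch,))
        z = torch.randn(batch, noise_dim)
        feat = generator(z, y)
        # Only the generator is updated; user heads act as fixed teachers.
        logits = sum(w * m.head(feat) for m, w in zip(user_models, label_weights))
        loss = F.cross_entropy(logits, y)
        opt.zero_grad(); loss.backward(); opt.step()


def local_update(model, generator, data_loader, lam=1.0, noise_dim=8, lr=1e-2):
    """Client step: ordinary supervised loss plus a regularizer that asks the
    local head to agree with the broadcast generator's label-conditioned features."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in data_loader:
        z = torch.randn(len(y), noise_dim)
        with torch.no_grad():
            feat = generator(z, y)  # inductive bias received from the server
        loss = F.cross_entropy(model(x), y) \
             + lam * F.cross_entropy(model.head(feat), y)
        opt.zero_grad(); loss.backward(); opt.step()


# Example round with three simulated users (random data only to exercise the code):
users = [UserModel() for _ in range(3)]
gen = Generator()
train_generator(gen, users, label_weights=[1 / 3] * 3)
fake_loader = [(torch.randn(16, 32), torch.randint(0, 10, (16,))) for _ in range(5)]
local_update(users[0], gen, fake_loader)

In the paper's framing the ensemble weights would reflect each user's label distribution; the uniform scalar weights above are a simplification for this sketch.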

Bibliographic details
Published in: arXiv.org, 2021-06
Main authors: Zhu, Zhuangdi; Hong, Junyuan; Zhou, Jiayu
Format: Article
Language: English
Subjects: Distillation; Heterogeneity; Knowledge; Machine learning; Mathematical models; Parameters
Online access: Full text
Identifier: EISSN 2331-8422
Source: Freely Accessible Journals (ProQuest document ID 2531424608)
Publisher: Cornell University Library, arXiv.org (Ithaca)