Heterogeneous Federated Learning via Generative Model-Aided Knowledge Distillation in the Edge


Detailed Description

Bibliographic Details
Published in: IEEE internet of things journal 2024-10, p.1-1
Main authors: Sun, Chuanneng; Jiang, Tingcong; Pompili, Dario
Format: Article
Language: English
Subjects:
Online access: Order full text
description Federated Learning (FL) has recently gained popularity as a framework for training Machine Learning (ML) models in a distributed, privacy-preserving manner. Traditional FL frameworks often struggle with model and statistical heterogeneity among participating clients, which hurts learning performance and practicality. To overcome these fundamental limitations, we introduce Fed2KD+, a novel FL framework that leverages a set of tiny unified models and Conditional Variational Auto-Encoders (CVAEs) to enable FL training across heterogeneous client models. Through forward and backward distillation processes, Fed2KD+ allows a seamless exchange of knowledge, mitigating data and model heterogeneity problems. Moreover, we propose a cosine similarity penalty in the loss function of CVAE+ to enhance the generalizability of the CVAE in non-IID scenarios, improving the adaptability and efficiency of the framework. Our framework design also incorporates a co-design with the Radio Access Network (RAN) architecture, reducing fronthaul traffic volume and improving scalability. Extensive evaluations on one image dataset and two IoT datasets demonstrate the superiority of Fed2KD+ in achieving higher accuracy and faster convergence than existing methods, including FedAvg, FedMD, and FedGen. We also performed hardware profiling on the Raspberry Pi and NVIDIA Jetson Nano to quantify the additional resources required to train the unified and CVAE+ models.
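The forward and backward distillation steps mentioned in the abstract build on standard soft-label knowledge distillation. As an illustration only (the exact Fed2KD+ losses are defined in the paper, not reproduced here), a minimal sketch of the generic temperature-scaled distillation loss:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Generic knowledge-distillation objective: KL divergence between the
    # teacher's and student's temperature-softened output distributions,
    # scaled by T^2 so gradients keep a comparable magnitude across T.
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl
```

The loss is zero when the student exactly matches the teacher's logits and grows as their softened distributions diverge; in a Fed2KD+-style setup, "forward" and "backward" would correspond to which of the unified and client models plays teacher versus student.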
doi_str_mv 10.1109/JIOT.2024.3488565
identifier ISSN: 2327-4662
recordid cdi_ieee_primary_10738280
source IEEE Electronic Library (IEL)
subjects Computational modeling
Convergence
Data models
Distributed databases
edge learning
Federated learning
Internet of Things
knowledge distillation
model heterogeneity
Privacy
Radio access networks
Servers
Training