Adaptive Federated Learning via New Entropy Approach
Federated Learning (FL) has emerged as a prominent distributed machine learning framework that enables geographically distributed clients to train a global model collaboratively while preserving their privacy-sensitive data. However, due to the non-independent-and-identically-distributed (Non-IID) data generated by heterogeneous clients, the performance of conventional federated optimization schemes such as FedAvg deteriorates...
Saved in:
Published in: | IEEE transactions on mobile computing 2024-12, Vol.23 (12), p.11920-11936 |
---|---|
Main authors: | Zheng, Shensheng; Yuan, Wenhao; Wang, Xuehe; Duan, Lingjie |
Format: | Magazine article |
Language: | English |
Subjects: | Adaptation models; Adaptive federated optimization; Computational modeling; Convergence; Data models; Entropy; entropy theory; mean-field analysis; Optimization; Training |
Online access: | Order full text |
container_end_page | 11936 |
---|---|
container_issue | 12 |
container_start_page | 11920 |
container_title | IEEE transactions on mobile computing |
container_volume | 23 |
creator | Zheng, Shensheng; Yuan, Wenhao; Wang, Xuehe; Duan, Lingjie |
description | Federated Learning (FL) has emerged as a prominent distributed machine learning framework that enables geographically distributed clients to train a global model collaboratively while preserving their privacy-sensitive data. However, due to the non-independent-and-identically-distributed (Non-IID) data generated by heterogeneous clients, the performance of conventional federated optimization schemes such as FedAvg and its variants deteriorates, requiring designs that adaptively adjust model parameters to alleviate the negative influence of heterogeneity. In this paper, by leveraging entropy as a new metric for assessing the degree of system disorder, we propose an adaptive FEDerated learning algorithm based on ENTropy theory (FedEnt) to alleviate parameter deviation among heterogeneous clients and achieve fast convergence. Nevertheless, given the data disparity and parameter deviation of heterogeneous clients, determining the optimal dynamic learning rate for each client is challenging, as there is no communication among participating clients during the local training epochs. To enable a decentralized learning rate for each participating client, we first introduce mean-field terms to estimate the components associated with other clients' local parameters. Furthermore, we provide rigorous theoretical analysis of the existence and determination of the mean-field estimators. Based on these estimators, the closed-form adaptive learning rate for each client is derived by constructing the Hamilton equation. Moreover, the convergence rate of our proposed FedEnt is proved. Extensive experimental results on real-world datasets (i.e., MNIST, EMNIST-L, CIFAR10, and CIFAR100) show that FedEnt surpasses FedAvg and its variants (i.e., FedAdam, FedProx, and FedDyn) under Non-IID settings and achieves a faster convergence rate. |
doi_str_mv | 10.1109/TMC.2024.3402080 |
format | Magazine article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1536-1233 |
ispartof | IEEE transactions on mobile computing, 2024-12, Vol.23 (12), p.11920-11936 |
issn | 1536-1233; 1558-0660 |
language | eng |
recordid | cdi_ieee_primary_10531669 |
source | IEEE Electronic Library (IEL) |
subjects | Adaptation models; Adaptive federated optimization; Computational modeling; Convergence; Data models; Entropy; entropy theory; mean-field analysis; Optimization; Training |
title | Adaptive Federated Learning via New Entropy Approach |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-08T03%3A54%3A39IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-crossref_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Adaptive%20Federated%20Learning%20via%20New%20Entropy%20Approach&rft.jtitle=IEEE%20transactions%20on%20mobile%20computing&rft.au=Zheng,%20Shensheng&rft.date=2024-12-01&rft.volume=23&rft.issue=12&rft.spage=11920&rft.epage=11936&rft.pages=11920-11936&rft.issn=1536-1233&rft.eissn=1558-0660&rft.coden=ITMCCJ&rft_id=info:doi/10.1109/TMC.2024.3402080&rft_dat=%3Ccrossref_RIE%3E10_1109_TMC_2024_3402080%3C/crossref_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=10531669&rfr_iscdi=true |
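The abstract above describes entropy as a metric of system disorder used to set a per-client adaptive learning rate. FedEnt's actual update rule (the mean-field estimators and the Hamilton-equation derivation) is in the full paper, not this record; the sketch below only illustrates the general idea of entropy-weighted adaptive learning rates. The function names `parameter_entropy` and `adaptive_rates` and the simple `1 - p_i` weighting are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def parameter_entropy(client_params, global_params):
    """Shannon entropy of normalized client-to-global parameter deviations.

    Each client's deviation norm is turned into a probability; the entropy
    of that distribution measures how evenly the "disorder" is spread
    across clients."""
    deviations = np.array([np.linalg.norm(p - global_params)
                           for p in client_params])
    probs = deviations / deviations.sum()
    probs = np.clip(probs, 1e-12, 1.0)  # guard against log(0)
    return float(-np.sum(probs * np.log(probs))), probs

def adaptive_rates(probs, base_lr=0.1):
    """Give the most-deviating clients a smaller step, damping drift
    under Non-IID data (an assumed weighting, for illustration only)."""
    return base_lr * (1.0 - probs)

# Toy round: three clients whose 4-dim models drift from the global model
# by increasing amounts (0.1, 0.5, 2.0 per coordinate).
global_w = np.zeros(4)
clients = [global_w + np.full(4, d) for d in (0.1, 0.5, 2.0)]

H, probs = parameter_entropy(clients, global_w)
lrs = adaptive_rates(probs)
```

Under this toy weighting, the client that drifts the furthest receives the smallest learning rate, while the entropy `H` summarizes how concentrated the drift is across the cohort.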