HFSL: heterogeneity split federated learning based on client computing capabilities

With the rapid growth of the internet of things (IoT) and smart devices, edge computing has emerged as a critical technology for processing massive amounts of data and protecting user privacy. Split federated learning, an emerging distributed learning framework, enables model training without needing data to leave local devices, effectively preventing data leakage and misuse.

Detailed description

Saved in:
Bibliographic details
Published in: The Journal of supercomputing 2025, Vol.81 (1), Article 196
Main authors: Wu, Nengwu, Zhao, Wenjie, Chen, Yuxiang, Xiao, Jiahong, Wang, Jin, Liang, Wei, Li, Kuan-Ching, Sukhija, Nitin
Format: Article
Language: English
Subjects:
Online access: Full text
container_issue 1
container_title The Journal of supercomputing
container_volume 81
creator Wu, Nengwu
Zhao, Wenjie
Chen, Yuxiang
Xiao, Jiahong
Wang, Jin
Liang, Wei
Li, Kuan-Ching
Sukhija, Nitin
description With the rapid growth of the internet of things (IoT) and smart devices, edge computing has emerged as a critical technology for processing massive amounts of data and protecting user privacy. Split federated learning, an emerging distributed learning framework, enables model training without data ever leaving local devices, effectively preventing data leakage and misuse. However, the disparity in the computational capabilities of edge devices forces the model to be partitioned according to the least capable client, so a significant portion of the computational load is offloaded to more capable server-side infrastructure, incurring substantial training overheads. This work proposes a novel split federated learning method targeting heterogeneous endpoints to address these challenges. The method handles heterogeneous training across different clients by adding auxiliary layers, improves the accuracy of heterogeneous model split training using self-distillation techniques, and leverages the global model from the previous round to mitigate accuracy degradation during federated aggregation. We validated the approach on the CIFAR-10 dataset against the existing SL, SFLV1, and SFLV2 methods; our HFSL2 method improved accuracy by 3.81%, 13.94%, and 6.19%, respectively. Further validation on the HAM10000, FashionMNIST, and MNIST datasets shows that the algorithm effectively improves aggregation accuracy across clients with heterogeneous computing capabilities.
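The aggregation idea in the abstract, blending the previous round's global model into the federated average to soften accuracy drops, can be sketched in a few lines. This is a minimal illustration under simple assumptions (flat parameter vectors, one fixed blending factor), not the paper's actual HFSL implementation; the names `fed_avg_with_history` and `blend` are hypothetical.

```python
from typing import List, Sequence

def fed_avg_with_history(client_weights: List[Sequence[float]],
                         prev_global: Sequence[float],
                         blend: float = 0.5) -> List[float]:
    """Average the clients' parameter vectors, then interpolate the
    result toward the previous round's global model.

    blend = 0.0 -> plain federated averaging;
    blend -> 1.0 -> keep the old global model (heavy smoothing).
    """
    n = len(client_weights)
    # Coordinate-wise mean over all client updates.
    avg = [sum(ws[i] for ws in client_weights) / n
           for i in range(len(prev_global))]
    # Convex combination with the previous global model.
    return [(1 - blend) * a + blend * g for a, g in zip(avg, prev_global)]
```

With two clients at [1, 1] and [3, 3] and a zero previous global model, the plain average is [2, 2] and a blend of 0.5 pulls it halfway back to [1, 1].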
doi_str_mv 10.1007/s11227-024-06632-6
format Article
fulltext fulltext
identifier ISSN: 0920-8542
ispartof The Journal of supercomputing, 2025, Vol.81 (1), Article 196
issn 0920-8542
1573-0484
language eng
recordid cdi_proquest_journals_3131832970
source Springer Nature - Complete Springer Journals
subjects Accuracy
Algorithms
Compilers
Computer Science
Datasets
Edge computing
Federated learning
Heterogeneity
Internet of Things
Interpreters
Processor Architectures
Programming Languages
title HFSL: heterogeneity split federated learning based on client computing capabilities
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-07T15%3A13%3A39IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=HFSL:%20heterogeneity%20split%20federated%20learning%20based%20on%20client%20computing%20capabilities&rft.jtitle=The%20Journal%20of%20supercomputing&rft.au=Wu,%20Nengwu&rft.date=2025&rft.volume=81&rft.issue=1&rft.artnum=196&rft.issn=0920-8542&rft.eissn=1573-0484&rft_id=info:doi/10.1007/s11227-024-06632-6&rft_dat=%3Cproquest_cross%3E3131832970%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3131832970&rft_id=info:pmid/&rfr_iscdi=true