Decoupling General and Personalized Knowledge in Federated Learning via Additive and Low-Rank Decomposition

Bibliographic Details

Main Authors: Wu, Xinghao; Liu, Xuefeng; Niu, Jianwei; Wang, Haolin; Tang, Shaojie; Zhu, Guogang; Su, Hao
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning
Online Access: Full text at https://arxiv.org/abs/2406.19931

Description: To address data heterogeneity, the key strategy of Personalized Federated Learning (PFL) is to decouple general knowledge (shared among clients) from client-specific knowledge, as the latter can harm collaboration if not removed. Existing PFL methods primarily adopt a parameter-partitioning approach, where each parameter of a model is designated as one of two types: parameters shared with other clients to extract general knowledge, and parameters retained locally to learn client-specific knowledge. However, because these two types of parameters are fitted together like a jigsaw puzzle into a single model during training, each parameter may simultaneously absorb both general and client-specific knowledge, making the two types of knowledge hard to separate effectively. In this paper, we introduce FedDecomp, a simple but effective PFL paradigm that employs parameter additive decomposition to address this issue. Instead of assigning each parameter of a model as either shared or personalized, FedDecomp decomposes each parameter into the sum of two parameters, a shared one and a personalized one, thus achieving a more thorough decoupling of shared and personalized knowledge than parameter partitioning. In addition, since we find that retaining the local knowledge of specific clients requires much lower model capacity than learning general knowledge across all clients, we constrain the matrix of personalized parameters to be low rank during training. Moreover, a new alternating training strategy is proposed to further improve performance. Experimental results across multiple datasets and varying degrees of data heterogeneity demonstrate that FedDecomp outperforms state-of-the-art methods by up to 4.9%. The code is available at https://github.com/XinghaoWu/FedDecomp.
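The abstract describes the core mechanism concretely: each weight matrix is the sum of a shared full-rank matrix and a personalized low-rank matrix, and the two parts are trained in alternation. Below is a minimal PyTorch sketch of one way this could look; the names DecomposedLinear and local_round, the default rank, the initialization scheme, and the epoch split are illustrative assumptions, not the paper's released implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposedLinear(nn.Module):
    """Linear layer whose weight is the sum of a full shared matrix and a
    low-rank personalized matrix A @ B (additive decomposition)."""
    def __init__(self, in_features, out_features, rank=4):
        super().__init__()
        # Shared component: uploaded to the server and averaged across clients.
        self.shared = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.shared, a=5 ** 0.5)
        # Personalized low-rank factors: kept local, never aggregated.
        self.A = nn.Parameter(torch.zeros(out_features, rank))
        self.B = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        weight = self.shared + self.A @ self.B  # additive decomposition
        return F.linear(x, weight, self.bias)

def local_round(model, loader, loss_fn, pers_epochs=1, shared_epochs=1, lr=0.01):
    """One client round of alternating training: update the personalized
    low-rank factors with the shared part frozen, then the reverse."""
    pers = [p for n, p in model.named_parameters() if n.endswith(("A", "B"))]
    shared = [p for n, p in model.named_parameters() if not n.endswith(("A", "B"))]
    for active, frozen, epochs in ((pers, shared, pers_epochs),
                                   (shared, pers, shared_epochs)):
        for p in frozen:
            p.requires_grad_(False)
        for p in active:
            p.requires_grad_(True)
        opt = torch.optim.SGD(active, lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
    for p in model.parameters():  # restore grads for the next round
        p.requires_grad_(True)
```

In a full system, each client would upload only the parameters in the shared group for FedAvg-style server averaging and keep the low-rank factors local. Initializing A to zero makes the personalized term vanish at the start, so local training departs from the common shared model.
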
DOI: 10.48550/arxiv.2406.19931
Publication Date: 2024-06-28
Rights: http://arxiv.org/licenses/nonexclusive-distrib/1.0 (open access)
Source: arXiv.org