The Diversity Bonus: Learning from Dissimilar Distributed Clients in Personalized Federated Learning

Personalized Federated Learning (PFL) is a commonly used framework that allows clients to collaboratively train their personalized models. PFL is particularly useful for handling situations where data from different clients are not independent and identically distributed (non-IID). Previous research in PFL implicitly assumes that clients can gain more benefits from those with similar data distributions. Correspondingly, methods such as personalized weight aggregation are developed to assign higher weights to similar clients during training. We pose a question: can a client benefit from other clients with dissimilar data distributions and if so, how? This question is particularly relevant in scenarios with a high degree of non-IID, where clients have widely different data distributions, and learning from only similar clients will lose knowledge from many other clients. We note that when dealing with clients with similar data distributions, methods such as personalized weight aggregation tend to enforce their models to be close in the parameter space. It is reasonable to conjecture that a client can benefit from dissimilar clients if we allow their models to depart from each other. Based on this idea, we propose DiversiFed which allows each client to learn from clients with diversified data distribution in personalized federated learning. DiversiFed pushes personalized models of clients with dissimilar data distributions apart in the parameter space while pulling together those with similar distributions. In addition, to achieve the above effect without using prior knowledge of data distribution, we design a loss function that leverages the model similarity to determine the degree of attraction and repulsion between any two models. Experiments on several datasets show that DiversiFed can benefit from dissimilar clients and thus outperform the state-of-the-art methods.
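The attraction-repulsion mechanism described in the abstract (pull models of similar clients together, push models of dissimilar clients apart, with the degree of each determined by model similarity rather than prior knowledge of data distributions) can be illustrated with a toy contrastive-style regularizer. This is a hypothetical sketch for intuition only; the function name, the softmax form, and the choice of cosine similarity over flattened parameters are assumptions, not the loss actually proposed in the paper.

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two flattened parameter vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb + 1e-12)

def attraction_repulsion_loss(models, tau=1.0):
    """Illustrative contrastive-style regularizer over client models.

    `models` is a list of flattened parameter vectors, one per client.
    For each client, similarities to all peers go through a log-sum-exp;
    minimizing the loss increases similarity to the closest peer
    (attraction) while the remaining peers in the denominator are
    pushed away (repulsion). Hypothetical sketch, not the paper's loss.
    """
    n = len(models)
    total = 0.0
    for i in range(n):
        logits = [cosine_sim(models[i], models[j]) / tau
                  for j in range(n) if j != i]
        lse = math.log(sum(math.exp(z) for z in logits))
        # Negative log of the softmax mass on the most similar peer.
        total += lse - max(logits)
    return total / n
```

Under this sketch, a population where each client has one clearly similar peer and otherwise dissimilar peers incurs a lower loss than one where all models are forced to coincide, which matches the abstract's intuition that uniform closeness in parameter space is not the optimum under high non-IID.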

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Wu, Xinghao, Liu, Xuefeng, Niu, Jianwei, Zhu, Guogang, Tang, Shaojie, Li, Xiaotian, Cao, Jiannong
Format: Article
Language: eng
Subjects:
Online Access: Order full text
description Personalized Federated Learning (PFL) is a commonly used framework that allows clients to collaboratively train their personalized models. PFL is particularly useful for handling situations where data from different clients are not independent and identically distributed (non-IID). Previous research in PFL implicitly assumes that clients can gain more benefits from those with similar data distributions. Correspondingly, methods such as personalized weight aggregation are developed to assign higher weights to similar clients during training. We pose a question: can a client benefit from other clients with dissimilar data distributions and if so, how? This question is particularly relevant in scenarios with a high degree of non-IID, where clients have widely different data distributions, and learning from only similar clients will lose knowledge from many other clients. We note that when dealing with clients with similar data distributions, methods such as personalized weight aggregation tend to enforce their models to be close in the parameter space. It is reasonable to conjecture that a client can benefit from dissimilar clients if we allow their models to depart from each other. Based on this idea, we propose DiversiFed which allows each client to learn from clients with diversified data distribution in personalized federated learning. DiversiFed pushes personalized models of clients with dissimilar data distributions apart in the parameter space while pulling together those with similar distributions. In addition, to achieve the above effect without using prior knowledge of data distribution, we design a loss function that leverages the model similarity to determine the degree of attraction and repulsion between any two models. Experiments on several datasets show that DiversiFed can benefit from dissimilar clients and thus outperform the state-of-the-art methods.
doi 10.48550/arxiv.2407.15464
format Article
creationdate 2024-07-22
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0 (free to read)
link https://arxiv.org/abs/2407.15464
identifier DOI: 10.48550/arxiv.2407.15464
language eng
recordid cdi_arxiv_primary_2407_15464
source arXiv.org
subjects Computer Science - Distributed, Parallel, and Cluster Computing
Computer Science - Learning
title The Diversity Bonus: Learning from Dissimilar Distributed Clients in Personalized Federated Learning
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-08T20%3A15%3A53IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=The%20Diversity%20Bonus:%20Learning%20from%20Dissimilar%20Distributed%20Clients%20in%20Personalized%20Federated%20Learning&rft.au=Wu,%20Xinghao&rft.date=2024-07-22&rft_id=info:doi/10.48550/arxiv.2407.15464&rft_dat=%3Carxiv_GOX%3E2407_15464%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true