Rehearsal-Free Continual Federated Learning with Synergistic Regularization
Saved in:
Main authors: | Li, Yichen; Wang, Yuying; Xiao, Tianzhe; Wang, Haozhao; Qi, Yining; Li, Ruixuan |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Distributed, Parallel, and Cluster Computing; Computer Science - Learning |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Li, Yichen; Wang, Yuying; Xiao, Tianzhe; Wang, Haozhao; Qi, Yining; Li, Ruixuan |
description | Continual Federated Learning (CFL) allows distributed devices to collaboratively learn novel concepts from continuously shifting training data while avoiding forgetting of previously seen tasks. To tackle this challenge, most current CFL approaches rely on extensive rehearsal of previous data. Despite its effectiveness, rehearsal comes at a memory cost and may also violate data privacy. We therefore turn to regularization techniques for CFL, which are cost-efficient in that they require neither sample caching nor rehearsal. Specifically, we first apply traditional regularization techniques to CFL and observe that existing techniques, especially synaptic intelligence, achieve promising results under homogeneous data distributions but fail when the data is heterogeneous. Based on this observation, we propose a simple yet effective regularization algorithm for CFL named FedSSI, which tailors synaptic intelligence to CFL under heterogeneous data settings. FedSSI not only reduces computational overhead without rehearsal but also addresses the data heterogeneity issue. Extensive experiments show that FedSSI achieves superior performance compared with state-of-the-art methods. |
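The record does not give FedSSI's actual update rule, but the abstract's key idea (replace rehearsal with a synaptic-intelligence-style penalty on each client's local loss) can be illustrated with a minimal sketch. The PyTorch snippet below is hypothetical: the names `si_penalty`, `local_train_step`, `omega`, `theta_star`, and the weight `lam` are illustrative and not taken from the paper.

```python
# Minimal sketch of rehearsal-free, regularization-based local training
# (generic synaptic intelligence, Zenke et al. 2017), NOT the paper's FedSSI.
import torch
import torch.nn.functional as F

def si_penalty(model, omega, theta_star):
    """SI surrogate loss: sum_k omega_k * (theta_k - theta_star_k)^2.
    `omega` and `theta_star` are assumed to be dicts of constant tensors keyed
    by parameter name (per-parameter importance weights and the parameter
    values saved at the end of the previous task)."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (omega[name] * (p - theta_star[name]) ** 2).sum()
    return penalty

def local_train_step(model, optimizer, x, y, omega, theta_star, lam=1.0):
    """One local step on a client's *current* task data: task loss plus the
    SI penalty, so no samples from earlier tasks are cached or replayed."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + lam * si_penalty(model, omega, theta_star)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a CFL round, each client would run such steps on its own task data and send only the resulting model update to the server for aggregation; how FedSSI adapts the importance terms to heterogeneous client data is the paper's contribution and is not reproduced here.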
doi_str_mv | 10.48550/arxiv.2412.13779 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2412.13779 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2412_13779 |
source | arXiv.org |
subjects | Computer Science - Distributed, Parallel, and Cluster Computing; Computer Science - Learning |
title | Rehearsal-Free Continual Federated Learning with Synergistic Regularization |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-17T16%3A13%3A52IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Rehearsal-Free%20Continual%20Federated%20Learning%20with%20Synergistic%20Regularization&rft.au=Li,%20Yichen&rft.date=2024-12-18&rft_id=info:doi/10.48550/arxiv.2412.13779&rft_dat=%3Carxiv_GOX%3E2412_13779%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |