Federated User Representation Learning
Collaborative personalization, such as through learned user representations (embeddings), can improve the prediction accuracy of neural-network-based models significantly. We propose Federated User Representation Learning (FURL), a simple, scalable, privacy-preserving and resource-efficient way to utilize existing neural personalization techniques in the Federated Learning (FL) setting. FURL divides model parameters into federated and private parameters. Private parameters, such as private user embeddings, are trained locally, but unlike federated parameters, they are not transferred to or averaged on the server. We show theoretically that this parameter split does not affect training for most model personalization approaches. Storing user embeddings locally not only preserves user privacy, but also improves memory locality of personalization compared to on-server training. We evaluate FURL on two datasets, demonstrating a significant improvement in model quality with 8% and 51% performance increases, and approximately the same level of performance as centralized training with only 0% and 4% reductions. Furthermore, we show that user embeddings learned in FL and the centralized setting have a very similar structure, indicating that FURL can learn collaboratively through the shared parameters while preserving user privacy.
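The mechanism described in the abstract is a federated/private parameter split: each client trains both parameter groups locally, but only the federated parameters are uploaded for FedAvg-style averaging, while the private user embedding never leaves the device. Below is a minimal, runnable sketch of that split on a hypothetical toy model (each client fits y ≈ w·x + u, with a shared weight w and a private per-user offset u standing in for the user embedding); all names are illustrative and not taken from the paper's code.

```python
import random

def local_round(w, u, data, lr=0.02, steps=25):
    # One round of local SGD on squared error, updating both groups:
    # w is a federated parameter, u is the client's private "embedding".
    for _ in range(steps):
        x, y = random.choice(data)
        err = (w * x + u) - y
        w -= lr * err * x   # federated: will be sent to the server
        u -= lr * err       # private: stays on-device, never uploaded
    return w, u

# Three clients sharing slope 2.0 but with private offsets -1, 0, +1.
clients = [{"u": 0.0, "data": [(x, 2.0 * x + b) for x in range(1, 6)]}
           for b in (-1.0, 0.0, 1.0)]

w = 0.0
for _ in range(40):                      # federated training rounds
    updates = []
    for c in clients:
        w_i, c["u"] = local_round(w, c["u"], c["data"])
        updates.append(w_i)              # only w leaves each client
    w = sum(updates) / len(updates)      # server: FedAvg over w only

print(f"shared w ~ {w:.2f}; private u ~ "
      + ", ".join(f"{c['u']:.2f}" for c in clients))
```

On this toy problem the averaged w recovers the shared slope while each client's u absorbs its own offset, mirroring how FURL learns collaboratively through shared parameters while keeping personalization local.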
Saved in:
Main authors: | Bui, Duc; Malik, Kshitiz; Goetz, Jack; Liu, Honglei; Moon, Seungwhan; Kumar, Anuj; Shin, Kang G |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Learning; Statistics - Machine Learning |
Online access: | Order full text |
creator | Bui, Duc; Malik, Kshitiz; Goetz, Jack; Liu, Honglei; Moon, Seungwhan; Kumar, Anuj; Shin, Kang G |
---|---|
doi_str_mv | 10.48550/arxiv.1909.12535 |
format | Article |
identifier | DOI: 10.48550/arxiv.1909.12535 |
language | eng |
recordid | cdi_arxiv_primary_1909_12535 |
source | arXiv.org |
subjects | Computer Science - Learning; Statistics - Machine Learning |
title | Federated User Representation Learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-27T18%3A14%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Federated%20User%20Representation%20Learning&rft.au=Bui,%20Duc&rft.date=2019-09-27&rft_id=info:doi/10.48550/arxiv.1909.12535&rft_dat=%3Carxiv_GOX%3E1909_12535%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |