Flamingo: Multi-Round Single-Server Secure Aggregation with Applications to Private Federated Learning

This paper introduces Flamingo, a system for secure aggregation of data across a large set of clients. In secure aggregation, a server sums up the private inputs of clients and obtains the result without learning anything about the individual inputs beyond what is implied by the final sum. Flamingo...
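As a rough illustration of what "secure aggregation" means in the abstract above, the sketch below shows the pairwise additive-masking idea used by the single-round protocols the paper builds on (Bell et al., CCS '20): each client uploads only a masked value, and the pairwise pads cancel in the sum, so the server learns the total but no individual input. This is a minimal toy, not Flamingo's protocol; the modulus Q, the client ids, and the seed derivation are illustrative assumptions only.

# Toy sketch of pairwise additive masking (not Flamingo's actual protocol).
# Q, the client ids, and the pair-seed function are illustrative assumptions.

Q = 2**32  # all values and masks live in Z_Q so the pads wrap around cleanly

def pairwise_masks(client_ids, pair_seed):
    """Per-client masks that sum to zero modulo Q across all clients."""
    masks = {c: 0 for c in client_ids}
    for i in client_ids:
        for j in client_ids:
            if i < j:
                r = pair_seed(i, j) % Q        # randomness shared by the pair (i, j)
                masks[i] = (masks[i] + r) % Q  # client i adds the pad
                masks[j] = (masks[j] - r) % Q  # client j subtracts the same pad
    return masks

# Hypothetical private inputs: one quantized model weight per client.
clients = [1, 2, 3, 4]
inputs = {1: 10, 2: 20, 3: 30, 4: 40}

# In a real protocol the pair seed comes from a key agreement; here it is faked.
masks = pairwise_masks(clients, lambda i, j: hash((i, j)) & 0xFFFFFFFF)

# Each client uploads only its masked value, so the server never sees a raw input.
masked = {c: (inputs[c] + masks[c]) % Q for c in clients}

# The pairwise pads cancel, and the server recovers exactly the sum of the inputs.
assert sum(masked.values()) % Q == sum(inputs.values()) % Q
print(sum(masked.values()) % Q)  # -> 100

The toy above ignores the problems the paper actually addresses: per-round setup cost, clients dropping out before the masks can cancel, and how each client picks its neighborhood of mask-sharing peers.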

Detailed description

Saved in:
Bibliographic details
Main authors: Ma, Yiping, Woods, Jess, Angel, Sebastian, Polychroniadou, Antigoni, Rabin, Tal
Format: Conference Proceeding
Language: eng
Subjects:
Online access: Order full text
container_end_page 496
container_start_page 477
creator Ma, Yiping
Woods, Jess
Angel, Sebastian
Polychroniadou, Antigoni
Rabin, Tal
description This paper introduces Flamingo, a system for secure aggregation of data across a large set of clients. In secure aggregation, a server sums up the private inputs of clients and obtains the result without learning anything about the individual inputs beyond what is implied by the final sum. Flamingo focuses on the multi-round setting found in federated learning in which many consecutive summations (averages) of model weights are performed to derive a good model. Previous protocols, such as Bell et al. (CCS '20), have been designed for a single round and are adapted to the federated learning setting by repeating the protocol multiple times. Flamingo eliminates the need for the per-round setup of previous protocols, and has a new lightweight dropout resilience protocol to ensure that if clients leave in the middle of a sum the server can still obtain a meaningful result. Furthermore, Flamingo introduces a new way to locally choose the so-called client neighborhood introduced by Bell et al. These techniques help Flamingo reduce the number of interactions between clients and the server, resulting in a significant reduction in the end-to-end runtime for a full training session over prior work. We implement and evaluate Flamingo and show that it can securely train a neural network on the (Extended) MNIST and CIFAR-100 datasets, and the model converges without a loss in accuracy, compared to a non-private federated learning system.
doi_str_mv 10.1109/SP46215.2023.10179434
format Conference Proceeding
fulltext fulltext_linktorsrc
identifier EISSN: 2375-1207; EISBN: 1665493364, 9781665493369
ispartof 2023 IEEE Symposium on Security and Privacy (SP), 2023, p.477-496
issn 2375-1207
language eng
recordid cdi_ieee_primary_10179434
source IEEE Electronic Library (IEL)
subjects Adaptation models
Federated learning
Neural networks
Privacy
Protocols
Runtime
secure-aggregation
Training
title Flamingo: Multi-Round Single-Server Secure Aggregation with Applications to Private Federated Learning