Exploiting Unintended Feature Leakage in Collaborative Learning

Collaborative machine learning and related techniques such as federated learning allow multiple participants, each with his own training dataset, to build a joint model by training locally and periodically exchanging model updates. We demonstrate that these updates leak unintended information about participants' training data and develop passive and active inference attacks to exploit this leakage. First, we show that an adversarial participant can infer the presence of exact data points -- for example, specific locations -- in others' training data (i.e., membership inference). Then, we show how this adversary can infer properties that hold only for a subset of the training data and are independent of the properties that the joint model aims to capture. For example, he can infer when a specific person first appears in the photos used to train a binary gender classifier. We evaluate our attacks on a variety of tasks, datasets, and learning configurations, analyze their limitations, and discuss possible defenses.
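The core leakage channel behind the membership-inference attack is easiest to see in models with an embedding layer, which the paper studies for text data: a participant's gradient update touches only the embedding rows for tokens that appear in its private batch, so whoever observes the exchanged update learns which tokens the participant trained on. The NumPy sketch below is a toy reconstruction of that observation, not the authors' code; every name, dimension, and learning rate is invented for illustration. A companion sketch of the property-inference attack appears at the end of this record.

```python
# Toy sketch of embedding-gradient leakage in collaborative training.
# Assumption: SGD on an embedding layer only changes rows for words that
# occur in the local batch, so the update delta reveals batch membership.
import numpy as np

vocab_size, embed_dim = 1000, 32
rng = np.random.default_rng(0)

# Joint model state before the victim's local training step.
embeddings_before = rng.normal(size=(vocab_size, embed_dim))

# The victim trains on a private batch; only rows for words in the
# batch receive a gradient, so only those rows change.
private_batch = [17, 42, 42, 993]  # word ids in the victim's batch
embeddings_after = embeddings_before.copy()
for word_id in private_batch:
    # Stand-in for one SGD step on that row (0.01 = made-up learning rate).
    embeddings_after[word_id] -= 0.01 * rng.normal(size=embed_dim)

# The adversary observes the exchanged update (the delta) and reads off
# which words were present in the victim's batch.
update = embeddings_after - embeddings_before
leaked_words = np.nonzero(np.abs(update).sum(axis=1) > 1e-12)[0]
print("words inferred from the update:", leaked_words)  # [17 42 993]
```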

Full Description

Saved in:
Bibliographic Details
Main Authors: Melis, Luca; Song, Congzheng; De Cristofaro, Emiliano; Shmatikov, Vitaly
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Order full text
creator Melis, Luca; Song, Congzheng; De Cristofaro, Emiliano; Shmatikov, Vitaly
description Collaborative machine learning and related techniques such as federated learning allow multiple participants, each with his own training dataset, to build a joint model by training locally and periodically exchanging model updates. We demonstrate that these updates leak unintended information about participants' training data and develop passive and active inference attacks to exploit this leakage. First, we show that an adversarial participant can infer the presence of exact data points -- for example, specific locations -- in others' training data (i.e., membership inference). Then, we show how this adversary can infer properties that hold only for a subset of the training data and are independent of the properties that the joint model aims to capture. For example, he can infer when a specific person first appears in the photos used to train a binary gender classifier. We evaluate our attacks on a variety of tasks, datasets, and learning configurations, analyze their limitations, and discuss possible defenses.
doi_str_mv 10.1109/SP.2019.00029
format Conference Proceeding
identifier EISSN: 2375-1207; EISBN: 9781538666609, 153866660X
ispartof 2019 IEEE Symposium on Security and Privacy (SP), 2019, pp. 691-706
issn 2375-1207
language eng
recordid cdi_ieee_primary_8835269
source IEEE Electronic Library (IEL)
subjects Collaborative work
collaborative-learning
Computational modeling
Data models
deep-learning
inference-attacks
privacy
security
Servers
Task analysis
Training
Training data
title Exploiting Unintended Feature Leakage in Collaborative Learning
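Complementing the membership sketch above: the paper's passive property-inference attack has the adversary use auxiliary data to generate model updates computed with and without the target property, train a binary classifier on those updates, and apply it to updates observed from other participants. The sketch below mimics that pipeline with synthetic "gradients"; the feature construction, the dimensions, and the simulate_update helper are hypothetical stand-ins, not the paper's implementation.

```python
# Toy sketch of passive property inference from observed model updates.
# Assumption: updates computed on batches with the property carry a small
# statistical signal that a binary classifier can pick up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_train, update_dim = 200, 64

def simulate_update(has_property: bool) -> np.ndarray:
    """Hypothetical stand-in for a gradient update on one local batch;
    batches with the property shift a few coordinates slightly."""
    g = rng.normal(size=update_dim)
    if has_property:
        g[:4] += 0.8  # illustrative, invented property signal
    return g

# Labeled updates the adversary generates from its own auxiliary data.
labels = rng.integers(0, 2, size=n_train)
updates = np.stack([simulate_update(bool(y)) for y in labels])

clf = LogisticRegression(max_iter=1000).fit(updates, labels)

# Apply the classifier to an update observed from a victim participant.
observed = simulate_update(has_property=True)
print("P(property) =", clf.predict_proba(observed.reshape(1, -1))[0, 1])
```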