Membership Inference Attacks against Machine Learning Models
We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset.
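The attack summarized here (and spelled out in full in the description field below) trains shadow models on data similar to the target's, labels their prediction vectors as "member" or "non-member", and fits an attack model on those labels. Below is a minimal illustrative sketch using synthetic data and scikit-learn; the model choices, dataset sizes, and the single (rather than per-class) attack model are simplifying assumptions for brevity, not the paper's actual setup.

```python
# Minimal sketch of a shadow-model membership inference attack.
# All datasets, models, and hyperparameters here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for the (unknown) distribution of the target's data.
X, y = make_classification(n_samples=6000, n_features=20, n_informative=10,
                           n_classes=2, random_state=0)

# The target model: the attacker only ever sees its predict_proba outputs.
target_in_X, target_in_y = X[:1000], y[:1000]            # its training records
target_out_X, target_out_y = X[1000:2000], y[1000:2000]  # records it never saw
target = RandomForestClassifier(random_state=0).fit(target_in_X, target_in_y)

# Shadow models: trained by the attacker on similar data, so membership
# ground truth is known for them.
attack_X, attack_y = [], []
shadow_pool_X, shadow_pool_y = X[2000:], y[2000:]
for i in range(5):
    idx = rng.permutation(len(shadow_pool_X))[:1000]
    in_idx, out_idx = idx[:500], idx[500:]
    shadow = RandomForestClassifier(random_state=i).fit(
        shadow_pool_X[in_idx], shadow_pool_y[in_idx])
    # Label the shadow model's own training records "member" (1) and its
    # held-out records "non-member" (0); features are the prediction vectors.
    attack_X.append(shadow.predict_proba(shadow_pool_X[in_idx]))
    attack_y.append(np.ones(len(in_idx)))
    attack_X.append(shadow.predict_proba(shadow_pool_X[out_idx]))
    attack_y.append(np.zeros(len(out_idx)))

# Attack model: classifies a prediction vector as member vs. non-member.
attack_model = LogisticRegression(max_iter=1000).fit(
    np.vstack(attack_X), np.concatenate(attack_y))

# Attack time: query the black-box target and ask the attack model whether
# the returned prediction vector looks like one produced for a training record.
member_guess = attack_model.predict(target.predict_proba(target_in_X))
nonmember_guess = attack_model.predict(target.predict_proba(target_out_X))
print("guessed-member rate on true members:    ", member_guess.mean())
print("guessed-member rate on true non-members:", nonmember_guess.mean())
```

Note that after the target is fit, the attack uses nothing but `target.predict_proba`, matching the black-box "machine learning as a service" setting the abstract describes.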
Saved in:
Published in: | arXiv.org 2017-03 |
---|---|
Main authors: | Shokri, Reza; Stronati, Marco; Song, Congzheng; Shmatikov, Vitaly |
Format: | Article |
Language: | eng |
Subjects: | Artificial intelligence; Classification; Inference; Machine learning; Target recognition |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Shokri, Reza; Stronati, Marco; Song, Congzheng; Shmatikov, Vitaly |
description | We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2017-03 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2075229072 |
source | Free E-Journals |
subjects | Artificial intelligence; Classification; Inference; Machine learning; Target recognition |
title | Membership Inference Attacks against Machine Learning Models |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-28T00%3A50%3A01IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Membership%20Inference%20Attacks%20against%20Machine%20Learning%20Models&rft.jtitle=arXiv.org&rft.au=Shokri,%20Reza&rft.date=2017-03-31&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2075229072%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2075229072&rft_id=info:pmid/&rfr_iscdi=true |