Privacy Leakage in Text Classification: A Data Extraction Approach
Saved in:
Main Authors: | Elmahdy, Adel; Inan, Huseyin A; Sim, Robert |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computation and Language; Computer Science - Cryptography and Security; Computer Science - Learning |
Online Access: | Order full text |
creator | Elmahdy, Adel; Inan, Huseyin A; Sim, Robert |
---|---|
description | Recent work has demonstrated the successful extraction of training data from generative language models. However, it is not evident whether such extraction is feasible in text classification models since the training objective is to predict the class label as opposed to next-word prediction. This poses an interesting challenge and raises an important question regarding the privacy of training data in text classification settings. Therefore, we study the potential privacy leakage in the text classification domain by investigating the problem of unintended memorization of training data that is not pertinent to the learning task. We propose an algorithm to extract missing tokens of a partial text by exploiting the likelihood of the class label provided by the model. We test the effectiveness of our algorithm by inserting canaries into the training set and attempting to extract tokens in these canaries post-training. In our experiments, we demonstrate that successful extraction is possible to some extent. This can also be used as an auditing strategy to assess any potential unauthorized use of personal data without consent. (A toy sketch of this extraction loop follows the record below.) |
doi_str_mv | 10.48550/arxiv.2206.04591 |
format | Article |
creationdate | 2022-06-09 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2206.04591 |
language | eng |
recordid | cdi_arxiv_primary_2206_04591 |
source | arXiv.org |
subjects | Computer Science - Computation and Language; Computer Science - Cryptography and Security; Computer Science - Learning |
title | Privacy Leakage in Text Classification: A Data Extraction Approach |
url | https://arxiv.org/abs/2206.04591 |
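The description field outlines the paper's core attack: recover a missing token of a partial training text by querying the trained classifier and keeping whichever candidate maximizes the likelihood of the known class label. Below is a minimal Python sketch of that fill-in-the-blank search, not the authors' implementation: `label_likelihood`, the `[MASK]` placeholder, and the candidate vocabulary are all hypothetical stand-ins for the attacker's query access to the model, which the record does not specify.

```python
# Toy sketch of class-label-guided token extraction, as described in the
# abstract. `label_likelihood` is a hypothetical attacker query returning
# P(label | text) from the trained classifier; it is not the authors' code.
from typing import Callable, Sequence


def extract_token(
    partial_text: str,                              # contains one "[MASK]" slot
    label: str,                                     # class label the canary carried
    vocabulary: Sequence[str],                      # candidate tokens to try
    label_likelihood: Callable[[str, str], float],  # model query: P(label | text)
) -> str:
    """Return the candidate that maximizes the model's likelihood of `label`."""
    best_token, best_score = "", float("-inf")
    for token in vocabulary:
        candidate = partial_text.replace("[MASK]", token, 1)
        score = label_likelihood(candidate, label)
        if score > best_score:
            best_token, best_score = token, score
    return best_token


if __name__ == "__main__":
    # Dummy scorer standing in for a model that memorized a canary containing
    # "1234": it assigns a higher label likelihood when that token is present.
    memorized = lambda text, label: 0.9 if "1234" in text else 0.1
    guess = extract_token(
        "the secret code is [MASK]", "spam", ["0000", "1234", "9999"], memorized
    )
    print(guess)  # -> "1234"
```

For multi-token canaries the same greedy query can be repeated one slot at a time, and the extraction success rate over many inserted canaries provides the auditing signal the abstract mentions; how well this works on real classifiers is exactly what the paper measures.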