Protecting Privacy Through Approximating Optimal Parameters for Sequence Unlearning in Language Models

Although language models (LMs) demonstrate exceptional capabilities on various tasks, they are potentially vulnerable to extraction attacks, which represent a significant privacy risk. To mitigate the privacy concerns of LMs, machine unlearning has emerged as an important research area, which is utilized to induce the LM to selectively forget some of its training data. While completely retraining the model would guarantee successful unlearning and privacy assurance, it is impractical for LMs, as it would be time-consuming and resource-intensive. Prior works unlearn target token sequences efficiently, but over subsequent unlearning iterations the LM displays significant degradation in performance. In this work, we propose Privacy Protection via Optimal Parameters (POP), a novel unlearning method that effectively forgets the target token sequences from the pretrained LM by applying optimal gradient updates to the parameters. Inspired by the gradient derivation of complete retraining, we approximate the optimal training objective that successfully unlearns the target sequence while retaining the knowledge from the rest of the training data. Experimental results demonstrate that POP exhibits remarkable retention performance post-unlearning across 9 classification and 4 dialogue benchmarks, outperforming the state-of-the-art by a large margin. Furthermore, we introduce Remnant Memorization Accuracy, which quantifies privacy risks based on token likelihood, and validate its effectiveness through both qualitative and quantitative analyses.
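The record gives only this high-level description of the method, not its exact objective. As an illustration of the general recipe the abstract describes (gradient updates that raise the loss on the forget sequence while a retention term preserves the rest of the training data), here is a minimal PyTorch-style sketch. The combined loss, its weighting, and all names (`forget_batch`, `retain_batch`, `retain_weight`) are assumptions for illustration, not the paper's actual POP objective; the model is assumed to be a Hugging Face-style causal LM whose forward pass returns `.logits`.

```python
# Minimal sketch of sequence unlearning (illustrative, not the POP paper's
# exact objective): ascend the LM loss on the forget sequence while
# descending on retained data, so the update approximates what a full
# retrain without the forget sequence would have produced.
import torch
from torch.nn.functional import cross_entropy

def unlearning_step(model, optimizer, forget_batch, retain_batch, retain_weight=1.0):
    """One hedged unlearning step: maximize NLL on the forget sequence,
    minimize NLL on a batch of retained training data. `retain_weight`
    is an assumed hyperparameter balancing the two terms."""
    optimizer.zero_grad()

    # Negative log-likelihood of the target (forget) token sequence.
    forget_logits = model(forget_batch[:, :-1]).logits
    forget_nll = cross_entropy(
        forget_logits.reshape(-1, forget_logits.size(-1)),
        forget_batch[:, 1:].reshape(-1),
    )

    # Standard language-modeling loss on retained data, to keep the
    # knowledge from the rest of the training set intact.
    retain_logits = model(retain_batch[:, :-1]).logits
    retain_nll = cross_entropy(
        retain_logits.reshape(-1, retain_logits.size(-1)),
        retain_batch[:, 1:].reshape(-1),
    )

    # Gradient ascent on the forget term, descent on the retain term.
    loss = -forget_nll + retain_weight * retain_nll
    loss.backward()
    optimizer.step()
    return forget_nll.item(), retain_nll.item()
```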

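Likewise, the record does not state the formula behind Remnant Memorization Accuracy. As a hedged sketch of the idea it names (quantifying privacy risk from the likelihood the model still assigns to the target sequence's tokens), the snippet below scores a sequence by the fraction of tokens whose predicted probability exceeds a cutoff. The function name and the `threshold` parameter are illustrative assumptions, not the paper's definition.

```python
# Hedged sketch of a token-likelihood memorization score: the fraction of
# target tokens to which the model still assigns high probability. An
# illustrative stand-in, not the paper's Remnant Memorization Accuracy.
import torch

@torch.no_grad()
def token_likelihood_score(model, sequence, threshold=0.5):
    """Fraction of next-token predictions where the true token's probability
    exceeds `threshold` (an assumed cutoff); lower suggests less remnant
    memorization of `sequence` (shape: 1 x T token ids)."""
    logits = model(sequence[:, :-1]).logits              # (1, T-1, vocab)
    probs = torch.softmax(logits, dim=-1)
    targets = sequence[:, 1:].unsqueeze(-1)              # (1, T-1, 1)
    token_probs = probs.gather(-1, targets).squeeze(-1)  # (1, T-1)
    return (token_probs > threshold).float().mean().item()
```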

Bibliographic Details
Main Authors: Lee, Dohyun; Rim, Daniel; Choi, Minseok; Choo, Jaegul
Format: Article
Language: English
Published: 2024-06-20
Subjects: Computer Science - Computation and Language
DOI: 10.48550/arxiv.2406.14091
Source: arXiv.org
Online Access: https://arxiv.org/abs/2406.14091