Protecting Privacy Through Approximating Optimal Parameters for Sequence Unlearning in Language Models
| Main authors: | |
| --- | --- |
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Summary: Although language models (LMs) demonstrate exceptional capabilities on various tasks, they are potentially vulnerable to extraction attacks, which pose a significant privacy risk. To mitigate these privacy concerns, machine unlearning has emerged as an important research area; it is used to induce an LM to selectively forget some of its training data. While completely retraining the model guarantees successful unlearning and privacy assurance, doing so is impractical for LMs because it is time-consuming and resource-intensive. Prior works unlearn target token sequences efficiently, but over subsequent unlearning iterations the LM's performance degrades significantly. In this work, we propose Privacy Protection via Optimal Parameters (POP), a novel unlearning method that effectively forgets the target token sequences from a pretrained LM by applying optimal gradient updates to its parameters. Inspired by the gradient derivation of complete retraining, we approximate the optimal training objective that successfully unlearns the target sequence while retaining the knowledge from the rest of the training data. Experimental results demonstrate that POP exhibits remarkable retention performance post-unlearning across 9 classification and 4 dialogue benchmarks, outperforming the state of the art by a large margin. Furthermore, we introduce Remnant Memorization Accuracy, a metric that quantifies privacy risk based on token likelihood, and validate its effectiveness through both qualitative and quantitative analyses.
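
The abstract describes POP and Remnant Memorization Accuracy only at a high level; the concrete retraining-gradient approximation and the exact metric definition are given in the paper itself. Purely as a point of reference, the sketch below shows (i) the generic gradient-ascent form of sequence unlearning that methods like POP build on and (ii) a token-likelihood probe in the spirit of a memorization metric. The model name, optimizer, learning rate, and scoring rule are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: neither POP's update rule nor the exact RMA metric
# is specified in the abstract, so this shows the generic ingredients they
# build on (gradient-based sequence unlearning, token-likelihood probing).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: any Hugging Face causal LM works for the demo
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
optimizer = torch.optim.SGD(model.parameters(), lr=5e-5)


def unlearning_step(target_text: str) -> float:
    """One generic unlearning update: gradient *ascent* on the target sequence's
    negative log-likelihood. POP instead chooses the update to approximate the
    gradient of full retraining without the target sequence (see the paper)."""
    batch = tokenizer(target_text, return_tensors="pt")
    out = model(**batch, labels=batch["input_ids"])
    (-out.loss).backward()   # negate the LM loss so the step raises the target NLL
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()   # target NLL before the step


@torch.no_grad()
def token_likelihoods(target_text: str) -> torch.Tensor:
    """Per-token probabilities of the target sequence under teacher forcing;
    a likelihood-based memorization score can be derived from these."""
    ids = tokenizer(target_text, return_tensors="pt")["input_ids"]
    logits = model(input_ids=ids).logits           # (1, seq_len, vocab)
    probs = torch.softmax(logits[0, :-1], dim=-1)  # prefix at t predicts token t+1
    return probs.gather(-1, ids[0, 1:].unsqueeze(-1)).squeeze(-1)
```

In this sketch, repeated calls to `unlearning_step` drive down the probabilities returned by `token_likelihoods` for the target sequence; the paper's contribution is shaping that update so performance on the remaining training data is retained across successive unlearning iterations.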
DOI: 10.48550/arxiv.2406.14091