Privacy in Large Language Models: Attacks, Defenses and Future Directions
| Main authors: | |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Abstract:
The advancement of large language models (LLMs) has significantly enhanced the ability to tackle various downstream NLP tasks and to unify these tasks into generative pipelines. On the one hand, powerful language models, trained on massive textual data, have brought unparalleled accessibility and usability for both models and users. On the other hand, unrestricted access to these models can also introduce potential malicious and unintentional privacy risks. Despite ongoing efforts to address the safety and privacy concerns associated with LLMs, the problem remains unresolved. In this paper, we provide a comprehensive analysis of current privacy attacks targeting LLMs and categorize them according to the adversary's assumed capabilities, shedding light on the potential vulnerabilities present in LLMs. We then present a detailed overview of prominent defense strategies that have been developed to counter these privacy attacks. Beyond existing work, we identify emerging privacy concerns as LLMs evolve. Lastly, we point out several potential avenues for future exploration.
DOI: 10.48550/arxiv.2310.10383