A Comprehensive Survey of Attack Techniques, Implementation, and Mitigation Strategies in Large Language Models
Abstract: Ensuring the security of large language models (LLMs) is an ongoing challenge despite their widespread popularity. Developers work to enhance LLM security, but vulnerabilities persist, even in advanced versions like GPT-4. Attackers exploit these weaknesses, highlighting the need for proactive cybersecurity measures in AI model development. This article explores two attack categories: attacks on the models themselves and attacks on model applications. The former requires expertise, access to model data, and significant implementation time, while the latter is more accessible to attackers and has received increased attention. Our study reviews over 100 recent research works, providing an in-depth analysis of each attack type. We identify the latest attack methods and explore various approaches to carrying them out. We thoroughly investigate mitigation techniques, assessing their effectiveness and limitations. Furthermore, we summarize future defenses against these attacks. We also examine real-world techniques, including reported attacks and attacks we implemented against LLMs, to consolidate our findings. Our research highlights the urgency of addressing security concerns and aims to enhance the understanding of LLM attacks, contributing to the development of robust defenses in this evolving domain.
DOI: 10.48550/arxiv.2312.10982