Aligning LLMs to Be Robust Against Prompt Injection
Large language models (LLMs) are becoming increasingly prevalent in modern software systems, interfacing between the user and the internet to assist with tasks that require advanced language understanding. To accomplish these tasks, the LLM often uses external data sources such as user documents, we...
Gespeichert in:
Hauptverfasser: | , , , , |
---|---|
Format: | Artikel |
Sprache: | eng |
Schlagworte: | |
Online-Zugang: | Volltext bestellen |
Tags: |
Tag hinzufügen
Keine Tags, Fügen Sie den ersten Tag hinzu!
|
Zusammenfassung: | Large language models (LLMs) are becoming increasingly prevalent in modern
software systems, interfacing between the user and the internet to assist with
tasks that require advanced language understanding. To accomplish these tasks,
the LLM often uses external data sources such as user documents, web retrieval,
results from API calls, etc. This opens up new avenues for attackers to
manipulate the LLM via prompt injection. Adversarial prompts can be carefully
crafted and injected into external data sources to override the user's intended
instruction and instead execute a malicious instruction. Prompt injection
attacks constitute a major threat to LLM security, making the design and
implementation of practical countermeasures of paramount importance. To this
end, we show that alignment can be a powerful tool to make LLMs more robust
against prompt injection. Our method -- SecAlign -- first builds an alignment
dataset by simulating prompt injection attacks and constructing pairs of
desirable and undesirable responses. Then, we apply existing alignment
techniques to fine-tune the LLM to be robust against these simulated attacks.
Our experiments show that SecAlign robustifies the LLM substantially with a
negligible hurt on model utility. Moreover, SecAlign's protection generalizes
to strong attacks unseen in training. Specifically, the success rate of
state-of-the-art GCG-based prompt injections drops from 56% to 2% in Mistral-7B
after our alignment process. Our code is released at
https://github.com/facebookresearch/SecAlign |
---|---|
DOI: | 10.48550/arxiv.2410.05451 |