Defending Against Indirect Prompt Injection Attacks With Spotlighting
Large Language Models (LLMs), while powerful, are built and trained to process a single text input. In common applications, multiple inputs can be processed by concatenating them together into a single stream of text. However, the LLM is unable to distinguish which sections of prompt belong to vario...
Gespeichert in:
Hauptverfasser: | , , , , , |
---|---|
Format: | Artikel |
Sprache: | eng |
Schlagworte: | |
Online-Zugang: | Volltext bestellen |
Tags: |
Tag hinzufügen
Keine Tags, Fügen Sie den ersten Tag hinzu!
|
Zusammenfassung: | Large Language Models (LLMs), while powerful, are built and trained to
process a single text input. In common applications, multiple inputs can be
processed by concatenating them together into a single stream of text. However,
the LLM is unable to distinguish which sections of prompt belong to various
input sources. Indirect prompt injection attacks take advantage of this
vulnerability by embedding adversarial instructions into untrusted data being
processed alongside user commands. Often, the LLM will mistake the adversarial
instructions as user commands to be followed, creating a security vulnerability
in the larger system. We introduce spotlighting, a family of prompt engineering
techniques that can be used to improve LLMs' ability to distinguish among
multiple sources of input. The key insight is to utilize transformations of an
input to provide a reliable and continuous signal of its provenance. We
evaluate spotlighting as a defense against indirect prompt injection attacks,
and find that it is a robust defense that has minimal detrimental impact to
underlying NLP tasks. Using GPT-family models, we find that spotlighting
reduces the attack success rate from greater than {50}\% to below {2}\% in our
experiments with minimal impact on task efficacy. |
---|---|
DOI: | 10.48550/arxiv.2403.14720 |