The Task Shield: Enforcing Task Alignment to Defend Against Indirect Prompt Injection in LLM Agents
Large Language Model (LLM) agents are increasingly being deployed as conversational assistants capable of performing complex real-world tasks through tool integration. This enhanced ability to interact with external systems and process various data sources, while powerful, introduces significant sec...
Gespeichert in:
Hauptverfasser: | , , , |
---|---|
Format: | Artikel |
Sprache: | eng |
Schlagworte: | |
Online-Zugang: | Volltext bestellen |
Tags: |
Tag hinzufügen
Keine Tags, Fügen Sie den ersten Tag hinzu!
|
Zusammenfassung: | Large Language Model (LLM) agents are increasingly being deployed as
conversational assistants capable of performing complex real-world tasks
through tool integration. This enhanced ability to interact with external
systems and process various data sources, while powerful, introduces
significant security vulnerabilities. In particular, indirect prompt injection
attacks pose a critical threat, where malicious instructions embedded within
external data sources can manipulate agents to deviate from user intentions.
While existing defenses based on rule constraints, source spotlighting, and
authentication protocols show promise, they struggle to maintain robust
security while preserving task functionality. We propose a novel and orthogonal
perspective that reframes agent security from preventing harmful actions to
ensuring task alignment, requiring every agent action to serve user objectives.
Based on this insight, we develop Task Shield, a test-time defense mechanism
that systematically verifies whether each instruction and tool call contributes
to user-specified goals. Through experiments on the AgentDojo benchmark, we
demonstrate that Task Shield reduces attack success rates (2.07\%) while
maintaining high task utility (69.79\%) on GPT-4o. |
---|---|
DOI: | 10.48550/arxiv.2412.16682 |