POEX: Policy Executable Embodied AI Jailbreak Attacks
Format: | Article |
Language: | eng |
Abstract: | The integration of large language models (LLMs) into the planning module of
Embodied Artificial Intelligence (Embodied AI) systems has greatly enhanced
their ability to translate complex user instructions into executable policies.
In this paper, we demystified how traditional LLM jailbreak attacks behave in
the Embodied AI context. We conducted a comprehensive safety analysis of the
LLM-based planning module of embodied AI systems against jailbreak attacks.
Using the carefully crafted Harmful-RLbench, we assessed 20 open-source and
proprietary LLMs under traditional jailbreak attacks and highlighted two key
challenges in adapting prior jailbreak techniques to embodied AI contexts:
(1) harmful text output by an LLM does not necessarily induce harmful policies
in the Embodied AI context, and (2) even when harmful policies can be
generated, they must be guaranteed to be executable in practice. To overcome
these challenges, we propose Policy Executable (POEX) jailbreak attacks, in
which harmful instructions and optimized suffixes are injected into LLM-based
planning modules, leading embodied AI to perform harmful actions in both
simulated and physical environments. Our approach involves constraining
adversarial suffixes to evade detection and fine-tuning a policy evaluator to
improve the executability of harmful policies. We conducted extensive
experiments on both a robotic-arm embodied AI platform and simulators to
validate the attack and policy success rates on 136 harmful instructions from
Harmful-RLbench. Our findings expose serious safety vulnerabilities in
LLM-based planning modules, including the transferability of POEX across
models. Finally, we propose mitigation strategies, such as safety-constrained
prompts and pre- and post-planning checks, to address these vulnerabilities
and ensure the safe deployment of embodied AI in real-world settings. |
DOI: | 10.48550/arxiv.2412.16633 |
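
The abstract states that POEX injects harmful instructions together with optimized adversarial suffixes into the LLM-based planning module. The paper's actual optimizer is not described in this record, so the sketch below is only an illustrative stand-in: a naive random-search suffix optimization (in place of whatever gradient-guided method the authors use) that looks for a suffix lowering the loss of a fixed target policy string under a small causal LM. The model name (`gpt2`), the prompt format, and the placeholder instruction and target strings are assumptions, not details from the paper.

```python
"""Minimal sketch of adversarial-suffix optimization against an LLM planner.

This is NOT the POEX method from the paper; it is a hedged illustration using
random search over suffix tokens. All strings and the model are placeholders.
"""
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"                                   # placeholder model, not from the paper
INSTRUCTION = "<harmful instruction placeholder>"     # placeholder user instruction
TARGET_POLICY = "step 1: <target policy placeholder>" # placeholder policy the attacker wants emitted
SUFFIX_LEN = 16
STEPS = 200

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def target_loss(suffix_ids: torch.Tensor) -> float:
    """Cross-entropy of the target policy tokens given instruction + suffix."""
    prompt_ids = tok(INSTRUCTION, return_tensors="pt").input_ids[0]
    target_ids = tok(TARGET_POLICY, return_tensors="pt").input_ids[0]
    input_ids = torch.cat([prompt_ids, suffix_ids, target_ids]).unsqueeze(0)
    labels = input_ids.clone()
    # Score only the target-policy span; ignore prompt and suffix positions.
    labels[0, : prompt_ids.numel() + suffix_ids.numel()] = -100
    with torch.no_grad():
        out = model(input_ids, labels=labels)
    return out.loss.item()


# Random-search optimization: propose single-token swaps, keep improvements.
suffix = torch.randint(0, tok.vocab_size, (SUFFIX_LEN,))
best = target_loss(suffix)
for _ in range(STEPS):
    candidate = suffix.clone()
    candidate[torch.randint(0, SUFFIX_LEN, (1,))] = torch.randint(0, tok.vocab_size, (1,))
    loss = target_loss(candidate)
    if loss < best:  # lower loss => planner more likely to emit the target policy
        suffix, best = candidate, loss

print("optimized suffix:", tok.decode(suffix), "| loss:", round(best, 3))
```

A faithful reproduction would replace the random search with the paper's optimized-suffix procedure, add the detection-evasion constraint on the suffix, and score candidate policies with the fine-tuned policy evaluator mentioned in the abstract rather than a plain language-model loss.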