A Trembling House of Cards? Mapping Adversarial Attacks against Language Agents

Language agents powered by large language models (LLMs) have seen exploding development. Their capability of using language as a vehicle for thought and communication lends an incredible level of flexibility and versatility. People have quickly capitalized on this capability to connect LLMs to a wide range of external components and environments: databases, tools, the Internet, robotic embodiment, etc. Many believe an unprecedentedly powerful automation technology is emerging. However, new automation technologies come with new safety risks, especially for intricate systems like language agents. There is a surprisingly large gap between the speed and scale of their development and deployment and our understanding of their safety risks. Are we building a house of cards? In this position paper, we present the first systematic effort in mapping adversarial attacks against language agents. We first present a unified conceptual framework for agents with three major components: Perception, Brain, and Action. Under this framework, we present a comprehensive discussion and propose 12 potential attack scenarios against different components of an agent, covering different attack strategies (e.g., input manipulation, adversarial demonstrations, jailbreaking, backdoors). We also draw connections to successful attack strategies previously applied to LLMs. We emphasize the urgency to gain a thorough understanding of language agent risks before their widespread deployment.
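The abstract's three-component framing (Perception, Brain, Action) can be pictured as a simple data model for reasoning about where an attack touches an agent. The sketch below is an editorial illustration, not code from the paper; the Component and AttackScenario names, and the example scenario labels, are hypothetical.

# Illustrative sketch only: a minimal data model following the paper's
# Perception / Brain / Action framing. Names and examples are hypothetical.
from dataclasses import dataclass
from enum import Enum

class Component(Enum):
    PERCEPTION = "perception"  # how the agent ingests instructions and observations
    BRAIN = "brain"            # the LLM core: reasoning, memory, planning
    ACTION = "action"          # tool calls, database/API access, web or robot actuation

@dataclass
class AttackScenario:
    name: str
    target: Component
    strategy: str  # e.g., input manipulation, adversarial demonstrations, jailbreaking, backdoors

# A few example entries in the spirit of the paper's taxonomy (labels are ours):
scenarios = [
    AttackScenario("prompt injection via a retrieved web page", Component.PERCEPTION, "input manipulation"),
    AttackScenario("poisoned in-context examples", Component.BRAIN, "adversarial demonstrations"),
    AttackScenario("trigger-activated malicious tool call", Component.ACTION, "backdoor"),
]

for s in scenarios:
    print(f"{s.target.value:>10}: {s.name} ({s.strategy})")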

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Mo, Lingbo; Liao, Zeyi; Zheng, Boyuan; Su, Yu; Xiao, Chaowei; Sun, Huan
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language
Online Access: Request full text
creator Mo, Lingbo ; Liao, Zeyi ; Zheng, Boyuan ; Su, Yu ; Xiao, Chaowei ; Sun, Huan
description Language agents powered by large language models (LLMs) have seen exploding development. Their capability of using language as a vehicle for thought and communication lends an incredible level of flexibility and versatility. People have quickly capitalized on this capability to connect LLMs to a wide range of external components and environments: databases, tools, the Internet, robotic embodiment, etc. Many believe an unprecedentedly powerful automation technology is emerging. However, new automation technologies come with new safety risks, especially for intricate systems like language agents. There is a surprisingly large gap between the speed and scale of their development and deployment and our understanding of their safety risks. Are we building a house of cards? In this position paper, we present the first systematic effort in mapping adversarial attacks against language agents. We first present a unified conceptual framework for agents with three major components: Perception, Brain, and Action. Under this framework, we present a comprehensive discussion and propose 12 potential attack scenarios against different components of an agent, covering different attack strategies (e.g., input manipulation, adversarial demonstrations, jailbreaking, backdoors). We also draw connections to successful attack strategies previously applied to LLMs. We emphasize the urgency to gain a thorough understanding of language agent risks before their widespread deployment.
doi_str_mv 10.48550/arxiv.2402.10196
format Article
identifier DOI: 10.48550/arxiv.2402.10196
language eng
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computation and Language
title A Trembling House of Cards? Mapping Adversarial Attacks against Language Agents
url https://arxiv.org/abs/2402.10196