MRJ-Agent: An Effective Jailbreak Agent for Multi-Round Dialogue

Large Language Models (LLMs) demonstrate outstanding knowledge and understanding capabilities, but they have also been shown to produce illegal or unethical responses when subjected to jailbreak attacks. To ensure their responsible deployment in critical applications, it is crucial to understand the safety capabilities and vulnerabilities of LLMs. Previous work mainly focuses on jailbreaks in single-round dialogue, overlooking the potential jailbreak risks in multi-round dialogues, which are a vital way humans interact with and extract information from LLMs. Some studies have increasingly concentrated on the risks associated with jailbreaks in multi-round dialogues; these efforts typically rely on manually crafted templates or prompt-engineering techniques, but due to the inherent complexity of multi-round dialogues, their jailbreak performance is limited. To solve this problem, we propose a novel multi-round dialogue jailbreaking agent, emphasizing the importance of stealthiness in identifying and mitigating potential threats to human values posed by LLMs. We propose a risk decomposition strategy that distributes risks across multiple rounds of queries and utilizes psychological strategies to enhance attack strength. Extensive experiments show that our proposed method surpasses other attack methods and achieves a state-of-the-art attack success rate. We will make the corresponding code and dataset available for future research; the code will be released soon.

Detailed Description

Bibliographic Details
Published in: arXiv.org, 2024-11
Main authors: Wang, Fengxiang; Duan, Ranjie; Xiao, Peng; Jia, Xiaojun; Chen, YueFeng; Wang, Chongwen; Tao, Jialing; Su, Hang; Zhu, Jun; Xue, Hui
Format: Article
Language: eng
Subjects: Large language models; Prompt engineering
Online access: Full text
creator Wang, Fengxiang
Duan, Ranjie
Xiao, Peng
Jia, Xiaojun
Chen, YueFeng
Wang, Chongwen
Tao, Jialing
Su, Hang
Zhu, Jun
Xue, Hui
description Large Language Models (LLMs) demonstrate outstanding knowledge and understanding capabilities, but they have also been shown to produce illegal or unethical responses when subjected to jailbreak attacks. To ensure their responsible deployment in critical applications, it is crucial to understand the safety capabilities and vulnerabilities of LLMs. Previous work mainly focuses on jailbreaks in single-round dialogue, overlooking the potential jailbreak risks in multi-round dialogues, which are a vital way humans interact with and extract information from LLMs. Some studies have increasingly concentrated on the risks associated with jailbreaks in multi-round dialogues; these efforts typically rely on manually crafted templates or prompt-engineering techniques, but due to the inherent complexity of multi-round dialogues, their jailbreak performance is limited. To solve this problem, we propose a novel multi-round dialogue jailbreaking agent, emphasizing the importance of stealthiness in identifying and mitigating potential threats to human values posed by LLMs. We propose a risk decomposition strategy that distributes risks across multiple rounds of queries and utilizes psychological strategies to enhance attack strength. Extensive experiments show that our proposed method surpasses other attack methods and achieves a state-of-the-art attack success rate. We will make the corresponding code and dataset available for future research; the code will be released soon.
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-11
issn 2331-8422
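The EISSN above can be sanity-checked with the standard ISO 3297 check-digit rule: weight the first seven digits 8 down to 2, sum, and take the complement mod 11 (with 10 written as 'X'). A minimal sketch; `issn_check_digit` is an illustrative helper, not part of any catalog tooling:

```python
def issn_check_digit(issn7: str) -> str:
    # ISO 3297: weight the first seven digits 8..2, sum them,
    # then the check digit is (11 - sum mod 11) mod 11; 10 prints as 'X'.
    total = sum(int(d) * w for d, w in zip(issn7, range(8, 1, -1)))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

print(issn_check_digit("2331842"))  # → 2, so EISSN 2331-8422 is well-formed
```

For 2331-8422 the weighted sum is 108; 108 mod 11 is 9, and 11 − 9 = 2, matching the record's final digit.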
language eng
recordid cdi_proquest_journals_3125869736
source Free E-Journals
subjects Large language models
Prompt engineering
title MRJ-Agent: An Effective Jailbreak Agent for Multi-Round Dialogue
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-09T23%3A38%3A54IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=MRJ-Agent:%20An%20Effective%20Jailbreak%20Agent%20for%20Multi-Round%20Dialogue&rft.jtitle=arXiv.org&rft.au=Wang,%20Fengxiang&rft.date=2024-11-06&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3125869736%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3125869736&rft_id=info:pmid/&rfr_iscdi=true