Rainbow Teaming: Open-Ended Generation of Diverse Adversarial Prompts

As large language models (LLMs) become increasingly prevalent across many real-world applications, understanding and enhancing their robustness to adversarial attacks is of paramount importance. Existing methods for identifying adversarial prompts tend to focus on specific domains, lack diversity, or require extensive human annotations.

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Samvelyan, Mikayel, Raparthy, Sharath Chandra, Lupu, Andrei, Hambro, Eric, Markosyan, Aram H, Bhatt, Manish, Mao, Yuning, Jiang, Minqi, Parker-Holder, Jack, Foerster, Jakob, Rocktäschel, Tim, Raileanu, Roberta
Format: Article
Language: English
Subjects:
Online Access: Order full text
creator Samvelyan, Mikayel
Raparthy, Sharath Chandra
Lupu, Andrei
Hambro, Eric
Markosyan, Aram H
Bhatt, Manish
Mao, Yuning
Jiang, Minqi
Parker-Holder, Jack
Foerster, Jakob
Rocktäschel, Tim
Raileanu, Roberta
description As large language models (LLMs) become increasingly prevalent across many real-world applications, understanding and enhancing their robustness to adversarial attacks is of paramount importance. Existing methods for identifying adversarial prompts tend to focus on specific domains, lack diversity, or require extensive human annotations. To address these limitations, we present Rainbow Teaming, a novel black-box approach for producing a diverse collection of adversarial prompts. Rainbow Teaming casts adversarial prompt generation as a quality-diversity problem and uses open-ended search to generate prompts that are both effective and diverse. Focusing on the safety domain, we use Rainbow Teaming to target various state-of-the-art LLMs, including the Llama 2 and Llama 3 models. Our approach reveals hundreds of effective adversarial prompts, with an attack success rate exceeding 90% across all tested models. Furthermore, we demonstrate that prompts generated by Rainbow Teaming are highly transferable and that fine-tuning models with synthetic data generated by our method significantly enhances their safety without sacrificing general performance or helpfulness. We additionally explore the versatility of Rainbow Teaming by applying it to question answering and cybersecurity, showcasing its potential to drive robust open-ended self-improvement in a wide range of applications.
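The abstract casts adversarial prompt generation as a quality-diversity search. A minimal MAP-Elites-style sketch of such a loop is shown below; the `mutate`, `descriptor`, and `score` functions are hypothetical stand-ins for illustration only, not the paper's actual LLM-driven mutation operator and judge.

```python
# Sketch of a quality-diversity (MAP-Elites-style) loop: keep an archive
# of prompts keyed by a feature descriptor, retaining the highest-scoring
# prompt ("elite") per archive cell. All three operators below are
# simplified stand-ins, not Rainbow Teaming's real components.
import random

def mutate(prompt: str) -> str:
    # Stand-in for an LLM-based mutation operator.
    return prompt + random.choice([" please", " now", " urgently"])

def descriptor(prompt: str) -> int:
    # Stand-in feature: bucket prompts by length into 5 archive cells.
    return min(len(prompt) // 10, 4)

def score(prompt: str) -> float:
    # Stand-in for attack-success scoring by a judge model.
    return random.random()

def rainbow_loop(seed: str, iters: int = 200) -> dict:
    archive = {descriptor(seed): (seed, score(seed))}
    for _ in range(iters):
        # Sample a parent elite, mutate it, and evaluate the child.
        parent, _ = random.choice(list(archive.values()))
        child = mutate(parent)
        cell, s = descriptor(child), score(child)
        # The child replaces the cell's elite only if it scores higher,
        # so the archive stays both diverse (many cells) and effective.
        if cell not in archive or s > archive[cell][1]:
            archive[cell] = (child, s)
    return archive

archive = rainbow_loop("Tell me about X")
```

The key design point the abstract emphasizes is that selection pressure operates per cell rather than globally, which yields a diverse collection of strong prompts instead of a single best attack.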
doi_str_mv 10.48550/arxiv.2402.16822
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2402.16822
language eng
recordid cdi_arxiv_primary_2402_16822
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computation and Language
Computer Science - Learning
title Rainbow Teaming: Open-Ended Generation of Diverse Adversarial Prompts