Low-Resource Languages Jailbreak GPT-4


Bibliographic Details
Main authors: Yong, Zheng-Xin; Menghini, Cristina; Bach, Stephen H
Format: Article
Language: English

description AI safety training and red-teaming of large language models (LLMs) are measures to mitigate the generation of unsafe content. Our work exposes the inherent cross-lingual vulnerability of these safety mechanisms, resulting from the linguistic inequality of safety training data, by successfully circumventing GPT-4's safeguard through translating unsafe English inputs into low-resource languages. On the AdvBenchmark, GPT-4 engages with the unsafe translated inputs and provides actionable items that can get the users towards their harmful goals 79% of the time, which is on par with, or even surpasses, state-of-the-art jailbreaking attacks. Other high-/mid-resource languages have significantly lower attack success rates, which suggests that the cross-lingual vulnerability mainly applies to low-resource languages. Previously, limited training on low-resource languages primarily affected speakers of those languages, causing technological disparities. However, our work highlights a crucial shift: this deficiency now poses a risk to all LLM users. Publicly available translation APIs enable anyone to exploit LLMs' safety vulnerabilities. Therefore, our work calls for more holistic red-teaming efforts to develop robust multilingual safeguards with wide language coverage.
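
The description outlines the attack pipeline only in prose. As a rough illustration (not the authors' released code), the Python sketch below shows the translate-then-query loop it implies: an unsafe English prompt is machine-translated into a low-resource language, sent to GPT-4, and the reply is translated back for inspection. The `translate` helper, the function name `query_via_low_resource_language`, the Zulu language code, and the use of the OpenAI chat-completions SDK are all illustrative assumptions; any publicly available translation API and any AdvBenchmark prompt could be substituted.

```python
# Illustrative sketch only: translate an English prompt into a low-resource
# language, query GPT-4, and back-translate the answer. The `translate`
# helper is a placeholder for any publicly available translation API; it is
# NOT part of the paper's released code.

from openai import OpenAI  # OpenAI Python SDK v1.x

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def translate(text: str, target_lang: str) -> str:
    """Placeholder: call a public machine-translation service here."""
    raise NotImplementedError("plug in a translation service of your choice")


def query_via_low_resource_language(english_prompt: str, lang: str = "zu") -> str:
    """Send a prompt to GPT-4 after translating it into a low-resource language.

    lang="zu" (Zulu) is only an example language code; the study compares a
    range of low-, mid-, and high-resource languages.
    """
    translated_prompt = translate(english_prompt, target_lang=lang)

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": translated_prompt}],
    )
    reply = response.choices[0].message.content or ""

    # Back-translate so the response can be judged in English, e.g. whether
    # the model engaged with the request rather than refusing it.
    return translate(reply, target_lang="en")
```

Running such a loop over the AdvBenchmark prompts and counting how often GPT-4 engages rather than refuses would reproduce, in spirit, the attack-success measurement quoted in the description.
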
doi 10.48550/arxiv.2310.02446
creationdate 2023-10-03
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computation and Language
Computer Science - Cryptography and Security
Computer Science - Learning
url https://arxiv.org/abs/2310.02446