CR-UTP: Certified Robustness against Universal Text Perturbations on Large Language Models

Bibliographic Details
Main Authors: Lou, Qian, Liang, Xin, Xue, Jiaqi, Zhang, Yancheng, Xie, Rui, Zheng, Mengxin
Format: Article
Language: English
Subjects: Computer Science - Computation and Language; Computer Science - Cryptography and Security; Computer Science - Learning
Online Access: Order full text
container_end_page
container_issue
container_start_page
container_title
container_volume
creator Lou, Qian
Liang, Xin
Xue, Jiaqi
Zhang, Yancheng
Xie, Rui
Zheng, Mengxin
description It is imperative to ensure the stability of every prediction made by a language model; that is, a language model's prediction should remain consistent despite minor input variations, like word substitutions. In this paper, we investigate the problem of certifying a language model's robustness against Universal Text Perturbations (UTPs), which have been widely used in universal adversarial attacks and backdoor attacks. Existing certified-robustness methods based on randomized smoothing have shown considerable promise in certifying against input-specific text perturbations (ISTPs), operating under the assumption that randomly altering either a sample's clean or adversarial words negates the impact of sample-wise perturbations. With UTPs, however, only masking the adversarial words can eliminate the attack. A naive method is to simply increase the masking ratio and thus the likelihood of masking attack tokens, but this leads to a significant reduction in both certified accuracy and certified radius, because extensive masking corrupts the input. To address this challenge, we introduce a novel approach, the superior prompt search method, designed to identify a superior prompt that maintains higher certified accuracy under extensive masking. Additionally, we theoretically motivate why ensembles are a particularly suitable choice as base prompts for randomized smoothing; we denote this method the superior prompt ensembling technique. We also empirically confirm this technique, obtaining state-of-the-art results in multiple settings. These methodologies, for the first time, enable high certified accuracy against both UTPs and ISTPs. The source code of CR-UTP is available at https://github.com/UCFML-Research/CR-UTP.
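The core mechanism described in the abstract is randomized smoothing by word masking, with predictions aggregated over an ensemble of base prompts. The following is a minimal, illustrative sketch of that idea only, not the authors' implementation (see the repository linked above for the actual code); `classify_fn`, `MASK_TOKEN`, and the default parameters are assumptions made for the example.

```python
import random
from collections import Counter

MASK_TOKEN = "[MASK]"  # placeholder mask symbol; the real token depends on the model


def random_mask(text: str, mask_ratio: float, rng: random.Random) -> str:
    """Replace a random subset of words in `text` with MASK_TOKEN."""
    words = text.split()
    if not words:
        return text
    k = max(1, round(mask_ratio * len(words)))
    masked_idx = set(rng.sample(range(len(words)), k))
    return " ".join(MASK_TOKEN if i in masked_idx else w
                    for i, w in enumerate(words))


def smoothed_predict(text, prompts, classify_fn,
                     n_samples=100, mask_ratio=0.5, seed=0):
    """Majority-vote ("smoothed") prediction over randomly masked copies of
    `text`, aggregated across an ensemble of base prompts.

    `classify_fn(prompt, masked_text)` is assumed to return a class label;
    it stands in for a prompted LLM classifier.
    """
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        masked = random_mask(text, mask_ratio, rng)
        for prompt in prompts:
            votes[classify_fn(prompt, masked)] += 1
    label, count = votes.most_common(1)[0]
    # The returned vote share is the empirical quantity to which a
    # certification bound (e.g., a binomial lower confidence bound) would
    # be applied to obtain a certified radius.
    return label, count / (n_samples * len(prompts))
```

A higher `mask_ratio` makes it more likely that the attack tokens of a UTP are masked out, which is exactly the regime where, per the abstract, a carefully searched prompt and a prompt ensemble are needed to keep the smoothed accuracy from collapsing.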
doi_str_mv 10.48550/arxiv.2406.01873
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2406.01873
ispartof
issn
language eng
recordid cdi_arxiv_primary_2406_01873
source arXiv.org
subjects Computer Science - Computation and Language
Computer Science - Cryptography and Security
Computer Science - Learning
title CR-UTP: Certified Robustness against Universal Text Perturbations on Large Language Models
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-27T22%3A07%3A16IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=CR-UTP:%20Certified%20Robustness%20against%20Universal%20Text%20Perturbations%20on%20Large%20Language%20Models&rft.au=Lou,%20Qian&rft.date=2024-06-03&rft_id=info:doi/10.48550/arxiv.2406.01873&rft_dat=%3Carxiv_GOX%3E2406_01873%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true