Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajectory

Bibliographic Details

Main Authors: Gao, Sensen; Jia, Xiaojun; Ren, Xuhong; Tsang, Ivor; Guo, Qing
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Source: arXiv.org
Date: 2024-03-19
DOI: 10.48550/arxiv.2403.12445
Description

Vision-language pre-training (VLP) models exhibit remarkable capabilities in comprehending both images and text, yet they remain susceptible to multimodal adversarial examples (AEs). Strengthening attacks and uncovering vulnerabilities, especially common issues in VLP models (e.g., highly transferable AEs), can advance reliable and practical VLP models. A recent work (i.e., the Set-level Guidance Attack) indicates that augmenting image-text pairs to increase AE diversity along the optimization path significantly enhances the transferability of adversarial examples. However, this approach predominantly emphasizes diversity around the online adversarial examples (i.e., the AEs produced during optimization), leading to the risk of overfitting the victim model and limiting transferability. In this study, we posit that the diversity of adversarial examples toward both the clean input and the online AEs is pivotal for enhancing transferability across VLP models. Consequently, we propose diversification along the intersection region of the adversarial trajectory to expand the diversity of AEs. To fully leverage the interaction between modalities, we introduce text-guided adversarial example selection during optimization. Furthermore, to mitigate potential overfitting, we direct the adversarial text to deviate from the last intersection region along the optimization path, rather than the adversarial images as in existing methods. Extensive experiments affirm the effectiveness of our method in improving transferability across various VLP models and downstream vision-and-language tasks.
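The abstract describes the method only at a high level, but its core loop (sampling candidate AEs from the intersection region between the clean image and the current online AE, selecting among them with text guidance, then taking a gradient step) can be sketched concretely. Below is a minimal, hypothetical PyTorch illustration: the `encode_image` interface, the linear interpolation standing in for the intersection region, and all parameter names are assumptions, not the authors' implementation.

```python
import torch

def intersection_region_step(model, x_clean, x_adv, text_emb,
                             eps=8 / 255, alpha=2 / 255, n_samples=5):
    """One attack iteration: diversify along the segment between the clean
    image and the current adversarial image, select the most adversarial
    candidate using the paired text, then take a PGD-style step.
    Hypothetical sketch; `model` is assumed to expose `encode_image`."""

    def attack_loss(x):
        # Push the image embedding away from the matched text embedding.
        img_emb = model.encode_image(x)
        return -torch.cosine_similarity(img_emb, text_emb, dim=-1).mean()

    # Diversification: sample points on the segment joining the clean
    # input and the online AE (a stand-in for the intersection region).
    lams = torch.rand(n_samples, device=x_adv.device)
    candidates = [x_clean + lam * (x_adv - x_clean) for lam in lams]

    # Text-guided selection: keep the candidate with the highest loss.
    with torch.no_grad():
        best = max(candidates, key=lambda x: attack_loss(x).item())

    # Signed-gradient update from the selected candidate.
    best = best.clone().requires_grad_(True)
    attack_loss(best).backward()
    x_next = best + alpha * best.grad.sign()

    # Project back into the L-infinity ball around the clean image.
    x_next = torch.max(torch.min(x_next, x_clean + eps), x_clean - eps)
    return x_next.clamp(0.0, 1.0).detach()
```

This sketch covers only the image modality; per the abstract, the method additionally updates the adversarial text so that it deviates from the last intersection region along the optimization path, rather than steering the adversarial images as in prior attacks.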