Improving Adversarial Transferability of Vision-Language Pre-training Models through Collaborative Multimodal Interaction
Saved in:
Main Authors: | Fu, Jiyuan; Chen, Zhaoyu; Jiang, Kaixun; Guo, Haijing; Wang, Jiafeng; Gao, Shuyong; Zhang, Wenqiang |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Cryptography and Security; Computer Science - Multimedia |
Online Access: | Order full text |
creator | Fu, Jiyuan; Chen, Zhaoyu; Jiang, Kaixun; Guo, Haijing; Wang, Jiafeng; Gao, Shuyong; Zhang, Wenqiang |
description | Despite the substantial advancements in Vision-Language Pre-training (VLP)
models, their susceptibility to adversarial attacks poses a significant
challenge. Existing work rarely studies the transferability of attacks on VLP
models, resulting in a substantial performance gap from white-box attacks. We
observe that prior work overlooks the interaction mechanisms between
modalities, which play a crucial role in understanding the intricacies of VLP
models. In response, we propose a novel attack, called Collaborative Multimodal
Interaction Attack (CMI-Attack), which leverages modality interaction through
embedding guidance and interaction enhancement. Specifically, CMI-Attack
attacks text at the embedding level while preserving semantics, and uses
interaction image gradients to strengthen the constraints on perturbations of
texts and images. Significantly, in the image-text retrieval task on the
Flickr30K dataset, CMI-Attack raises the transfer success rates from ALBEF to
TCL, $\text{CLIP}_{\text{ViT}}$ and $\text{CLIP}_{\text{CNN}}$ by
8.11%-16.75% over state-of-the-art methods. Moreover, CMI-Attack also
demonstrates superior performance in cross-task generalization scenarios. Our
work addresses the underexplored realm of transfer attacks on VLP models,
shedding light on the importance of modality interaction for enhanced
adversarial robustness. |
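
The abstract describes the attack only at a high level. For orientation, below is a minimal, self-contained PGD-style sketch of the general idea behind adversarial attacks on image-text matching: perturb an image within an L-infinity budget so that its embedding drifts away from its paired caption's embedding. This is not the authors' CMI-Attack (which additionally attacks text at the embedding level and uses interaction gradients); the toy encoders, dimensions, and function names are illustrative assumptions standing in for a real VLP model such as CLIP or ALBEF.

```python
# Hypothetical sketch of a PGD-style attack on image-text similarity.
# The toy encoders below stand in for a real VLP model's encoders.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Assumed stand-ins: both encoders map into a shared 64-dim embedding space.
image_encoder = torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64)
)
text_encoder = torch.nn.Linear(128, 64)


def pgd_image_attack(image, text_feat, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD on the image so its embedding diverges from the caption embedding."""
    txt_emb = F.normalize(text_feat.detach(), dim=-1)  # fixed attack target
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        img_emb = F.normalize(image_encoder(adv), dim=-1)
        # Minimize cosine similarity of the matched image-text pair.
        loss = F.cosine_similarity(img_emb, txt_emb, dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            adv = adv - alpha * adv.grad.sign()           # step away from the text
            adv = image + (adv - image).clamp(-eps, eps)  # project into eps-ball
            adv = adv.clamp(0.0, 1.0)                     # keep a valid image
        adv = adv.detach()
    return adv


image = torch.rand(1, 3, 32, 32)               # dummy image in [0, 1]
text_feat = text_encoder(torch.rand(1, 128))   # dummy caption embedding
adv_image = pgd_image_attack(image, text_feat)
print(float((adv_image - image).abs().max()))  # perturbation stays within eps
```

The `sign()` step and the projection back into the eps-ball are standard PGD ingredients; per the abstract, CMI-Attack's contribution lies in how the loss couples the two modalities (embedding-level text attacks and interaction image gradients), which this sketch does not reproduce.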
doi_str_mv | 10.48550/arxiv.2403.10883 |
format | Article |
creationdate | 2024-03 |
startdate | 20240316 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
oa | free_for_read |
linktorsrc | https://arxiv.org/abs/2403.10883 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2403.10883 |
language | eng |
recordid | cdi_arxiv_primary_2403_10883 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Cryptography and Security; Computer Science - Multimedia |
title | Improving Adversarial Transferability of Vision-Language Pre-training Models through Collaborative Multimodal Interaction |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-25T00%3A22%3A52IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Improving%20Adversarial%20Transferability%20of%20Vision-Language%20Pre-training%20Models%20through%20Collaborative%20Multimodal%20Interaction&rft.au=Fu,%20Jiyuan&rft.date=2024-03-16&rft_id=info:doi/10.48550/arxiv.2403.10883&rft_dat=%3Carxiv_GOX%3E2403_10883%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |