Global and Local Semantic Completion Learning for Vision-Language Pre-training
Cross-modal alignment plays a crucial role in vision-language pre-training (VLP) models, enabling them to capture meaningful associations across different modalities. For this purpose, numerous masked modeling tasks have been proposed for VLP to further promote cross-modal interactions. The core ide...
Saved in:
Main authors: | Tu, Rong-Cheng; Ji, Yatai; Jiang, Jie; Kong, Weijie; Cai, Chengfei; Zhao, Wenzhe; Wang, Hongfa; Yang, Yujiu; Liu, Wei |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Tu, Rong-Cheng; Ji, Yatai; Jiang, Jie; Kong, Weijie; Cai, Chengfei; Zhao, Wenzhe; Wang, Hongfa; Yang, Yujiu; Liu, Wei |
description | Cross-modal alignment plays a crucial role in vision-language pre-training
(VLP) models, enabling them to capture meaningful associations across different
modalities. For this purpose, numerous masked modeling tasks have been proposed
for VLP to further promote cross-modal interactions. The core idea of previous
masked modeling tasks is to focus on reconstructing the masked tokens based on
visible context for learning local-local alignment. However, most of them pay
little attention to the global semantic features generated for the masked data,
resulting in a limited cross-modal alignment ability of global representations
to local features of the other modality. Therefore, in this paper, we propose a
novel Global and Local Semantic Completion Learning (GLSCL) task to facilitate
global-local alignment and local-local alignment simultaneously. Specifically,
the GLSCL task complements the missing semantics of masked data and recovers
global and local features by cross-modal interactions. Our GLSCL consists of
masked global semantic completion (MGSC) and masked local token completion
(MLTC). MGSC promotes learning more representative global features, which have
a great impact on the performance of downstream tasks, while MLTC reconstructs
modal-fusion local tokens, further enhancing accurate comprehension of
multimodal data. To evaluate the proposed approaches on cross-modal alignment,
we develop a validation benchmark called ALIGN-BENCH. Moreover, we present a
flexible vision encoder, enabling our model to simultaneously perform
image-text and video-text multimodal tasks. Experimental results show that our
proposed method obtains state-of-the-art performance on various vision-language
benchmarks, such as visual question answering, image-text retrieval, and
video-text retrieval. |
doi_str_mv | 10.48550/arxiv.2306.07096 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2306.07096 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2306_07096 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition |
title | Global and Local Semantic Completion Learning for Vision-Language Pre-training |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-25T14%3A34%3A11IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Global%20and%20Local%20Semantic%20Completion%20Learning%20for%20Vision-Language%20Pre-training&rft.au=Tu,%20Rong-Cheng&rft.date=2023-06-12&rft_id=info:doi/10.48550/arxiv.2306.07096&rft_dat=%3Carxiv_GOX%3E2306_07096%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |
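
The abstract above names two completion objectives, masked global semantic completion (MGSC) and masked local token completion (MLTC), but the record contains no formulas or implementation details. Below is a minimal, hypothetical PyTorch sketch of how such objectives could be written down. Everything in it is an assumption made for illustration: the function and argument names (`completion_losses`, `fusion_encoder`, `token_mask`), the use of the first token as the global feature, the cosine distance for the global term, and the mean-squared error for the local term are not taken from the paper.

```python
# Illustrative sketch only: this is NOT the authors' implementation.
# Assumed interface: `fusion_encoder` maps inputs to token features of shape
# (B, L, D), with index 0 acting as the global ([CLS]) token.
import torch
import torch.nn.functional as F


def completion_losses(fusion_encoder, masked_inputs, full_inputs, token_mask):
    """Sketch of MGSC- and MLTC-style losses on masked vs. complete views."""
    masked_feats = fusion_encoder(masked_inputs)   # features of the masked view
    with torch.no_grad():                          # complete view acts as the target
        target_feats = fusion_encoder(full_inputs)

    # Global completion: push the global feature of the masked view toward the
    # global feature of the complete view (cosine distance, assumed here).
    loss_mgsc = 1.0 - F.cosine_similarity(
        masked_feats[:, 0], target_feats[:, 0], dim=-1
    ).mean()

    # Local completion: recover the features of the masked tokens from
    # cross-modal context (mean-squared error, assumed here).
    loss_mltc = F.mse_loss(masked_feats[token_mask], target_feats[token_mask])
    return loss_mgsc, loss_mltc


if __name__ == "__main__":
    # Toy check with a random linear map standing in for the fusion encoder.
    B, L, D = 2, 8, 16
    dummy_encoder = torch.nn.Linear(D, D)
    x_masked, x_full = torch.randn(B, L, D), torch.randn(B, L, D)
    mask = torch.zeros(B, L, dtype=torch.bool)
    mask[:, 3:5] = True
    print(completion_losses(dummy_encoder, x_masked, x_full, mask))
```

In an actual pre-training loop these two terms would presumably be added, with some weighting, to the usual VLP objectives such as contrastive and image-text matching losses; the record does not specify how they are combined.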