Seeing What You Miss: Vision-Language Pre-training with Semantic Completion Learning

Cross-modal alignment is essential for vision-language pre-training (VLP) models to learn the correct correspondences across modalities. To this end, inspired by the success of masked language modeling (MLM) in NLP pre-training, numerous masked modeling tasks have been proposed for VLP to further promote cross-modal interaction. The core idea of previous masked modeling tasks is to reconstruct the masked tokens from the visible context, thereby learning local-to-local alignment. However, most of them pay little attention to the global semantic features generated for the masked data, which limits the cross-modal alignment of global representations. Therefore, this paper proposes a novel Semantic Completion Learning (SCL) task, complementary to existing masked modeling tasks, to facilitate global-to-local alignment. Specifically, the SCL task completes the missing semantics of masked data by capturing the corresponding information from the other modality, promoting the learning of more representative global features, which have a great impact on downstream performance. Moreover, we present a flexible vision encoder that enables the model to handle image-text and video-text multimodal tasks simultaneously. Experimental results show that the proposed method achieves state-of-the-art performance on various vision-language benchmarks, such as visual question answering, image-text retrieval, and video-text retrieval.

Saved in:
Bibliographic details
Main authors: Ji, Yatai; Tu, Rongcheng; Jiang, Jie; Kong, Weijie; Cai, Chengfei; Zhao, Wenzhe; Wang, Hongfa; Yang, Yujiu; Liu, Wei
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Ji, Yatai
Tu, Rongcheng
Jiang, Jie
Kong, Weijie
Cai, Chengfei
Zhao, Wenzhe
Wang, Hongfa
Yang, Yujiu
Liu, Wei
description Cross-modal alignment is essential for vision-language pre-training (VLP) models to learn the correct correspondences across modalities. To this end, inspired by the success of masked language modeling (MLM) in NLP pre-training, numerous masked modeling tasks have been proposed for VLP to further promote cross-modal interaction. The core idea of previous masked modeling tasks is to reconstruct the masked tokens from the visible context, thereby learning local-to-local alignment. However, most of them pay little attention to the global semantic features generated for the masked data, which limits the cross-modal alignment of global representations. Therefore, this paper proposes a novel Semantic Completion Learning (SCL) task, complementary to existing masked modeling tasks, to facilitate global-to-local alignment. Specifically, the SCL task completes the missing semantics of masked data by capturing the corresponding information from the other modality, promoting the learning of more representative global features, which have a great impact on downstream performance. Moreover, we present a flexible vision encoder that enables the model to handle image-text and video-text multimodal tasks simultaneously. Experimental results show that the proposed method achieves state-of-the-art performance on various vision-language benchmarks, such as visual question answering, image-text retrieval, and video-text retrieval.
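The abstract describes SCL only at a high level, without equations, so the following is a minimal, hypothetical sketch of how such a global-to-local completion objective could look, not the authors' implementation. It assumes the encoder exposes a global ([CLS]-style) feature both for the masked input (after cross-modal attention has let it borrow information from the other modality) and for the complete input; the function name, tensor shapes, stop-gradient, and cosine loss are all assumptions.

```python
import torch
import torch.nn.functional as F

def semantic_completion_loss(global_masked: torch.Tensor,
                             global_complete: torch.Tensor) -> torch.Tensor:
    """Hypothetical SCL-style objective.

    Unlike token-level masked modeling (local-to-local reconstruction),
    this pulls the global feature recovered from the masked input toward
    the global feature of the complete input (global-to-local completion).
    Both tensors are assumed to have shape (batch, dim).
    """
    z_masked = F.normalize(global_masked, dim=-1)
    # Stop-gradient: the complete input's feature serves as a fixed target.
    z_complete = F.normalize(global_complete, dim=-1).detach()
    # Mean cosine distance between recovered and complete global features.
    return (1.0 - (z_masked * z_complete).sum(dim=-1)).mean()

# Toy usage with random tensors standing in for encoder outputs.
cls_masked = torch.randn(8, 768, requires_grad=True)   # masked text enriched with visual context
cls_complete = torch.randn(8, 768)                     # unmasked input
loss = semantic_completion_loss(cls_masked, cls_complete)
loss.backward()
```

In the paper's actual setup the features would come from its text and vision encoders and this term would be trained jointly with the masked-modeling losses; the sketch above only isolates the completion idea.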
doi_str_mv 10.48550/arxiv.2211.13437
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2211.13437
language eng
recordid cdi_arxiv_primary_2211_13437
source arXiv.org
subjects Computer Science - Computation and Language
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Multimedia
title Seeing What You Miss: Vision-Language Pre-training with Semantic Completion Learning
url https://arxiv.org/abs/2211.13437