ImageBERT: Cross-modal Pre-training with Large-scale Weak-supervised Image-Text Data
In this paper, we introduce a new vision-language pre-trained model -- ImageBERT -- for image-text joint embedding. Our model is a Transformer-based model, which takes different modalities as input and models the relationship between them. The model is pre-trained on four tasks simultaneously: Maske...
Saved in:
Main authors: | Qi, Di ; Su, Lin ; Song, Jia ; Cui, Edward ; Bharti, Taroon ; Sacheti, Arun |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Qi, Di ; Su, Lin ; Song, Jia ; Cui, Edward ; Bharti, Taroon ; Sacheti, Arun |
description | In this paper, we introduce a new vision-language pre-trained model --
ImageBERT -- for image-text joint embedding. Our model is a Transformer-based
model, which takes different modalities as input and models the relationship
between them. The model is pre-trained on four tasks simultaneously: Masked
Language Modeling (MLM), Masked Object Classification (MOC), Masked Region
Feature Regression (MRFR), and Image Text Matching (ITM). To further enhance
the pre-training quality, we have collected a Large-scale weAk-supervised
Image-Text (LAIT) dataset from the Web. We first pre-train the model on this
dataset, then conduct a second stage pre-training on Conceptual Captions and
SBU Captions. Our experiments show that a multi-stage pre-training strategy
outperforms single-stage pre-training. We also fine-tune and evaluate our
pre-trained ImageBERT model on image retrieval and text retrieval tasks, and
achieve new state-of-the-art results on both MSCOCO and Flickr30k datasets. |
doi_str_mv | 10.48550/arxiv.2001.07966 |
format | Article |
fullrecord | <record><control><sourceid>arxiv_GOX</sourceid><recordid>TN_cdi_arxiv_primary_2001_07966</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><sourcerecordid>2001_07966</sourcerecordid><originalsourceid>FETCH-LOGICAL-a676-8c04e8702dcef3bfa6972ddcece31564fcf94ee58a65bf692e25f840733244ef3</originalsourceid><addsrcrecordid>eNotj71OwzAURr0woMIDMOEXcHAc_yRsEApUigRClhijW-c6WCRtZYdS3p42MH06w3ekQ8hVzjNZKsVvIB7CPhOc5xk3ldbnxK5G6PF--WZvaR23KbFx28FAXyOyKULYhE1Pv8P0QRuIPbLkYED6jvDJ0tcO4z4k7OgsYRYPE32ACS7ImYch4eX_Loh9XNr6mTUvT6v6rmGgjWal4xJLw0Xn0BdrD7oyojuCwyJXWnrnK4moStBq7XUlUChfSm6KQkh5vCzI9Z92zmp3MYwQf9pTXjvnFb8fYEqv</addsrcrecordid><sourcetype>Open Access Repository</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype></control><display><type>article</type><title>ImageBERT: Cross-modal Pre-training with Large-scale Weak-supervised Image-Text Data</title><source>arXiv.org</source><creator>Qi, Di ; Su, Lin ; Song, Jia ; Cui, Edward ; Bharti, Taroon ; Sacheti, Arun</creator><creatorcontrib>Qi, Di ; Su, Lin ; Song, Jia ; Cui, Edward ; Bharti, Taroon ; Sacheti, Arun</creatorcontrib><description>In this paper, we introduce a new vision-language pre-trained model --
ImageBERT -- for image-text joint embedding. Our model is a Transformer-based
model, which takes different modalities as input and models the relationship
between them. The model is pre-trained on four tasks simultaneously: Masked
Language Modeling (MLM), Masked Object Classification (MOC), Masked Region
Feature Regression (MRFR), and Image Text Matching (ITM). To further enhance
the pre-training quality, we have collected a Large-scale weAk-supervised
Image-Text (LAIT) dataset from Web. We first pre-train the model on this
dataset, then conduct a second stage pre-training on Conceptual Captions and
SBU Captions. Our experiments show that multi-stage pre-training strategy
outperforms single-stage pre-training. We also fine-tune and evaluate our
pre-trained ImageBERT model on image retrieval and text retrieval tasks, and
achieve new state-of-the-art results on both MSCOCO and Flickr30k datasets.</description><identifier>DOI: 10.48550/arxiv.2001.07966</identifier><language>eng</language><subject>Computer Science - Computer Vision and Pattern Recognition</subject><creationdate>2020-01</creationdate><rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0</rights><oa>free_for_read</oa><woscitedreferencessubscribed>false</woscitedreferencessubscribed></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><link.rule.ids>228,230,780,885</link.rule.ids><linktorsrc>$$Uhttps://arxiv.org/abs/2001.07966$$EView_record_in_Cornell_University$$FView_record_in_$$GCornell_University$$Hfree_for_read</linktorsrc><backlink>$$Uhttps://doi.org/10.48550/arXiv.2001.07966$$DView paper in arXiv$$Hfree_for_read</backlink></links><search><creatorcontrib>Qi, Di</creatorcontrib><creatorcontrib>Su, Lin</creatorcontrib><creatorcontrib>Song, Jia</creatorcontrib><creatorcontrib>Cui, Edward</creatorcontrib><creatorcontrib>Bharti, Taroon</creatorcontrib><creatorcontrib>Sacheti, Arun</creatorcontrib><title>ImageBERT: Cross-modal Pre-training with Large-scale Weak-supervised Image-Text Data</title><description>In this paper, we introduce a new vision-language pre-trained model --
ImageBERT -- for image-text joint embedding. Our model is a Transformer-based
model, which takes different modalities as input and models the relationship
between them. The model is pre-trained on four tasks simultaneously: Masked
Language Modeling (MLM), Masked Object Classification (MOC), Masked Region
Feature Regression (MRFR), and Image Text Matching (ITM). To further enhance
the pre-training quality, we have collected a Large-scale weAk-supervised
Image-Text (LAIT) dataset from Web. We first pre-train the model on this
dataset, then conduct a second stage pre-training on Conceptual Captions and
SBU Captions. Our experiments show that multi-stage pre-training strategy
outperforms single-stage pre-training. We also fine-tune and evaluate our
pre-trained ImageBERT model on image retrieval and text retrieval tasks, and
achieve new state-of-the-art results on both MSCOCO and Flickr30k datasets.</description><subject>Computer Science - Computer Vision and Pattern Recognition</subject><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2020</creationdate><recordtype>article</recordtype><sourceid>GOX</sourceid><recordid>eNotj71OwzAURr0woMIDMOEXcHAc_yRsEApUigRClhijW-c6WCRtZYdS3p42MH06w3ekQ8hVzjNZKsVvIB7CPhOc5xk3ldbnxK5G6PF--WZvaR23KbFx28FAXyOyKULYhE1Pv8P0QRuIPbLkYED6jvDJ0tcO4z4k7OgsYRYPE32ACS7ImYch4eX_Loh9XNr6mTUvT6v6rmGgjWal4xJLw0Xn0BdrD7oyojuCwyJXWnrnK4moStBq7XUlUChfSm6KQkh5vCzI9Z92zmp3MYwQf9pTXjvnFb8fYEqv</recordid><startdate>20200122</startdate><enddate>20200122</enddate><creator>Qi, Di</creator><creator>Su, Lin</creator><creator>Song, Jia</creator><creator>Cui, Edward</creator><creator>Bharti, Taroon</creator><creator>Sacheti, Arun</creator><scope>AKY</scope><scope>GOX</scope></search><sort><creationdate>20200122</creationdate><title>ImageBERT: Cross-modal Pre-training with Large-scale Weak-supervised Image-Text Data</title><author>Qi, Di ; Su, Lin ; Song, Jia ; Cui, Edward ; Bharti, Taroon ; Sacheti, Arun</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-a676-8c04e8702dcef3bfa6972ddcece31564fcf94ee58a65bf692e25f840733244ef3</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2020</creationdate><topic>Computer Science - Computer Vision and Pattern Recognition</topic><toplevel>online_resources</toplevel><creatorcontrib>Qi, Di</creatorcontrib><creatorcontrib>Su, Lin</creatorcontrib><creatorcontrib>Song, Jia</creatorcontrib><creatorcontrib>Cui, Edward</creatorcontrib><creatorcontrib>Bharti, Taroon</creatorcontrib><creatorcontrib>Sacheti, Arun</creatorcontrib><collection>arXiv Computer Science</collection><collection>arXiv.org</collection></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext_linktorsrc</fulltext></delivery><addata><au>Qi, Di</au><au>Su, Lin</au><au>Song, Jia</au><au>Cui, Edward</au><au>Bharti, Taroon</au><au>Sacheti, Arun</au><format>journal</format><genre>article</genre><ristype>JOUR</ristype><atitle>ImageBERT: Cross-modal Pre-training with Large-scale Weak-supervised Image-Text Data</atitle><date>2020-01-22</date><risdate>2020</risdate><abstract>In this paper, we introduce a new vision-language pre-trained model --
ImageBERT -- for image-text joint embedding. Our model is a Transformer-based
model, which takes different modalities as input and models the relationship
between them. The model is pre-trained on four tasks simultaneously: Masked
Language Modeling (MLM), Masked Object Classification (MOC), Masked Region
Feature Regression (MRFR), and Image Text Matching (ITM). To further enhance
the pre-training quality, we have collected a Large-scale weAk-supervised
Image-Text (LAIT) dataset from Web. We first pre-train the model on this
dataset, then conduct a second stage pre-training on Conceptual Captions and
SBU Captions. Our experiments show that multi-stage pre-training strategy
outperforms single-stage pre-training. We also fine-tune and evaluate our
pre-trained ImageBERT model on image retrieval and text retrieval tasks, and
achieve new state-of-the-art results on both MSCOCO and Flickr30k datasets.</abstract><doi>10.48550/arxiv.2001.07966</doi><oa>free_for_read</oa></addata></record> |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2001.07966 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2001_07966 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition |
title | ImageBERT: Cross-modal Pre-training with Large-scale Weak-supervised Image-Text Data |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-02T20%3A44%3A04IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=ImageBERT:%20Cross-modal%20Pre-training%20with%20Large-scale%20Weak-supervised%20Image-Text%20Data&rft.au=Qi,%20Di&rft.date=2020-01-22&rft_id=info:doi/10.48550/arxiv.2001.07966&rft_dat=%3Carxiv_GOX%3E2001_07966%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |
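The abstract describes joint optimization of four pre-training objectives: Masked Language Modeling (MLM), Masked Object Classification (MOC), Masked Region Feature Regression (MRFR), and Image-Text Matching (ITM). The sketch below is a minimal, hypothetical illustration of how such a multi-task loss could be combined on top of a Transformer backbone; the module names, dimensions, label keys, and equal task weighting are assumptions for illustration only and are not taken from the paper's implementation.

```python
# Hypothetical sketch of the four-task pre-training objective named in the abstract.
# Assumptions (not from the paper): head layouts, hidden/vocab/region dimensions,
# target-dict keys, and equal weighting of the four losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageBertPretrainingHeads(nn.Module):
    def __init__(self, hidden=768, vocab=30522, num_obj_classes=1601, region_dim=2048):
        super().__init__()
        self.mlm_head = nn.Linear(hidden, vocab)            # predict masked token ids
        self.moc_head = nn.Linear(hidden, num_obj_classes)  # classify masked object regions
        self.mrfr_head = nn.Linear(hidden, region_dim)      # regress masked region features
        self.itm_head = nn.Linear(hidden, 2)                 # image-text match vs. mismatch

    def forward(self, text_states, region_states, pooled, targets):
        ce = nn.CrossEntropyLoss(ignore_index=-100)
        # (B, T, V) -> (B, V, T) so CrossEntropyLoss can consume per-position labels.
        loss_mlm = ce(self.mlm_head(text_states).transpose(1, 2), targets["mlm_labels"])
        loss_moc = ce(self.moc_head(region_states).transpose(1, 2), targets["obj_labels"])
        loss_mrfr = F.mse_loss(self.mrfr_head(region_states), targets["region_feats"])
        loss_itm = ce(self.itm_head(pooled), targets["itm_labels"])
        # Equal weighting is an assumption; the paper states the tasks are trained jointly.
        return loss_mlm + loss_moc + loss_mrfr + loss_itm
```

Under the multi-stage strategy the abstract reports, this same joint objective would first be optimized on the LAIT dataset and then, in a second stage, on Conceptual Captions and SBU Captions before fine-tuning on the retrieval tasks.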