What BERT Sees: Cross-Modal Transfer for Visual Question Generation

Pre-trained language models have recently driven significant advances in NLP tasks. Multi-modal versions of BERT have since been developed, relying on heavy pre-training over vast corpora of aligned textual and image data, and have been applied primarily to classification tasks such as VQA. In this paper, we evaluate the visual capabilities of BERT out of the box, avoiding any pre-training on supplementary data. We study Visual Question Generation (VQG), a task of great interest for grounded dialog, since it allows measuring the impact of each modality (the input can be visual and/or textual). Moreover, the generative nature of the task requires an adaptation, since BERT is primarily designed as an encoder. We introduce BERT-gen, a BERT-based architecture for text generation able to leverage either mono- or multi-modal representations. Results reported under different configurations indicate an innate capacity of BERT-gen to adapt to multi-modal data and text generation, even with little data available, avoiding expensive pre-training. The proposed model obtains substantial improvements over the state of the art on two established VQG datasets.
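The abstract describes BERT-gen only at a high level. As a rough illustration, the sketch below shows one plausible way a BERT encoder could be repurposed for visually conditioned generation; it is not the authors' implementation. The causal-decoder setting, the linear projection of pre-extracted image-region features into BERT's embedding space, and every name and dimension in the code are assumptions made for the example.

    # Hedged sketch, NOT the paper's implementation: one plausible way to condition
    # BERT-based generation on visual features, in the spirit of how BERT-gen is described.
    # All module names, dimensions, and the decoding loop are illustrative assumptions.
    import torch
    import torch.nn as nn
    from transformers import BertTokenizer, BertLMHeadModel

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    # is_decoder=True switches BERT's self-attention to causal (left-to-right),
    # one common way to repurpose an encoder-only model for generation.
    model = BertLMHeadModel.from_pretrained("bert-base-uncased", is_decoder=True)

    # Hypothetical projection of pre-extracted image-region features (e.g. 2048-d
    # detector outputs) into BERT's hidden size, so visual "tokens" and textual
    # tokens can share a single input sequence.
    visual_proj = nn.Linear(2048, model.config.hidden_size)

    def generate(prefix: str, image_feats: torch.Tensor, max_len: int = 20) -> str:
        """Greedy decoding conditioned on image-region features and a textual prefix."""
        text_ids = tokenizer(prefix, add_special_tokens=False, return_tensors="pt").input_ids
        embed = model.bert.embeddings.word_embeddings
        inputs = torch.cat([visual_proj(image_feats).unsqueeze(0),   # (1, R, H)
                            embed(text_ids)], dim=1)                 # (1, R+T, H)
        out_ids = []
        for _ in range(max_len):
            logits = model(inputs_embeds=inputs).logits              # (1, L, vocab)
            next_id = logits[0, -1].argmax().view(1, 1)
            if next_id.item() == tokenizer.sep_token_id:
                break
            out_ids.append(next_id.item())
            inputs = torch.cat([inputs, embed(next_id)], dim=1)
        return tokenizer.decode(out_ids)

    # Example call with random features standing in for real detector outputs:
    # print(generate("what is", torch.randn(36, 2048)))

In such a setup, the projection layer and BERT itself would typically be fine-tuned directly on the VQG training data, which matches the abstract's claim that no additional multi-modal pre-training is required.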


Bibliographic details
Main authors: Scialom, Thomas; Bordes, Patrick; Dray, Paul-Alexis; Staiano, Jacopo; Gallinari, Patrick
Format: Article
Language: English
Subjects: Computer Science - Computation and Language; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning
Online access: Order full text
DOI: 10.48550/arxiv.2002.10832
Published: 2020-02-25
Source: arXiv.org