GEM: A General Evaluation Benchmark for Multimodal Tasks

In this paper, we present GEM, a General Evaluation benchmark for Multimodal tasks. Unlike existing benchmarks such as GLUE, SuperGLUE, XGLUE, and XTREME, which mainly target natural language tasks, GEM is a large-scale vision-language benchmark consisting of GEM-I for image-language tasks and GEM-V for video-language tasks. Compared with existing multimodal datasets such as MSCOCO and Flickr30K for image-language tasks, and YouCook2 and MSR-VTT for video-language tasks, GEM is not only the largest vision-language dataset covering both image-language and video-language tasks, but is also labeled in multiple languages. We also provide two baseline models for this benchmark. We will release the dataset, code, and baseline models, aiming to advance the development of multilingual multimodal research.
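
Since this record does not show how the released data is structured, the following is a minimal sketch of how multilingual GEM examples might be represented. It encodes only what the abstract states (GEM-I image-language and GEM-V video-language examples, each labeled in multiple languages); every name in it is a hypothetical illustration, not the released schema.

    # A sketch under the assumptions above; all field names are hypothetical.
    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class GemImageExample:
        """Hypothetical GEM-I (image-language) record."""
        image_id: str
        # Language code (e.g. "en", "de", "zh") mapped to the caption in
        # that language, reflecting the multilingual labeling.
        captions: Dict[str, str] = field(default_factory=dict)

    @dataclass
    class GemVideoExample:
        """Hypothetical GEM-V (video-language) record, same caveats."""
        video_id: str
        captions: Dict[str, str] = field(default_factory=dict)

    # Usage: one image example carrying captions in two languages.
    ex = GemImageExample(image_id="0001",
                         captions={"en": "a dog", "de": "ein Hund"})

Keying captions by language code keeps one record per image or video rather than one per (item, language) pair, which matches the abstract's framing of a single dataset labeled in multiple languages.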

Bibliographic details
Main authors: Su, Lin; Duan, Nan; Cui, Edward; Ji, Lei; Wu, Chenfei; Luo, Huaishao; Liu, Yongfei; Zhong, Ming; Bharti, Taroon; Sacheti, Arun
Format: Article
Language: English
Published: 2021-06-17
DOI: 10.48550/arxiv.2106.09889
Source: arXiv.org
Subjects: Computer Science - Computation and Language; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Multimedia
Online access: https://arxiv.org/abs/2106.09889