VQA: Visual Question Answering: www.visualqa.org

We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and more complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ∼0.25M images, ∼0.76M questions, and ∼10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa).
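The claim that VQA is amenable to automatic evaluation rests on a consensus metric: each question in the dataset comes with ten human answers, and a predicted answer earns full credit when at least three annotators gave it, with proportional partial credit otherwise. A minimal sketch of that accuracy computation in Python (function and variable names are illustrative, not taken from the released evaluation code, which additionally normalizes answer strings and averages over subsets of annotators):

def vqa_accuracy(predicted: str, human_answers: list) -> float:
    # min(#humans who gave this answer / 3, 1): agreement with three
    # annotators is enough for full credit.
    matches = sum(1 for a in human_answers
                  if a.strip().lower() == predicted.strip().lower())
    return min(matches / 3.0, 1.0)

# Example: of ten annotators, two said "red" and eight said "brown".
answers = ["red", "red"] + ["brown"] * 8
print(vqa_accuracy("red", answers))    # ~0.67 (two matches)
print(vqa_accuracy("brown", answers))  # 1.0 (capped at full credit)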

Bibliographic Details
Published in: International journal of computer vision, 2017-05, Vol. 123 (1), p. 4-31
Main authors: Agrawal, Aishwarya; Lu, Jiasen; Antol, Stanislaw; Mitchell, Margaret; Zitnick, C. Lawrence; Parikh, Devi; Batra, Dhruv
Format: Article
Language: English
Subjects: Algorithms; Artificial Intelligence; Computer Imaging; Computer Science; Computer vision; Datasets; Human performance; Image Processing and Computer Vision; Image processing systems; Information dissemination; Language; Natural language; Natural language (computers); Natural language processing; Pattern Recognition; Pattern Recognition and Graphics; Pizza; Questioning; Studies; Tasks; Texts; Vision; Vision systems
Online access: Full text
DOI: 10.1007/s11263-016-0966-6
ISSN: 0920-5691
eISSN: 1573-1405
Source: SpringerLink Journals