An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA

Bibliographic Details
Main authors: Yang, Zhengyuan; Gan, Zhe; Wang, Jianfeng; Hu, Xiaowei; Lu, Yumao; Liu, Zicheng; Wang, Lijuan
Format: Article
Language: eng
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online access: Order full text
creator Yang, Zhengyuan; Gan, Zhe; Wang, Jianfeng; Hu, Xiaowei; Lu, Yumao; Liu, Zicheng; Wang, Lijuan
description Knowledge-based visual question answering (VQA) involves answering questions that require external knowledge not present in the image. Existing methods first retrieve knowledge from external resources, then reason over the selected knowledge, the input image, and question for answer prediction. However, this two-step approach could lead to mismatches that potentially limit the VQA performance. For example, the retrieved knowledge might be noisy and irrelevant to the question, and the re-embedded knowledge features during reasoning might deviate from their original meanings in the knowledge base (KB). To address this challenge, we propose PICa, a simple yet effective method that Prompts GPT3 via the use of Image Captions, for knowledge-based VQA. Inspired by GPT-3's power in knowledge retrieval and question answering, instead of using structured KBs as in previous work, we treat GPT-3 as an implicit and unstructured KB that can jointly acquire and process relevant knowledge. Specifically, we first convert the image into captions (or tags) that GPT-3 can understand, then adapt GPT-3 to solve the VQA task in a few-shot manner by just providing a few in-context VQA examples. We further boost performance by carefully investigating: (i) what text formats best describe the image content, and (ii) how in-context examples can be better selected and used. PICa unlocks the first use of GPT-3 for multimodal tasks. By using only 16 examples, PICa surpasses the supervised state of the art by an absolute +8.6 points on the OK-VQA dataset. We also benchmark PICa on VQAv2, where PICa also shows a decent few-shot performance.
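
The abstract above describes the PICa pipeline only at a high level: convert the image into a caption (or tags), prepend a handful of in-context VQA examples, and let GPT-3 complete the answer. The sketch below is a minimal illustration of what such a prompt could look like; the prompt template, the example data, and the query_gpt3 placeholder are hypothetical assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch of a PICa-style prompt, assuming a simple
# "Context / Q / A" layout; wording and helpers are illustrative only.

def build_prompt(in_context_examples, query_caption, query_question):
    """Format n (caption, question, answer) triples followed by the query
    image's caption and question, leaving the answer for GPT-3 to complete."""
    header = "Please answer the question according to the context.\n\n"
    blocks = []
    for caption, question, answer in in_context_examples:
        blocks.append(f"Context: {caption}\nQ: {question}\nA: {answer}\n")
    # The final block ends with "A:" so the model continues with the answer.
    blocks.append(f"Context: {query_caption}\nQ: {query_question}\nA:")
    return header + "\n".join(blocks)


def query_gpt3(prompt):
    # Placeholder: a real run would send the prompt to a GPT-3 completion
    # endpoint (e.g. via the OpenAI API) with a small max-token budget.
    raise NotImplementedError


if __name__ == "__main__":
    examples = [
        ("A man riding a wave on a surfboard.", "What sport is this?", "surfing"),
        ("A red double-decker bus on a city street.", "Where might this be?", "london"),
    ]
    prompt = build_prompt(
        examples,
        "A plate of sushi rolls next to chopsticks.",
        "What country does this food come from?",
    )
    print(prompt)  # in a real run, this string would be passed to GPT-3
```

In this reading, the in-context examples would be drawn from the training split (the abstract reports using as few as 16), and which examples to select and how to phrase the image text are exactly the design questions the authors say they investigate.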
doi_str_mv 10.48550/arxiv.2109.05014
format Article
creationdate 2021-09-10
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
oa free_for_read
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2109.05014
language eng
recordid cdi_arxiv_primary_2109_05014
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-08T00%3A42%3A41IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=An%20Empirical%20Study%20of%20GPT-3%20for%20Few-Shot%20Knowledge-Based%20VQA&rft.au=Yang,%20Zhengyuan&rft.date=2021-09-10&rft_id=info:doi/10.48550/arxiv.2109.05014&rft_dat=%3Carxiv_GOX%3E2109_05014%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true