Generalizing Visual Question Answering from Synthetic to Human-Written Questions via a Chain of QA with a Large Language Model
| Field | Value |
|---|---|
| Main authors | |
| Format | Article |
| Language | English |
| Subjects | |
| Online access | Order full text |
Abstract: Visual question answering (VQA) is a task where an image is given and a series of questions are asked about the image. Building an effective VQA algorithm requires a large amount of QA data, which is very expensive to collect. Generating synthetic QA pairs from templates is a practical way to obtain data; however, VQA models trained on such data do not perform well on complex, human-written questions. To address this issue, we propose a new method called *chain of QA for human-written questions* (CoQAH). CoQAH uses a sequence of QA interactions between a large language model and a VQA model trained on synthetic data to reason about and derive logical answers to human-written questions. We tested the effectiveness of CoQAH on two types of human-written VQA datasets, covering 3D-rendered and chest X-ray images, and found that it achieved state-of-the-art accuracy on both. Notably, CoQAH outperformed general vision-language models, VQA models, and medical foundation models without any fine-tuning.
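
The abstract describes the chain-of-QA mechanism only at a high level. The following is a minimal, illustrative sketch of what such a loop could look like; the class, method names (`generate`, `answer`), and prompt format are assumptions for illustration, not the authors' implementation.

```python
# Illustrative chain-of-QA loop in the spirit of CoQAH (sketch only; the LLM and
# VQA interfaces below are hypothetical, not taken from the paper).
from dataclasses import dataclass


@dataclass
class ChainOfQA:
    llm: object          # assumed LLM client exposing .generate(prompt: str) -> str
    vqa_model: object    # assumed VQA model exposing .answer(image, question: str) -> str
    max_turns: int = 10

    def answer(self, image, human_question: str) -> str:
        """Answer a complex human-written question by letting the LLM ask the
        synthetic-data VQA model a series of simple sub-questions, then reason
        over the accumulated QA history."""
        history: list[tuple[str, str]] = []
        for _ in range(self.max_turns):
            llm_output = self.llm.generate(self._build_prompt(human_question, history))
            if llm_output.startswith("FINAL:"):
                # The LLM has gathered enough evidence and commits to an answer.
                return llm_output.removeprefix("FINAL:").strip()
            # Otherwise treat the output as the next simple sub-question.
            sub_question = llm_output.strip()
            sub_answer = self.vqa_model.answer(image, sub_question)
            history.append((sub_question, sub_answer))
        # Turn budget exhausted: force a final answer from the collected evidence.
        return self.llm.generate(self._build_prompt(human_question, history, force_final=True))

    def _build_prompt(self, question: str, history, force_final: bool = False) -> str:
        lines = [f"Target question: {question}", "Sub-question/answer history:"]
        lines += [f"Q: {q}\nA: {a}" for q, a in history]
        if force_final:
            lines.append("Give the final answer, prefixed with 'FINAL:'.")
        else:
            lines.append("Either ask one more simple sub-question, or reply 'FINAL: <answer>'.")
        return "\n".join(lines)
```
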
DOI: 10.48550/arxiv.2401.06400