All You May Need for VQA are Image Captions
Format: Article
Language: English
Abstract: Visual Question Answering (VQA) has benefited from increasingly sophisticated models, but has not enjoyed the same level of engagement in terms of data creation. In this paper, we propose a method that automatically derives VQA examples at volume, by leveraging the abundance of existing image-caption annotations combined with neural models for textual question generation. We show that the resulting data is of high quality. VQA models trained on our data improve state-of-the-art zero-shot accuracy by double digits and achieve a level of robustness lacking in the same model trained on human-annotated VQA data.
DOI: 10.48550/arxiv.2205.01883
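The abstract describes a pipeline that converts existing image-caption annotations into VQA training examples via a neural question-generation model. The following is a minimal sketch of that idea, assuming a Hugging Face text2text question-generation checkpoint; the model name, prompt template, and helper function below are illustrative placeholders, not the components used in the paper.

```python
from transformers import pipeline

# Placeholder checkpoint name -- substitute any T5-style question-generation
# model; the paper's own generator is not identified by this name.
qg = pipeline("text2text-generation", model="your-org/t5-question-generation")

def caption_to_vqa_examples(image_id, caption, candidate_answers):
    """Turn one captioned image into (image, question, answer) triples by
    conditioning a question generator on the caption and a candidate answer."""
    examples = []
    for answer in candidate_answers:
        # Prompt template is an assumption; each QG checkpoint defines its own.
        prompt = f"answer: {answer} context: {caption}"
        question = qg(prompt, max_new_tokens=32)[0]["generated_text"]
        examples.append({"image_id": image_id,
                         "question": question,
                         "answer": answer})
    return examples

# Usage with a hand-picked candidate answer span from the caption:
print(caption_to_vqa_examples(
    "0001", "A brown dog catches a frisbee in the park.", ["frisbee"]))
```

In this sketch the candidate answers are supplied by hand; a full pipeline would extract them automatically (e.g., from noun phrases in the caption) before generating a question for each, which is what lets the method scale to the volume of examples the abstract claims.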