MaXM: Towards Multilingual Visual Question Answering
Format: Article
Language: English
Abstract: Visual Question Answering (VQA) has been primarily studied through the lens of the English language. Yet, tackling VQA in other languages in the same manner would require a considerable amount of resources. In this paper, we propose scalable solutions to multilingual visual question answering (mVQA), on both the data and modeling fronts. We first propose a translation-based framework for mVQA data generation that requires much less human annotation effort than the conventional approach of directly collecting questions and answers. Then, we apply our framework to the multilingual captions in the Crossmodal-3600 dataset and develop an efficient annotation protocol to create MaXM, a test-only VQA benchmark in 7 diverse languages. Finally, we develop a simple, lightweight, and effective approach and benchmark state-of-the-art English and multilingual VQA models. We hope that our benchmark encourages further research on mVQA.
DOI: 10.48550/arxiv.2209.05401