CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark
| Main authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Summary: Visual Question Answering (VQA) is an important task in multimodal AI, often used to test the ability of vision-language models to understand and reason over knowledge present in both visual and textual data. However, most current VQA models use datasets that focus primarily on English and a few major world languages, with images that are typically Western-centric. While recent efforts have tried to increase the number of languages covered in VQA datasets, they still lack diversity in low-resource languages. More importantly, although these datasets often extend their linguistic range via translation or other approaches, they usually keep the images the same, resulting in narrow cultural representation. To address these limitations, we construct CVQA, a new Culturally-diverse multilingual Visual Question Answering benchmark designed to cover a rich set of languages and cultures, engaging native speakers and cultural experts in the data collection process. As a result, CVQA includes culturally-driven images and questions from 30 countries on four continents, covering 31 languages with 13 scripts and providing a total of 10k questions. We then benchmark several Multimodal Large Language Models (MLLMs) on CVQA and show that the dataset is challenging for current state-of-the-art models. This benchmark can serve as a probing evaluation suite for assessing the cultural capability and bias of multimodal models, and it will hopefully encourage more research toward increasing cultural awareness and linguistic diversity in this field.
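The benchmarking setup described in the summary lends itself to a simple accuracy loop. Below is a minimal sketch of how one might score a model on a multiple-choice VQA benchmark like CVQA, broken down per language. The item schema (`language`, `image`, `question`, `options`, `answer_idx`), the toy items, and the trivial `predict` baseline are assumptions for illustration, not the authors' released evaluation code.

```python
from collections import defaultdict


def predict(image_path: str, question: str, options: list[str]) -> int:
    """Stand-in for an MLLM call: always picks the first option.

    Replace this trivial baseline with a real model query that returns
    the index of the chosen answer option.
    """
    return 0


def per_language_accuracy(items: list[dict]) -> dict[str, float]:
    """Compute accuracy separately for each language in the benchmark."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for item in items:
        lang = item["language"]
        total[lang] += 1
        pred = predict(item["image"], item["question"], item["options"])
        if pred == item["answer_idx"]:
            correct[lang] += 1
    return {lang: correct[lang] / total[lang] for lang in total}


# Tiny usage example with made-up items (not real CVQA data).
items = [
    {"language": "ind", "image": "satay.jpg",
     "question": "Hidangan apa yang ditampilkan pada gambar?",
     "options": ["sate", "bakso", "rendang", "gado-gado"], "answer_idx": 0},
    {"language": "kor", "image": "palace.jpg",
     "question": "사진 속 건물은 무엇입니까?",
     "options": ["경복궁", "남산타워", "숭례문", "불국사"], "answer_idx": 0},
]
print(per_language_accuracy(items))  # {'ind': 1.0, 'kor': 1.0}
```

Reporting per-language scores, rather than a single pooled number, is what makes a culturally diverse benchmark useful as a probing suite: it exposes which languages and cultures a model handles poorly.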
DOI: 10.48550/arxiv.2406.05967