HaloQuest: A Visual Hallucination Dataset for Advancing Multimodal Reasoning
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Hallucination has been a major problem for large language models and remains a critical challenge in multimodal settings, where vision-language models (VLMs) must deal with not just textual but also visual inputs. Despite rapid progress in VLMs, resources for evaluating and addressing multimodal hallucination are limited and mostly focused on evaluation. This work introduces HaloQuest, a novel visual question answering dataset that captures various aspects of multimodal hallucination, such as false premises, insufficient contexts, and visual challenges. A novel idea of HaloQuest is to leverage synthetic images, in addition to real ones, to enable dataset creation at scale. With over 7.7K examples spanning a wide variety of categories, HaloQuest was designed to be both a challenging benchmark for VLMs and a fine-tuning dataset for advancing multimodal reasoning. Our experiments reveal that current models struggle with HaloQuest, with all open-source VLMs achieving below 36% accuracy. On the other hand, fine-tuning on HaloQuest significantly reduces hallucination rates while preserving performance on standard reasoning tasks. Our results show that benchmarking with generated images is highly correlated (r=0.97) with benchmarking on real images. Last but not least, we propose a novel Auto-Eval mechanism that is highly correlated with human raters (r=0.99) for evaluating VLMs. In sum, this work makes concrete strides towards understanding, evaluating, and mitigating hallucination in VLMs, serving as an important step towards more reliable multimodal AI systems in the future.
DOI: 10.48550/arxiv.2407.15680