Unveiling Cross Modality Bias in Visual Question Answering: A Causal View with Possible Worlds VQA
Main Authors:
Format: Article
Language: eng
Subjects:
Online Access: Order full text
Summary: To increase the generalization capability of VQA systems, many recent studies have tried to de-bias spurious language or vision associations that shortcut the question or image to the answer. Despite these efforts, the literature fails to address the confounding effect of vision and language simultaneously: when existing methods reduce the bias learned from one modality, they usually increase the bias from the other. In this paper, we first model a confounding effect that causes language and vision bias simultaneously, then propose a counterfactual inference to remove the influence of this effect. A model trained with this strategy can concurrently and efficiently reduce both vision and language bias. To the best of our knowledge, this is the first work to reduce biases resulting from the confounding effect of vision and language in VQA by leveraging causal explain-away relations. We accompany our method with an explain-away strategy that improves accuracy on questions with numerical answers, which has remained an open problem for existing methods. The proposed method outperforms state-of-the-art methods on the VQA-CP v2 dataset.
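
As a reading aid, here is a minimal sketch of what counterfactual inference for two-modality debiasing can look like, using the total-effect/direct-effect decomposition common in causal VQA work (e.g., CF-VQA). The notation Y(v, q) for answer scores and the reference inputs v*, q* are illustrative assumptions, not necessarily this paper's exact formulation:

TE = Y(v, q) - Y(v*, q*)  (total effect of image v and question q on the answer)
NDE_q = Y(v*, q) - Y(v*, q*)  (language-only shortcut effect)
NDE_v = Y(v, q*) - Y(v*, q*)  (vision-only shortcut effect)
TIE = TE - NDE_q - NDE_v  (debiased scores used at inference)

Here v* and q* denote counterfactual "no-information" references (e.g., a blanked image or a masked question), so subtracting both natural direct effects removes single-modality shortcuts from the answer prediction rather than only one of them.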
DOI: 10.48550/arxiv.2305.19664