Distraction-free Embeddings for Robust VQA
Format: Article
Language: English
Abstract: The generation of effective latent representations and their subsequent refinement to incorporate precise information is an essential prerequisite for Vision-Language Understanding (VLU) tasks such as Video Question Answering (VQA). However, most existing VLU methods focus on sparsely sampling or fine-graining the input information (e.g., sampling a sparse set of frames or text tokens), or on adding external knowledge. We present DRAX: Distraction Removal and Attended Cross-Alignment, a novel method that rids cross-modal representations of distractors in the latent space. Rather than restricting which input information from the various modalities is perceived, we use an attention-guided distraction removal method to increase the focus on task-relevant information in the latent embeddings. DRAX also ensures semantic alignment of embeddings during cross-modal fusion. We evaluate our approach on a challenging benchmark (the SUTD-TrafficQA dataset), testing the framework's abilities in feature and event queries, temporal relation understanding, forecasting, hypothesis, and causal analysis through extensive experiments.
DOI: 10.48550/arxiv.2309.00133
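The abstract describes attention-guided distraction removal only at a high level. The snippet below is a minimal sketch of one plausible reading, in which latent tokens that receive little attention from the question embedding are masked out as distractors; it is not the authors' implementation, and the function name `remove_distractions`, the top-k `keep_ratio` heuristic, and all tensor shapes are hypothetical assumptions.

```python
import torch
import torch.nn.functional as F

def remove_distractions(embeddings, query, keep_ratio=0.5):
    """Suppress low-attention ("distractor") tokens in a latent sequence.

    embeddings: (batch, num_tokens, dim) latent tokens from any modality.
    query:      (batch, dim) task/question embedding guiding the attention.
    keep_ratio: fraction of tokens kept as task-relevant (hypothetical knob).
    """
    # Scaled dot-product attention of each latent token against the task query.
    scores = torch.einsum("bd,btd->bt", query, embeddings) / embeddings.shape[-1] ** 0.5
    weights = F.softmax(scores, dim=-1)                    # (batch, num_tokens)

    # Keep the top-k most attended tokens; zero out the rest as distractors.
    k = max(1, int(keep_ratio * embeddings.shape[1]))
    topk = weights.topk(k, dim=-1).indices                 # (batch, k)
    mask = torch.zeros_like(weights)
    mask.scatter_(1, topk, 1.0)                            # 1 = task-relevant

    return embeddings * mask.unsqueeze(-1)                 # masked latents

# Toy usage: 8 video-frame embeddings filtered by a question embedding.
frames = torch.randn(2, 8, 256)
question = torch.randn(2, 256)
focused = remove_distractions(frames, question, keep_ratio=0.25)
```

Hard top-k masking is only one possible choice here; softly reweighting the tokens by their attention weights would be a smoother variant of the same idea.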