Precision Empowers, Excess Distracts: Visual Question Answering With Dynamically Infused Knowledge In Language Models
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: In the realm of multimodal tasks, Visual Question Answering (VQA) plays a crucial role by addressing natural language questions grounded in visual content. Knowledge-Based Visual Question Answering (KBVQA) advances this concept by adding external knowledge along with images to respond to questions. We introduce an approach for KBVQA, augmenting the existing vision-language transformer encoder-decoder (OFA) model. Our main contribution involves enhancing questions by incorporating relevant external knowledge extracted from knowledge graphs, using a dynamic triple extraction method. We supply a flexible number of triples from the knowledge graph as context, tailored to meet the requirements for answering each question. Our knowledge-enriched model demonstrates an average improvement of 4.75% in Exact Match Score over the state of the art on three different KBVQA datasets. Through experiments and analysis, we show that furnishing a variable number of triples for each question improves the reasoning capabilities of the language model compared with supplying a fixed number of triples; this holds even for recent large language models. Additionally, we highlight the model's generalization capability by showcasing its SOTA-beating performance on a small dataset, achieved through straightforward fine-tuning.
DOI: 10.48550/arxiv.2406.09994
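
The abstract describes the core mechanism at a high level: each question is enriched with a variable, question-dependent number of knowledge-graph triples before being passed to the OFA encoder-decoder. The following Python sketch only illustrates that idea; the relevance scoring, threshold, prompt format, and all function names are illustrative assumptions, not the authors' actual dynamic triple extraction method.

```python
# Minimal sketch (assumed, not the authors' code): select a variable number of
# knowledge-graph triples per question and prepend them to the question text
# before it is fed to a vision-language encoder-decoder such as OFA.

from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def score_triple(question: str, triple: Triple) -> float:
    """Toy relevance score: fraction of triple tokens that also appear in the question."""
    q_tokens = set(question.lower().split())
    t_tokens = [tok for part in triple for tok in part.lower().split()]
    return sum(tok in q_tokens for tok in t_tokens) / len(t_tokens) if t_tokens else 0.0

def select_triples(question: str, candidates: List[Triple],
                   threshold: float = 0.3, max_triples: int = 10) -> List[Triple]:
    """Keep only triples above a relevance threshold, so the number of triples
    varies per question instead of always returning a fixed top-k."""
    scored = sorted(((score_triple(question, t), t) for t in candidates),
                    key=lambda st: st[0], reverse=True)
    return [t for s, t in scored if s >= threshold][:max_triples]

def build_knowledge_augmented_question(question: str, candidates: List[Triple]) -> str:
    """Serialize the selected triples and prepend them as context to the question."""
    triples = select_triples(question, candidates)
    context = " ".join(f"{s} {r} {o}." for s, r, o in triples)
    return f"context: {context} question: {question}" if context else f"question: {question}"

if __name__ == "__main__":
    kg = [
        ("Eiffel Tower", "located in", "Paris"),
        ("Paris", "capital of", "France"),
        ("Statue of Liberty", "located in", "New York"),
    ]
    print(build_knowledge_augmented_question(
        "In which country is the tower shown in the image located?", kg))
```

The contrast with fixed top-k retrieval is the point suggested by the title: thresholding keeps only triples relevant to the specific question, so simple questions are not padded with distracting, irrelevant context.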