Syntax Tree Constrained Graph Network for Visual Question Answering
Format: Article
Language: English
Abstract: Visual Question Answering (VQA) aims to automatically answer natural language questions about given image content. Existing VQA methods integrate vision modeling and language understanding to explore the deep semantics of the question. However, these methods ignore the question's syntactic structure, which plays a vital role in understanding its essential semantics and in guiding visual feature refinement. To fill this gap, we propose a novel Syntax Tree Constrained Graph Network (STCGN) for VQA based on entity message passing and syntax trees. The model extracts a syntax tree from each question to obtain more precise syntactic information. Specifically, we parse each question into a syntax tree using the Stanford parser, and a hierarchical tree convolutional network then extracts syntactic phrase features and question features at the word and phrase levels. We further design a message-passing mechanism for phrase-aware visual entities that captures entity features conditioned on the given visual context. Extensive experiments on the VQA 2.0 dataset demonstrate the superiority of the proposed model.
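
The abstract names the Stanford syntax parsing tool but does not specify an interface. As an illustrative sketch only, a question's constituency (syntax) tree can be obtained with Stanford's Stanza library; the example question is made up:

```python
# Illustrative sketch: parse a question into a constituency tree with
# Stanford's Stanza library (one possible "Stanford syntax parsing tool").
import stanza

stanza.download('en')  # one-time model download
nlp = stanza.Pipeline(lang='en', processors='tokenize,pos,constituency')

doc = nlp('What color is the umbrella the woman is holding?')
for sentence in doc.sentences:
    # Prints a bracketed parse, e.g.
    # (ROOT (SBARQ (WHNP (WDT What) (NN color)) (SQ ...)))
    print(sentence.constituency)
```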
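
The abstract does not give the message-passing equations, so the following is a minimal sketch, not the authors' implementation, of one round of phrase-conditioned message passing over visual entity features. All module names, shapes, and the GRU-style update are assumptions made for illustration:

```python
# Minimal sketch (assumed design, not the STCGN code): one round of
# phrase-guided message passing over N visual entity vectors of size d.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PhraseGuidedMessagePassing(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.msg = nn.Linear(2 * d, d)   # message from neighbor j to entity i
        self.gate = nn.Linear(2 * d, 1)  # phrase-conditioned attention score
        self.update = nn.GRUCell(d, d)   # entity state update

    def forward(self, entities, phrase):
        # entities: (N, d) visual entity features; phrase: (d,) phrase feature
        N, d = entities.shape
        src = entities.unsqueeze(0).expand(N, N, d)   # src[i, j] = entity j
        dst = entities.unsqueeze(1).expand(N, N, d)   # dst[i, j] = entity i
        phr = phrase.expand(N, N, d)                  # broadcast phrase
        # Attention over neighbors j for each receiver i, conditioned on
        # how well the neighbor matches the syntactic phrase.
        scores = self.gate(torch.cat([src * phr, dst], dim=-1)).squeeze(-1)
        attn = F.softmax(scores, dim=1)               # (N, N)
        messages = self.msg(torch.cat([src, dst], dim=-1))   # (N, N, d)
        agg = (attn.unsqueeze(-1) * messages).sum(dim=1)     # (N, d)
        return self.update(agg, entities)             # refined entity states

# Usage with assumed sizes: 36 detected regions, 512-d features.
layer = PhraseGuidedMessagePassing(d=512)
refined = layer(torch.randn(36, 512), torch.randn(512))
```

The GRU-style update keeps refined entities in the same feature space as the inputs, so several rounds of this kind of passing could be stacked; whether STCGN does so is not stated in the abstract.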
DOI: 10.48550/arxiv.2309.09179