Achieving Human Parity on Visual Question Answering

Bibliographic details
Published in: ACM Transactions on Information Systems, April 2023, Vol. 41 (3), pp. 1-40, Article 79
Authors: Yan, Ming, Xu, Haiyang, Li, Chenliang, Tian, Junfeng, Bi, Bin, Wang, Wei, Xu, Xianzhe, Zhang, Ji, Huang, Songfang, Huang, Fei, Si, Luo, Jin, Rong
Format: Article
Language: English
Online access: Full text
Description
Abstract: The Visual Question Answering (VQA) task uses both visual image and language analysis to answer a textual question about an image. It has been a popular research topic with a growing number of real-world applications over the last decade. This paper introduces AliceMind-MMU (ALIbaba’s Collection of Encoder-decoders from Machine IntelligeNce lab of Damo academy - MultiMedia Understanding), a novel hierarchical integration of vision and language that achieves VQA results similar to, or even slightly better than, human performance. A hierarchical framework is designed to tackle the practical problems of VQA in a cascade manner, including: (1) diverse visual semantics learning for comprehensive image content understanding; (2) enhanced multi-modal pre-training with modality adaptive attention; and (3) knowledge-guided model integration with three specialized expert modules for the complex VQA task. Treating different types of visual questions with the corresponding expertise plays an important role in boosting the performance of the VQA architecture up to the human level. An extensive set of experiments and analyses demonstrates the effectiveness of the new research work.
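The record does not describe how the paper's "modality adaptive attention" actually works, so the following is only a minimal, hypothetical PyTorch sketch of one common way such a mechanism can be realized: a self-attention layer that routes each token through key/value projections selected by its modality (text vs. image), so visual and textual tokens are handled with modality-specific parameters before being attended over jointly. The class name ModalityAdaptiveAttention, the modality_ids argument, and all dimensions below are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityAdaptiveAttention(nn.Module):
    """Multi-head self-attention with per-modality key/value projections (hypothetical)."""

    def __init__(self, dim: int, num_heads: int, num_modalities: int = 2):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.q_proj = nn.Linear(dim, dim)  # shared query projection
        self.k_proj = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_modalities)])
        self.v_proj = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_modalities)])
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, modality_ids: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim); modality_ids: (batch, seq), 0 = text token, 1 = visual feature.
        b, s, d = x.shape
        q = self.q_proj(x)
        k = torch.zeros_like(x)
        v = torch.zeros_like(x)
        # Route every token through the key/value projection of its own modality.
        for m, (kp, vp) in enumerate(zip(self.k_proj, self.v_proj)):
            is_m = (modality_ids == m).unsqueeze(-1)  # (batch, seq, 1) boolean mask
            k = torch.where(is_m, kp(x), k)
            v = torch.where(is_m, vp(x), v)

        def split_heads(t: torch.Tensor) -> torch.Tensor:
            return t.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)

        # Standard scaled dot-product attention over the fused text-image sequence.
        q, k, v = split_heads(q), split_heads(k), split_heads(v)
        attn = F.softmax(q @ k.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, s, d)
        return self.out_proj(out)


# Usage on a fused sequence: 16 question tokens followed by 36 image-region features.
layer = ModalityAdaptiveAttention(dim=768, num_heads=12)
tokens = torch.randn(2, 52, 768)
modality_ids = torch.cat([torch.zeros(2, 16, dtype=torch.long),
                          torch.ones(2, 36, dtype=torch.long)], dim=1)
fused = layer(tokens, modality_ids)  # -> shape (2, 52, 768)
```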
ISSN: 1046-8188, 1558-2868
DOI: 10.1145/3572833