VIKSER: Visual Knowledge-Driven Self-Reinforcing Reasoning Framework
Abstract: Visual reasoning refers to the task of answering questions about visual information. Current visual reasoning methods typically employ pre-trained vision-language model (VLM) strategies or deep neural network approaches. However, existing efforts are constrained by limited reasoning interpretability and are further hindered by the phenomenon of underspecification in the question text. Additionally, the absence of fine-grained visual knowledge limits the precise understanding of subject behavior in visual reasoning tasks. To address these issues, we propose VIKSER (Visual Knowledge-Driven Self-Reinforcing Reasoning Framework). Specifically, VIKSER, trained using knowledge distilled from large language models, extracts fine-grained visual knowledge with the assistance of visual relationship detection techniques. Subsequently, VIKSER uses this fine-grained visual knowledge to paraphrase underspecified questions. In addition, we design a novel prompting method called Chain-of-Evidence (CoE), which leverages the power of "evidence for reasoning" to endow VIKSER with interpretable reasoning capabilities. Meanwhile, the integration of self-reflection technology enables VIKSER to learn and improve from its mistakes. Experiments conducted on widely used datasets demonstrate that VIKSER achieves new state-of-the-art (SOTA) results on the relevant tasks.
DOI: 10.48550/arxiv.2502.00711