Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | Multi-Modal Large Language Models (MLLMs) have demonstrated impressive performance on various VQA tasks. However, they often lack interpretability and struggle with complex visual inputs, especially when the input image is high-resolution or when the region of interest that provides key information for answering the question is small. To address these challenges, we collect and introduce the large-scale Visual CoT dataset, comprising 438k question-answer pairs annotated with intermediate bounding boxes that highlight the key regions essential for answering the questions. About 98k of these pairs are additionally annotated with detailed reasoning steps. Importantly, we propose a multi-turn processing pipeline that dynamically focuses on visual inputs and provides interpretable thoughts. We also introduce a corresponding benchmark to evaluate MLLMs in scenarios that require identifying specific local regions. Extensive experiments demonstrate the effectiveness of our framework and shed light on better inference strategies. The Visual CoT dataset, benchmark, and pre-trained models are available at https://hao-shao.com/projects/viscot.html to support further research in this area. |
DOI: | 10.48550/arxiv.2403.16999 |
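The abstract describes a multi-turn inference flow: the model first localizes the key region of the image with a bounding box, then answers the question while attending to the zoomed-in region. Below is a minimal sketch of that flow; the `query_mllm` stub, the prompt wording, and the comma-separated box format are illustrative assumptions, not the authors' actual interface or prompts.

```python
# Sketch of a two-turn "visual CoT" inference loop, assuming a generic MLLM call.
# Turn 1: ask for a bounding box around the region needed to answer the question.
# Turn 2: crop that region and answer using the zoomed-in view.
from PIL import Image


def query_mllm(image: Image.Image, prompt: str) -> str:
    """Hypothetical stand-in for a call to a multi-modal LLM."""
    raise NotImplementedError("Replace with a real MLLM inference call.")


def visual_cot_answer(image_path: str, question: str) -> str:
    image = Image.open(image_path)

    # Turn 1: predict the key region as "x1,y1,x2,y2" pixel coordinates.
    box_reply = query_mllm(
        image,
        f"Question: {question}\n"
        "Return the bounding box (x1,y1,x2,y2) of the image region "
        "needed to answer it.",
    )
    x1, y1, x2, y2 = (int(v) for v in box_reply.split(","))

    # Turn 2: zoom into the predicted region and answer from the crop.
    crop = image.crop((x1, y1, x2, y2))
    return query_mllm(crop, f"Using this zoomed-in region, answer: {question}")
```

Depending on the implementation, the second turn might receive both the full image and the crop; the sketch passes only the crop for brevity.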