3D Concept Learning and Reasoning from Multi-View Images
Format: Article
Language: English
Abstract: Humans are able to accurately reason in 3D by gathering multi-view
observations of the surrounding world. Inspired by this insight, we introduce a
new large-scale benchmark for 3D multi-view visual question answering
(3DMV-VQA). This dataset is collected by an embodied agent actively moving and
capturing RGB images in an environment using the Habitat simulator. In total,
it consists of approximately 5k scenes, 600k images, paired with 50k questions.
We evaluate various state-of-the-art models for visual reasoning on our
benchmark and find that they all perform poorly. We suggest that a principled
approach for 3D reasoning from multi-view images should be to infer a compact
3D representation of the world from the multi-view images, which is further
grounded on open-vocabulary semantic concepts, and then to execute reasoning on
these 3D representations. As the first step towards this approach, we propose a
novel 3D concept learning and reasoning (3D-CLR) framework that seamlessly
combines these components via neural fields, 2D pre-trained vision-language
models, and neural reasoning operators. Experimental results suggest that our
framework outperforms baseline models by a large margin, but the challenge
remains largely unsolved. We further perform an in-depth analysis of the
challenges and highlight potential future directions.
DOI: 10.48550/arxiv.2303.11327
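
As a purely illustrative sketch (not the paper's implementation), the snippet below shows how the last stage of the pipeline described in the abstract, neural reasoning operators executed over a concept-grounded 3D representation, might be wired up. It assumes the neural-field reconstruction and vision-language grounding have already produced per-voxel concept scores; all names (`GroundedVoxelGrid`, `op_filter`, `op_count`, `op_relate_above`) are hypothetical and chosen here only for illustration.

```python
# Minimal sketch of reasoning over a concept-grounded 3D representation.
# NOT the authors' 3D-CLR code; the voxel-grid representation and operator
# names are assumptions made to keep the example self-contained and runnable.
from typing import Dict

import numpy as np
from scipy.ndimage import label


class GroundedVoxelGrid:
    """Compact 3D representation: per-voxel scores for open-vocabulary concepts.

    In the framework described above, these scores would be obtained by
    reconstructing a neural field from multi-view RGB images and grounding it
    with a 2D pre-trained vision-language model; here they are simply stored
    as precomputed arrays.
    """

    def __init__(self, concept_scores: Dict[str, np.ndarray]):
        # concept name -> (X, Y, Z) array of scores in [0, 1]
        self.concept_scores = concept_scores


def op_filter(grid: GroundedVoxelGrid, concept: str, threshold: float = 0.5) -> np.ndarray:
    """Return a boolean mask of voxels grounded to `concept`."""
    return grid.concept_scores[concept] > threshold


def op_count(mask: np.ndarray) -> int:
    """Count connected components in the mask as a proxy for object instances."""
    _, num_components = label(mask)
    return num_components


def op_relate_above(mask_a: np.ndarray, mask_b: np.ndarray) -> bool:
    """Toy spatial relation: does any voxel of A lie higher (larger z) than all of B?"""
    if not mask_a.any() or not mask_b.any():
        return False
    return mask_a.nonzero()[2].max() > mask_b.nonzero()[2].max()


# Toy usage for a question like "How many chairs are there?",
# with random scores standing in for real grounded concept scores.
rng = np.random.default_rng(0)
scores = {"chair": (rng.random((16, 16, 16)) > 0.97).astype(float)}
scene = GroundedVoxelGrid(scores)
chair_mask = op_filter(scene, "chair")
print("approximate chair count:", op_count(chair_mask))
```

In the full framework, the concept scores would come from distilling features of a 2D pre-trained vision-language model into the 3D neural field rather than from random data, and a question would be parsed into a program over such operators before execution.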