The All-Seeing Project V2: Towards General Relation Comprehension of the Open World
Format: | Article |
Language: | eng |
Abstract: | We present the All-Seeing Project V2: a new model and dataset designed for
understanding object relations in images. Specifically, we propose the
All-Seeing Model V2 (ASMv2), which integrates the formulation of text generation,
object localization, and relation comprehension into a relation conversation
(ReC) task. Leveraging this unified task, our model excels not only at
perceiving and recognizing all objects within an image but also at grasping
the intricate relation graph between them, diminishing the relation
hallucination often encountered by Multi-modal Large Language Models (MLLMs).
To facilitate training and evaluation of MLLMs in relation understanding, we
created the first high-quality ReC dataset (AS-V2), which is aligned with the
format of standard instruction-tuning data. In addition, we design a new
benchmark, termed Circular-based Relation Probing Evaluation (CRPE), for
comprehensively evaluating the relation comprehension capabilities of MLLMs.
Notably, our ASMv2 achieves an overall accuracy of 52.04 on this relation-aware
benchmark, surpassing LLaVA-1.5's 43.14 by a large margin. We hope that
our work can inspire future research and contribute to the evolution
towards artificial general intelligence. Our project is released at
https://github.com/OpenGVLab/all-seeing. |
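
The ReC task described in the abstract unifies text generation, object localization, and relation comprehension in standard instruction-tuning format. As a purely illustrative sketch (the field names, the `<ref>`/`<box>` markup, and the coordinates below are assumptions for illustration, not the published AS-V2 schema), one such training sample might look like:

```python
# Hypothetical ReC-style instruction-tuning sample. The schema is an
# illustrative assumption, not the published AS-V2 format.
rec_sample = {
    "image": "example.jpg",
    "conversations": [
        {"from": "human",
         "value": "Describe the relations between the objects in this image."},
        {"from": "gpt",
         # Grounded answer: each mention carries a bounding box, coupling
         # text generation with object localization in one response.
         "value": ("<ref>A man</ref><box>[[120, 40, 380, 620]]</box> is riding "
                   "<ref>a horse</ref><box>[[200, 300, 640, 700]]</box>.")},
    ],
    # Relation triplets (subject, predicate, object) forming the scene graph
    # that the model is trained to express alongside the grounded text.
    "relations": [("man", "riding", "horse")],
}
```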
DOI: | 10.48550/arxiv.2402.19474 |
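
The name "Circular-based Relation Probing Evaluation" suggests a CircularEval-style protocol, as popularized by MMBench, in which a multiple-choice question counts as correct only if the model answers it correctly under every rotation of the options. A minimal sketch under that assumption follows; `ask_model` is a hypothetical stand-in for the evaluated model's answer API:

```python
from typing import Callable, Sequence

def circular_probe(
    question: str,
    choices: Sequence[str],
    correct: str,
    ask_model: Callable[[str, Sequence[str]], str],  # hypothetical model API
) -> bool:
    """Return True only if the model picks `correct` under every rotation
    of the answer choices (a CircularEval-style criterion)."""
    n = len(choices)
    for shift in range(n):
        rotated = [choices[(i + shift) % n] for i in range(n)]
        if ask_model(question, rotated) != correct:
            return False
    return True

# Under this reading, the reported overall accuracy (e.g. 52.04 for ASMv2)
# would be the fraction of questions that pass all rotations.
```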