MULTI: Multimodal Understanding Leaderboard with Text and Images
Main Authors: |  |
---|---|
Format: | Article |
Language: | eng |
Subjects: |  |
Online Access: | Order full text |
Abstract: | The rapid development of multimodal large language models (MLLMs) raises the question of how they compare to human performance. While existing datasets often feature synthetic or overly simplistic tasks, some models have already surpassed human expert baselines. In this paper, we present MULTI, a Chinese multimodal dataset derived from authentic examination questions. Comprising over 18,000 carefully selected and refined questions, MULTI evaluates models against real-world examination standards, encompassing image-text comprehension, complex reasoning, and knowledge recall. We also introduce MULTI-Elite, a curated 500-question hard subset, and MULTI-Extend, with more than 4,500 external knowledge context pieces for testing in-context learning capabilities. Our evaluation highlights substantial room for MLLM advancement: Qwen2-VL-72B leads the 25 evaluated models with 76.9% accuracy on MULTI and 53.1% on MULTI-Elite, compared to human expert baselines of 86.1% and 73.1%. MULTI serves not only as a robust evaluation platform but also paves the way for the development of expert-level AI. |
DOI: | 10.48550/arxiv.2402.03173 |
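As a minimal illustration of the accuracy metric reported in the abstract (e.g. 76.9% on MULTI, 53.1% on MULTI-Elite), the Python sketch below scores a model's predictions against gold answers. The question fields (`question`, `answer`) and the `predict` callable are assumptions for illustration, not the authors' released evaluation code.

```python
from typing import Callable

def accuracy(questions: list[dict], predict: Callable[[dict], str]) -> float:
    """Fraction of questions where the model's answer matches the gold answer.

    Assumes each question dict carries a gold "answer" string; this mirrors
    the exact-match scoring implied by the abstract, not the paper's code.
    """
    correct = sum(predict(q).strip() == q["answer"].strip() for q in questions)
    return correct / len(questions)

# Toy usage with a two-question set and a stub model:
if __name__ == "__main__":
    toy = [{"question": "1+1=?", "answer": "2"},
           {"question": "Capital of China?", "answer": "Beijing"}]
    print(accuracy(toy, lambda q: "2"))  # 0.5, reported as 50.0% accuracy
```

On a 500-question subset like MULTI-Elite, this kind of exact-match scoring would yield the headline percentages quoted above (e.g. 0.531 for 53.1%).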