HarmonicEval: Multi-modal, Multi-task, Multi-criteria Automatic Evaluation Using a Vision Language Model
Saved in:
Main authors: | , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | Vision-language models (VLMs) have shown impressive abilities in text and
image understanding. However, existing metrics for evaluating the text
generated by VLMs focus exclusively on overall quality, leading to two
limitations: 1) it is challenging to identify which aspects of the text need
improvement from the overall score; 2) metrics may overlook specific evaluation
criteria when predicting an overall score. To address these limitations, we
propose HarmonicEval, a reference-free evaluation metric that aggregates
criterion-wise scores to produce the overall score in a bottom-up manner.
Furthermore, we construct the Multi-task Multi-criteria Human Evaluation (MMHE)
dataset, which comprises 18,000 expert human judgments across four
vision-language tasks. Our experiments demonstrate that HarmonicEval achieves
higher correlations with human judgments than conventional metrics while
providing numerical scores for each criterion. |
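The abstract describes aggregating criterion-wise scores into an overall score in a bottom-up manner. The paper's actual criteria and aggregation function are not given here; the sketch below is a hypothetical illustration, assuming a small set of example criteria and a harmonic-mean aggregation (an assumption suggested only by the metric's name), which penalizes a single weak criterion more than an arithmetic mean would:

```python
# Hypothetical sketch of bottom-up, criterion-wise score aggregation.
# The criterion names and the harmonic-mean combination are assumptions
# for illustration, not taken from the paper.
from statistics import harmonic_mean

def overall_score(criterion_scores: dict[str, float]) -> float:
    """Aggregate per-criterion scores (e.g. on a 1-5 scale) into a
    single overall score via the harmonic mean."""
    return harmonic_mean(criterion_scores.values())

scores = {"fluency": 5.0, "relevance": 4.0, "precision": 2.0}
print(round(overall_score(scores), 2))  # → 3.16
```

Because each criterion keeps its own numeric score, a low overall value can be traced back to the specific aspect (here, "precision") that needs improvement, which is the first limitation the abstract raises against overall-only metrics.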
DOI: | 10.48550/arxiv.2412.14613 |