FineD-Eval: Fine-grained Automatic Dialogue-Level Evaluation
Format: Article
Language: English
Online access: Order full text
Abstract: Recent model-based reference-free metrics for open-domain dialogue evaluation exhibit promising correlations with human judgment. However, they either perform turn-level evaluation or look at a single dialogue quality dimension. One would expect a good evaluation metric to assess multiple quality dimensions at the dialogue level. To this end, we propose a multi-dimensional dialogue-level metric consisting of three sub-metrics, each targeting a specific dimension. The sub-metrics are trained with novel self-supervised objectives and exhibit strong correlations with human judgment for their respective dimensions. Moreover, we explore two approaches to combining the sub-metrics: metric ensemble and multitask learning. Both approaches yield a holistic metric that significantly outperforms the individual sub-metrics. Compared to the existing state-of-the-art metric, the combined metrics achieve around a 16% relative improvement on average across three high-quality dialogue-level evaluation benchmarks.
DOI: 10.48550/arxiv.2210.13832
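The abstract names two ways of combining the dimension-specific sub-metrics into one holistic score. Below is a minimal Python sketch of the metric-ensemble variant; the dimension names, the placeholder scorers, and the unweighted mean combination are illustrative assumptions, not the paper's actual models or weighting scheme.

```python
# Illustrative sketch of the metric-ensemble idea from the abstract:
# three dimension-specific sub-metrics are combined into one holistic
# dialogue-level score. Dimension names, placeholder scorers, and the
# unweighted mean are assumptions, not the paper's exact method.
from statistics import mean
from typing import Callable, Dict, List

Dialogue = List[str]  # a dialogue represented as a list of utterances


def score_dimension_a(dialogue: Dialogue) -> float:
    """Placeholder: a trained sub-metric would return a score in [0, 1]."""
    return 0.8


def score_dimension_b(dialogue: Dialogue) -> float:
    """Placeholder for a second quality dimension."""
    return 0.6


def score_dimension_c(dialogue: Dialogue) -> float:
    """Placeholder for a third quality dimension."""
    return 0.7


SUB_METRICS: Dict[str, Callable[[Dialogue], float]] = {
    "dimension_a": score_dimension_a,
    "dimension_b": score_dimension_b,
    "dimension_c": score_dimension_c,
}


def ensemble_score(dialogue: Dialogue) -> float:
    """Holistic dialogue-level score: mean of the dimension-specific scores."""
    return mean(f(dialogue) for f in SUB_METRICS.values())


if __name__ == "__main__":
    dialogue = [
        "Hi, how was your weekend?",
        "Great! I went hiking and saw a waterfall.",
    ]
    print(f"Holistic dialogue-level score: {ensemble_score(dialogue):.2f}")
```

In practice each placeholder would be replaced by a trained model scoring its dimension at the dialogue level; the multitask-learning alternative mentioned in the abstract would instead train a single model to predict all three dimensions jointly.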