To Ship or Not to Ship: An Extensive Evaluation of Automatic Metrics for Machine Translation
Main authors:
Format: Article
Language: English
Online access: Order full text
Summary: Automatic metrics are commonly used as the exclusive tool for declaring the superiority of one machine translation system's quality over another. The community's choice of automatic metric guides research directions and industrial developments by deciding which models are deemed better. Evaluating how metrics correlate with human judgements has been limited by the size of the available judgement sets. In this paper, we assess how reliable metrics are when compared against human judgements on what is, to the best of our knowledge, the largest collection of judgements reported in the literature. Arguably, pairwise ranking of two systems is the most common evaluation task in research and deployment scenarios. Taking human judgement as the gold standard, we investigate which metrics have the highest accuracy in predicting translation quality rankings for such system pairs. Furthermore, we evaluate the performance of various metrics across different language pairs and domains. Lastly, we show that the sole use of BLEU has impeded the development of improved models and led to bad deployment decisions. We release the collection of 2.3M sentence-level human judgements for 4380 systems for further analysis and replication of our work.
DOI: 10.48550/arxiv.2107.10821
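The evaluation task described in the summary is pairwise system ranking: a metric is judged by how often it prefers the same system in a pair as the human judgements do. The sketch below is a minimal illustration of such a pairwise accuracy computation, not the paper's exact protocol; the function name, data layout, and example scores are assumptions made for the example.

```python
from itertools import combinations

def pairwise_accuracy(human_scores, metric_scores):
    """Fraction of system pairs where the metric ranks the two systems
    in the same order as the human judgements.

    Both arguments map system name -> system-level score (higher = better).
    Illustrative sketch only; not the paper's exact implementation.
    """
    agree, total = 0, 0
    for sys_a, sys_b in combinations(human_scores, 2):
        human_delta = human_scores[sys_a] - human_scores[sys_b]
        metric_delta = metric_scores[sys_a] - metric_scores[sys_b]
        if human_delta == 0:
            continue  # skip pairs that humans judge as tied
        total += 1
        if human_delta * metric_delta > 0:
            agree += 1  # metric and humans prefer the same system
    return agree / total if total else 0.0

# Hypothetical example: three systems scored by humans and by a metric.
human = {"sysA": 0.71, "sysB": 0.64, "sysC": 0.69}
metric = {"sysA": 34.1, "sysB": 35.0, "sysC": 33.2}
print(pairwise_accuracy(human, metric))  # 1 of 3 pairs agree -> ~0.33
```

With this kind of measure, a metric that often reverses the human preference between two systems (as in two of the three pairs above) would be a poor basis for deployment decisions, which is the failure mode the paper attributes to relying on BLEU alone.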