xCOMET: Transparent Machine Translation Evaluation through Fine-grained Error Detection
Format: Article
Language: English
Online Access: Order full text
Summary: Widely used learned metrics for machine translation evaluation, such as COMET
and BLEURT, estimate the quality of a translation hypothesis by providing a
single sentence-level score. As such, they offer little insight into
translation errors (e.g., what the errors are and what their severity is). On
the other hand, generative large language models (LLMs) are amplifying the
adoption of more granular strategies for evaluation, attempting to detail and
categorize translation errors. In this work, we introduce xCOMET, an
open-source learned metric designed to bridge the gap between these approaches.
xCOMET integrates both sentence-level evaluation and error span detection
capabilities, exhibiting state-of-the-art performance across all types of
evaluation (sentence-level, system-level, and error span detection). Moreover,
it does so while highlighting and categorizing error spans, thus enriching the
quality assessment. We also provide a robustness analysis with stress tests,
and show that xCOMET is largely capable of identifying localized critical
errors and hallucinations.
DOI: 10.48550/arxiv.2310.10482
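
Since the record describes xCOMET as an open-source metric, a minimal usage sketch may help readers try it. This is a sketch under stated assumptions, not part of the record: it assumes the `unbabel-comet` Python package (`pip install unbabel-comet`) and the `Unbabel/XCOMET-XL` checkpoint on the Hugging Face Hub (access to the checkpoint may require accepting its license there); the example sentences are purely illustrative.

```python
# Sketch assuming the `unbabel-comet` package and the Unbabel/XCOMET-XL
# checkpoint; both are assumptions based on the public COMET tooling,
# not details taken from this record.
from comet import download_model, load_from_checkpoint

# Download the checkpoint from the Hugging Face Hub and load it.
model_path = download_model("Unbabel/XCOMET-XL")
model = load_from_checkpoint(model_path)

# Each sample pairs a source sentence ("src") with a translation
# hypothesis ("mt"); a reference translation ("ref") is optional.
data = [
    {
        "src": "The cat sat on the mat.",
        "mt": "Die Katze saß auf der Matratze.",  # "mat" mistranslated as "mattress"
        "ref": "Die Katze saß auf der Matte.",
    }
]

# gpus=0 runs on CPU; set gpus=1 if a GPU is available.
model_output = model.predict(data, batch_size=8, gpus=0)

print(model_output.scores)        # sentence-level quality scores
print(model_output.system_score)  # corpus-level (system) score
# The detected error spans, with severity labels (minor/major/critical),
# should be available in the prediction metadata.
print(model_output.metadata.error_spans)
```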