The impact of reference normalization on automatic MT evaluation
| Main authors: | |
|---|---|
| Format: | Conference proceedings |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
| Abstract: | Automatic methods for MT evaluation often depend on high-quality reference data that allow the comparison of automatic translations against human translations. However, independently produced human translations necessarily differ not only in the choice of words but also in word orthography and writing style. This inconsistency between reference texts can negatively influence the quality of automatic machine translation evaluation, especially for a morphologically rich language such as Farsi. In this paper, we study the effect of character- and word-level reference preprocessing schemes for Farsi on the quality of machine translation evaluation. For this purpose, we experimentally examine their impact on three established evaluation measures. Our results show that reference normalization leads to a significant increase in automatic MT evaluation scores. |
| DOI: | 10.1109/ISTEL.2012.6483097 |
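The abstract describes applying character- and word-level preprocessing to Farsi references before automatic scoring. As a rough illustration of what such normalization involves, the Python sketch below unifies Arabic versus Persian letter forms, strips diacritics and tatweel, and handles the zero-width non-joiner consistently. These mappings are common Farsi normalization choices assumed here for illustration; they are not the exact schemes or evaluation measures studied in the paper.

```python
# Illustrative sketch of Farsi reference normalization before MT scoring.
# The character mappings and ZWNJ handling are assumptions for this example,
# not the paper's specific preprocessing schemes.
import re
import unicodedata

# Character-level: map Arabic code points to their Persian equivalents,
# delete the tatweel (kashida) filler character.
CHAR_MAP = str.maketrans({
    "\u064A": "\u06CC",  # ARABIC LETTER YEH          -> PERSIAN YEH
    "\u0649": "\u06CC",  # ARABIC ALEF MAKSURA        -> PERSIAN YEH
    "\u0643": "\u06A9",  # ARABIC LETTER KAF          -> PERSIAN KEHEH
    "\u0629": "\u0647",  # ARABIC TEH MARBUTA         -> HEH
    "\u0640": None,      # TATWEEL                    -> deleted
})
DIACRITICS = re.compile(r"[\u064B-\u065F\u0670]")  # harakat, dagger alef

def normalize_chars(text: str) -> str:
    """Character-level normalization: unify letter forms, strip diacritics."""
    text = unicodedata.normalize("NFC", text)
    text = text.translate(CHAR_MAP)
    return DIACRITICS.sub("", text)

def normalize_words(text: str) -> str:
    """Word-level normalization: collapse whitespace and keep the
    zero-width non-joiner (U+200C) attached to both sides of a compound."""
    text = re.sub(r"\s*\u200C\s*", "\u200C", text)
    return re.sub(r"\s+", " ", text).strip()

def normalize(text: str) -> str:
    return normalize_words(normalize_chars(text))

if __name__ == "__main__":
    hyp = "كتاب\u200Cهاي خوب"   # system output using Arabic kaf/yeh forms
    ref = "کتاب\u200Cهای خوب"   # reference in standard Persian orthography
    # Before normalization the two strings mismatch purely on orthography;
    # after normalization they are identical, so string-matching metrics
    # such as BLEU no longer count the orthographic variation as error.
    print(normalize(hyp) == normalize(ref))  # True
```

Applying the same normalization to both system outputs and references keeps orthographic inconsistency between independently produced references from lowering the scores that the evaluation measures assign, which is the effect the paper quantifies.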