Cyber Risks of Machine Translation Critical Errors: Arabic Mental Health Tweets as a Case Study
Format: Article
Language: English
Abstract: With the advent of Neural Machine Translation (NMT) systems, MT output has reached unprecedented accuracy levels, which has resulted in the ubiquity of MT tools on almost all online platforms with multilingual content. However, NMT systems, like other state-of-the-art generative AI systems, are prone to errors that are deemed machine hallucinations. The problem with NMT hallucinations is that they are remarkably fluent hallucinations. Because they are trained to produce grammatically correct utterances, NMT systems are capable of producing mistranslations that are too fluent to be recognised either by users of the MT tool or by the automatic quality metrics used to gauge their performance. In this paper, we introduce an authentic dataset of machine translation critical errors to point to the ethical and safety issues involved in the common use of MT. The dataset comprises mistranslations of Arabic mental health postings manually annotated with critical error types. We also show that commonly used quality metrics do not penalise critical errors and highlight this as a critical issue that merits further attention from researchers.
DOI: 10.48550/arxiv.2405.11668
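As a rough illustration of the abstract's point about automatic quality metrics, the sketch below (not taken from the paper; the sentences, the negation-drop error, and the expected score pattern are invented assumptions) uses the sacrebleu library to score a fluent critical mistranslation against a reference. Surface-overlap metrics such as sentence-level BLEU and chrF reward n-gram and character overlap, so a single meaning-flipping error can still receive a high score.

```python
# Minimal sketch (illustrative, not from the paper): surface-overlap metrics
# can fail to penalise a fluent but critical mistranslation.
# Requires: pip install sacrebleu
import sacrebleu

# Hypothetical English outputs inspired by the paper's mental-health domain;
# the sentences and the dropped negation are invented for illustration.
reference = "I can't take it anymore, I want to end my life."
critical_mistranslation = "I can take it anymore, I want to end my life."  # negation dropped
unrelated_output = "The weather is nice and I am going for a walk."

for name, hyp in [("critical error", critical_mistranslation),
                  ("unrelated output", unrelated_output)]:
    bleu = sacrebleu.sentence_bleu(hyp, [reference])
    chrf = sacrebleu.sentence_chrf(hyp, [reference])
    print(f"{name}: BLEU={bleu.score:.1f}  chrF={chrf.score:.1f}")

# Expected pattern: the fluent mistranslation scores far higher than the
# unrelated output, even though its meaning error is the more dangerous one.
```

The point of the sketch is only that overlap-based scores track lexical similarity, not safety-critical meaning, which is the gap the paper's annotated dataset is meant to expose.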