Evaluating Robustness to Input Perturbations for Neural Machine Translation
Neural Machine Translation (NMT) models are sensitive to small perturbations in the input. Robustness to such perturbations is typically measured using translation quality metrics such as BLEU on the noisy input. This paper proposes additional metrics which measure the relative degradation and changes in translation when small perturbations are added to the input. We focus on a class of models employing subword regularization to address robustness and perform extensive evaluations of these models using the robustness measures proposed. Results show that our proposed metrics reveal a clear trend of improved robustness to perturbations when subword regularization methods are used.
Saved in:
Main authors: | Niu, Xing; Mathur, Prashant; Dinu, Georgiana; Al-Onaizan, Yaser |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Computation and Language |
Online access: | Order full text |
creator | Niu, Xing; Mathur, Prashant; Dinu, Georgiana; Al-Onaizan, Yaser |
description | Neural Machine Translation (NMT) models are sensitive to small perturbations
in the input. Robustness to such perturbations is typically measured using
translation quality metrics such as BLEU on the noisy input. This paper
proposes additional metrics which measure the relative degradation and changes
in translation when small perturbations are added to the input. We focus on a
class of models employing subword regularization to address robustness and
perform extensive evaluations of these models using the robustness measures
proposed. Results show that our proposed metrics reveal a clear trend of
improved robustness to perturbations when subword regularization methods are
used. |
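The two metric families named in the abstract (relative degradation of quality under noise, and changes in the translation itself) can be sketched as follows. This is an illustrative reading of the abstract only, not the paper's exact definitions; the function names `relative_degradation` and `output_change` and the unigram-F1 proxy are assumptions made here for the example.

```python
def relative_degradation(quality_clean: float, quality_noisy: float) -> float:
    """Fraction of translation quality retained under input noise.

    The two arguments could be BLEU scores of the model's output on clean
    vs. perturbed input, against the same reference. 1.0 means no
    degradation; lower values mean less robustness.
    """
    if quality_clean == 0.0:
        return 0.0
    return quality_noisy / quality_clean


def output_change(tokens_clean: list[str], tokens_noisy: list[str]) -> float:
    """Crude proxy for how much the translation itself changes:
    1 - unigram F1 between the two outputs (0.0 = identical vocab)."""
    if not tokens_clean or not tokens_noisy:
        return 1.0
    overlap = len(set(tokens_clean) & set(tokens_noisy))
    precision = overlap / len(set(tokens_noisy))
    recall = overlap / len(set(tokens_clean))
    if precision + recall == 0.0:
        return 1.0
    return 1.0 - 2 * precision * recall / (precision + recall)


# Example: BLEU drops from 30 to 24 under noise -> 80% of quality retained.
print(relative_degradation(30.0, 24.0))  # -> 0.8
print(output_change("the cat sat".split(), "the cat sits".split()))
```

A real evaluation would compute the quality scores with a standard tool such as sacreBLEU and compare system outputs on paired clean/noisy test sets; the ratio form above is just one natural way to express "relative" degradation.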
doi_str_mv | 10.48550/arxiv.2005.00580 |
format | Article |
identifier | DOI: 10.48550/arxiv.2005.00580 |
language | eng |
recordid | cdi_arxiv_primary_2005_00580 |
source | arXiv.org |
subjects | Computer Science - Computation and Language |
title | Evaluating Robustness to Input Perturbations for Neural Machine Translation |