Lack of Fluency is Hurting Your Translation Model

Many machine translation models are trained on bilingual corpora, which consist of aligned sentence pairs from two different languages with the same meaning. However, there is a qualitative discrepancy between the training and test sets in bilingual corpora. While most training sentences are created via automatic techniques such as crawling and sentence-alignment methods, test sentences are annotated by humans with fluency in mind. We hypothesize that this discrepancy in the training corpus degrades the translation model's performance. In this work, we define "fluency noise" to identify which parts of training sentences make them seem unnatural. We show that fluency noise can be detected by a simple gradient-based method with a pre-trained classifier. By removing fluency noise from training sentences, our final model outperforms the baseline on WMT-14 DE→EN and RU→EN. We also show compatibility with back-translation augmentation, which is commonly used to improve the fluency of translation models. Finally, a qualitative analysis of fluency noise provides insight into which points deserve attention.
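The abstract's "simple gradient-based method with a pre-trained classifier" can be illustrated with a toy sketch. Everything below is a made-up stand-in, not the authors' implementation: a linear "fluency classifier" over mean-pooled token embeddings, where the gradient of the fluency logit with respect to each token's embedding has a closed form, so gradient-times-input attribution per token reduces to a dot product. Tokens with unusually high attribution would be candidate fluency noise.

```python
import random

random.seed(0)
DIM = 8

def randvec(dim):
    # Random Gaussian vector; stands in for learned embeddings/weights.
    return [random.gauss(0, 1) for _ in range(dim)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical vocabulary; "|||" and "http" mimic crawl artifacts.
vocab = ["the", "cat", "sat", "on", "mat", "|||", "http"]
emb = {tok: randvec(DIM) for tok in vocab}  # stand-in token embeddings
w = randvec(DIM)                            # stand-in classifier weights

def fluency_logit(tokens):
    """Mean-pool token embeddings, then apply the linear 'fluency' classifier."""
    pooled = [sum(emb[t][i] for t in tokens) / len(tokens) for i in range(DIM)]
    return dot(w, pooled)

def token_saliency(tokens):
    """Gradient-times-input attribution per token.

    With mean pooling, d(logit)/d(emb[t]) = w / n, so the attribution of
    token t is |(w / n) . emb[t]|; large magnitudes mark influential tokens.
    """
    n = len(tokens)
    return {t: abs(dot(w, emb[t])) / n for t in tokens}

sentence = ["the", "cat", "sat", "on", "the", "|||"]
scores = token_saliency(sentence)
# Tokens whose attribution is unusually high are flagged as candidate noise.
```

In the paper's setting the classifier would be a pre-trained model scoring fluency and the gradient would come from backpropagation rather than a closed form; this sketch only shows the attribution idea.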

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Yoo, Jaehyo; Kang, Jaewoo
Format: Article
Language: eng
DOI: 10.48550/arxiv.2205.11826
Date: 2022-05-24
Source: arXiv.org
Subjects: Computer Science - Computation and Language