Data Scaling Laws in NMT: The Effect of Noise and Architecture
Main authors: | Bansal, Yamini; Ghorbani, Behrooz; Garg, Ankush; Zhang, Biao; Krikun, Maxim; Cherry, Colin; Neyshabur, Behnam; Firat, Orhan |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computation and Language; Computer Science - Learning |
Online Access: | Order full text |
creator | Bansal, Yamini; Ghorbani, Behrooz; Garg, Ankush; Zhang, Biao; Krikun, Maxim; Cherry, Colin; Neyshabur, Behnam; Firat, Orhan |
description | In this work, we study the effect of varying the architecture and training data quality on the data scaling properties of Neural Machine Translation (NMT). First, we establish that the test loss of encoder-decoder transformer models scales as a power law in the number of training samples, with a dependence on the model size. Then, we systematically vary aspects of the training setup to understand how they impact the data scaling laws. In particular, we change the following: (1) Architecture and task setup: we compare to a transformer-LSTM hybrid and to a decoder-only transformer with a language modeling loss. (2) Noise level in the training distribution: we experiment with filtering and with adding i.i.d. synthetic noise. In all of the above cases, we find that the data scaling exponents are minimally impacted, suggesting that marginally worse architectures or training data can be compensated for by adding more data. Lastly, we find that using back-translated data instead of parallel data can significantly degrade the scaling exponent. |
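The description's central quantitative claim is that test loss follows a power law in the number of training samples, with a scaling exponent that the studied interventions may or may not change. This record does not reproduce the paper's exact parameterization, so the sketch below assumes the commonly used form L(D) = β · D^(−α) + L∞ and shows how such an exponent α could be fitted from (dataset size, test loss) measurements with SciPy; the helper name `scaling_law` and all numbers are illustrative, not taken from the paper.

```python
# Minimal sketch, not the paper's code: fit an assumed scaling-law form
#   L(D) = beta * D**(-alpha) + L_inf
# to (training-set size, test loss) pairs and read off the exponent alpha.
# All data below are synthetic and purely illustrative.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(D, beta, alpha, L_inf):
    # Test loss decays as a power law in dataset size D, down to a floor L_inf.
    return beta * D ** (-alpha) + L_inf

rng = np.random.default_rng(0)
D = np.logspace(5, 7.5, num=8)                  # 1e5 ... ~3.2e7 training samples
loss = scaling_law(D, 80.0, 0.3, 1.7)           # synthetic "measurements"
loss = loss + rng.normal(0.0, 0.02, size=D.shape)

(beta, alpha, L_inf), _ = curve_fit(scaling_law, D, loss, p0=[50.0, 0.25, 1.0])
print(f"fitted data scaling exponent alpha = {alpha:.3f}, loss floor = {L_inf:.3f}")
```

Under this assumed form, the paper's finding that architecture and moderate noise leave the exponent largely unchanged corresponds to fits that differ mainly in β and L∞ while α stays roughly constant, whereas the reported degradation from back-translated data would show up as a noticeably smaller α.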
doi | 10.48550/arxiv.2202.01994 |
format | Article |
identifier | DOI: 10.48550/arxiv.2202.01994 |
language | eng |
recordid | cdi_arxiv_primary_2202_01994 |
source | arXiv.org |
subjects | Computer Science - Computation and Language; Computer Science - Learning |
title | Data Scaling Laws in NMT: The Effect of Noise and Architecture |
url | https://arxiv.org/abs/2202.01994 |