No Language Left Behind: Scaling Human-Centered Machine Translation
Main authors: | NLLB Team; Costa-jussà, Marta R; Cross, James; Çelebi, Onur; Elbayad, Maha; Heafield, Kenneth; Heffernan, Kevin; Kalbassi, Elahe; Lam, Janice; Licht, Daniel; Maillard, Jean; Sun, Anna; Wang, Skyler; Wenzek, Guillaume; Youngblood, Al; Akula, Bapi; Barrault, Loic; Gonzalez, Gabriel Mejia; Hansanti, Prangthip; Hoffman, John; Jarrett, Semarley; Sadagopan, Kaushik Ram; Rowe, Dirk; Spruit, Shannon; Tran, Chau; Andrews, Pierre; Ayan, Necip Fazil; Bhosale, Shruti; Edunov, Sergey; Fan, Angela; Gao, Cynthia; Goswami, Vedanuj; Guzmán, Francisco; Koehn, Philipp; Mourachko, Alexandre; Ropers, Christophe; Saleem, Safiyyah; Schwenk, Holger; Wang, Jeff |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computation and Language |
Online access: | Order full text |
creator | NLLB Team; Costa-jussà, Marta R; Cross, James; Çelebi, Onur; Elbayad, Maha; Heafield, Kenneth; Heffernan, Kevin; Kalbassi, Elahe; Lam, Janice; Licht, Daniel; Maillard, Jean; Sun, Anna; Wang, Skyler; Wenzek, Guillaume; Youngblood, Al; Akula, Bapi; Barrault, Loic; Gonzalez, Gabriel Mejia; Hansanti, Prangthip; Hoffman, John; Jarrett, Semarley; Sadagopan, Kaushik Ram; Rowe, Dirk; Spruit, Shannon; Tran, Chau; Andrews, Pierre; Ayan, Necip Fazil; Bhosale, Shruti; Edunov, Sergey; Fan, Angela; Gao, Cynthia; Goswami, Vedanuj; Guzmán, Francisco; Koehn, Philipp; Mourachko, Alexandre; Ropers, Christophe; Saleem, Safiyyah; Schwenk, Holger; Wang, Jeff |
description | Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system. Finally, we open source all contributions described in this work, accessible at https://github.com/facebookresearch/fairseq/tree/nllb. |
doi_str_mv | 10.48550/arxiv.2207.04672 |
format | Article |
identifier | DOI: 10.48550/arxiv.2207.04672 |
language | eng |
recordid | cdi_arxiv_primary_2207_04672 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computation and Language |
title | No Language Left Behind: Scaling Human-Centered Machine Translation |