Universal Vector Neural Machine Translation With Effective Attention
Neural Machine Translation (NMT) leverages one or more trained neural networks for the translation of phrases. Sutskever et al. introduced a sequence-to-sequence encoder-decoder model that became the standard for NMT-based systems. Attention mechanisms were later introduced to address issues with the translation of long sentences and to improve overall accuracy. In this paper, we propose a single model for Neural Machine Translation based on encoder-decoder models. Most translation models are trained as one model for one translation direction. We introduce a neutral/universal model representation that can be used to predict more than one language depending on the source and a provided target. Second, we introduce an attention model by adding an overall learning vector to the multiplicative model. With these two changes, the novel universal model reduces the number of models needed for multiple-language translation applications.
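The abstract only gestures at how the "overall learning vector" enters the multiplicative (Luong-style) attention score, and this record contains no equations. The NumPy sketch below is therefore a minimal illustration under assumptions: it takes the standard bilinear score h_t^T W_a h_s and adds a contribution from an extra trained vector v at every source position. The names multiplicative_attention, W_a, and v are placeholders chosen for this example and are not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multiplicative_attention(decoder_state, encoder_states, W_a, v=None):
    """Luong-style multiplicative attention over the encoder states.

    decoder_state  : (d_dec,)       current decoder hidden state h_t
    encoder_states : (T_src, d_enc) encoder hidden states h_s
    W_a            : (d_dec, d_enc) trained bilinear weight matrix
    v              : (d_enc,) or None
        Hypothetical "overall learning vector": an extra trained parameter
        folded into every score (an assumption made for illustration, not
        the paper's verified formulation).
    """
    # Standard multiplicative score: score_s = h_t^T W_a h_s
    scores = encoder_states @ (W_a.T @ decoder_state)   # (T_src,)
    if v is not None:
        # Assumed placement of the extra vector: a global additive term v . h_s
        scores = scores + encoder_states @ v             # (T_src,)
    weights = softmax(scores)                            # attention weights over source positions
    context = weights @ encoder_states                   # (d_enc,) context vector
    return context, weights

# Toy usage: 5 source positions, 4-dimensional hidden states
rng = np.random.default_rng(0)
enc = rng.normal(size=(5, 4))    # encoder hidden states
dec = rng.normal(size=(4,))      # current decoder hidden state
W_a = rng.normal(size=(4, 4))
v = rng.normal(size=(4,))
context, attn = multiplicative_attention(dec, enc, W_a, v)
print(attn.round(3), context.round(3))
```

In a real system, W_a and v would be learned jointly with the encoder and decoder; this sketch only shows where such a vector could enter the score computation.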
Saved in:
Published in: | arXiv.org 2020-06 |
---|---|
Main authors: | Mylapore, Satish; Ryan Quincy Paul; Yi, Joshua; Slater, Robert D |
Format: | Article |
Language: | eng |
Subjects: | Coders; Encoders-Decoders; Language translation; Machine translation; Neural networks; Sentences |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Mylapore, Satish; Ryan Quincy Paul; Yi, Joshua; Slater, Robert D |
description | Neural Machine Translation (NMT) leverages one or more trained neural networks for the translation of phrases. Sutskever et al. introduced a sequence-to-sequence encoder-decoder model that became the standard for NMT-based systems. Attention mechanisms were later introduced to address issues with the translation of long sentences and to improve overall accuracy. In this paper, we propose a single model for Neural Machine Translation based on encoder-decoder models. Most translation models are trained as one model for one translation direction. We introduce a neutral/universal model representation that can be used to predict more than one language depending on the source and a provided target. Second, we introduce an attention model by adding an overall learning vector to the multiplicative model. With these two changes, the novel universal model reduces the number of models needed for multiple-language translation applications. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2020-06 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2411955139 |
source | Free E-Journals |
subjects | Coders; Encoders-Decoders; Language translation; Machine translation; Neural networks; Sentences |
title | Universal Vector Neural Machine Translation With Effective Attention |