Eliciting the Translation Ability of Large Language Models via Multilingual Finetuning with Translation Instructions

Large-scale Pretrained Language Models (LLMs), such as ChatGPT and GPT4, have shown strong abilities in multilingual translations, without being explicitly trained on parallel corpora. It is interesting how the LLMs obtain their ability to carry out translation instructions for different languages....

Bibliographic Details
Main authors: Li, Jiahuan; Zhou, Hao; Huang, Shujian; Cheng, Shanbo; Chen, Jiajun
Format: Article
Language: eng
Subjects: Computer Science - Computation and Language
Online access: Order full text
creator Li, Jiahuan; Zhou, Hao; Huang, Shujian; Cheng, Shanbo; Chen, Jiajun
description Large-scale Pretrained Language Models (LLMs), such as ChatGPT and GPT4, have shown strong abilities in multilingual translations, without being explicitly trained on parallel corpora. It is interesting how the LLMs obtain their ability to carry out translation instructions for different languages. In this paper, we present a detailed analysis by finetuning a multilingual pretrained language model, XGLM-7B, to perform multilingual translation following given instructions. Firstly, we show that multilingual LLMs have stronger translation abilities than previously demonstrated. For a certain language, the performance depends on its similarity to English and the amount of data used in the pretraining phase. Secondly, we find that LLMs' ability to carry out translation instructions relies on the understanding of translation instructions and the alignment among different languages. With multilingual finetuning, LLMs could learn to perform the translation task well even for those language pairs unseen during the instruction tuning phase.
doi_str_mv 10.48550/arxiv.2305.15083
format Article
creationdate 2023-05-24
rights http://creativecommons.org/licenses/by/4.0
backlink https://arxiv.org/abs/2305.15083
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2305.15083
language eng
recordid cdi_arxiv_primary_2305_15083
source arXiv.org
subjects Computer Science - Computation and Language
title Eliciting the Translation Ability of Large Language Models via Multilingual Finetuning with Translation Instructions
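The abstract describes eliciting translation ability by finetuning XGLM-7B on translation instructions. The sketch below is a rough illustration of such a setup, not the paper's actual recipe: it formats parallel sentences with a generic instruction template and runs standard causal-LM finetuning with Hugging Face Transformers. The template, hyperparameters, toy data, and the checkpoint name "facebook/xglm-7.5B" are assumptions made here for illustration.

# Hypothetical sketch of instruction finetuning a multilingual causal LM on
# translation pairs. Prompt template, hyperparameters, and the checkpoint name
# are illustrative assumptions, not details taken from the paper.
from torch.utils.data import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

MODEL_NAME = "facebook/xglm-7.5B"  # public XGLM checkpoint; the paper refers to XGLM-7B
PROMPT = "Translate the following sentence from {src_lang} to {tgt_lang}.\n{src}\n"

class TranslationInstructionDataset(Dataset):
    """Wraps (src_lang, tgt_lang, src, tgt) tuples as instruction-following examples."""
    def __init__(self, pairs, tokenizer, max_len=256):
        self.pairs, self.tok, self.max_len = pairs, tokenizer, max_len

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        src_lang, tgt_lang, src, tgt = self.pairs[idx]
        prompt = PROMPT.format(src_lang=src_lang, tgt_lang=tgt_lang, src=src)
        enc = self.tok(prompt + tgt + self.tok.eos_token, truncation=True,
                       max_length=self.max_len, padding="max_length", return_tensors="pt")
        input_ids = enc["input_ids"].squeeze(0)
        attention_mask = enc["attention_mask"].squeeze(0)
        labels = input_ids.clone()
        # Approximate masking: compute the loss only on the target side, not the instruction.
        prompt_len = len(self.tok(prompt)["input_ids"])
        labels[:prompt_len] = -100
        labels[attention_mask == 0] = -100  # ignore padding
        return {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}

if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

    # Toy parallel data standing in for real multilingual instruction data.
    pairs = [("German", "English", "Das Haus ist klein.", "The house is small."),
             ("French", "English", "Le chat dort.", "The cat is sleeping.")]
    train_set = TranslationInstructionDataset(pairs, tokenizer)

    args = TrainingArguments(output_dir="xglm-mt-instruct", per_device_train_batch_size=1,
                             num_train_epochs=1, learning_rate=2e-5, logging_steps=1)
    Trainer(model=model, args=args, train_dataset=train_set).train()

After finetuning, feeding the same instruction template without the target (e.g. for a language pair unseen during tuning) and calling model.generate would produce the translation, which is the behavior the abstract attributes to multilingual instruction tuning.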