WeLM: A Well-Read Pre-trained Language Model for Chinese

Large Language Models pre-trained with self-supervised learning have demonstrated impressive zero-shot generalization capabilities on a wide spectrum of tasks. In this work, we present WeLM: a well-read pre-trained language model for Chinese that is able to seamlessly perform different types of tasks with zero or few-shot demonstrations. WeLM is trained with 10B parameters by "reading" a curated high-quality corpus covering a wide range of topics. We show that WeLM is equipped with broad knowledge on various domains and languages. On 18 monolingual (Chinese) tasks, WeLM can significantly outperform existing pre-trained models with similar sizes and match the performance of models up to 25 times larger. WeLM also exhibits strong capabilities in multi-lingual and code-switching understanding, outperforming existing multilingual language models pre-trained on 30 languages. Furthermore, we collected human-written prompts for a large set of supervised datasets in Chinese and fine-tuned WeLM with multi-prompted training. The resulting model can attain strong generalization on unseen types of tasks and outperform the unsupervised WeLM in zero-shot learning. Finally, we demonstrate that WeLM has basic skills at explaining and calibrating the decisions from itself, which can be promising directions for future research. Our models can be accessed from https://welm.weixin.qq.com/docs/api/.
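The abstract describes prompting WeLM with zero- or few-shot demonstrations and notes that the model is exposed through a web API (documentation at https://welm.weixin.qq.com/docs/api/). As a rough illustration of that workflow only, the Python sketch below assembles a small few-shot prompt and posts it to a text-completion endpoint. The endpoint path, JSON field names (prompt, max_tokens, temperature), and the bearer-token header are hypothetical placeholders rather than the documented WeLM schema; consult the linked API docs for the actual request format.

# Minimal sketch of few-shot prompting against a hypothetical text-completion
# endpoint. The URL path, JSON field names, and auth header are assumptions;
# see https://welm.weixin.qq.com/docs/api/ for the real request schema.
import requests

# A few in-context demonstrations (Chinese sentiment classification),
# followed by the query the model should complete.
demonstrations = [
    ("这部电影太精彩了！", "正面"),
    ("服务态度很差，不会再来。", "负面"),
]
query = "菜品新鲜，上菜也快。"

prompt = "".join(f"评论：{text}\n情感：{label}\n\n" for text, label in demonstrations)
prompt += f"评论：{query}\n情感："

response = requests.post(
    "https://welm.weixin.qq.com/v1/completions",          # hypothetical endpoint
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},   # hypothetical auth scheme
    json={
        "prompt": prompt,     # assumed field name
        "max_tokens": 4,      # assumed field name
        "temperature": 0.0,   # greedy decoding for short-label tasks
    },
    timeout=30,
)
print(response.json())

Greedy decoding (temperature 0) is used here because the demonstrations frame the task as short-label classification; for open-ended generation a higher temperature and a larger token budget would be more appropriate.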

Detailed Description

Saved in:
Bibliographic Details
Published in: arXiv.org, 2023-05
Main authors: Su, Hui; Zhou, Xiao; Yu, Houjin; Shen, Xiaoyu; Chen, Yuwen; Zhu, Zilin; Yang, Yu; Zhou, Jie
Format: Article
Language: English
Subjects: Languages; Supervised learning
Online access: Full text
container_title arXiv.org
creator Su, Hui
Zhou, Xiao
Yu, Houjin
Shen, Xiaoyu
Chen, Yuwen
Zhu, Zilin
Yang, Yu
Zhou, Jie
description Large Language Models pre-trained with self-supervised learning have demonstrated impressive zero-shot generalization capabilities on a wide spectrum of tasks. In this work, we present WeLM: a well-read pre-trained language model for Chinese that is able to seamlessly perform different types of tasks with zero or few-shot demonstrations. WeLM is trained with 10B parameters by "reading" a curated high-quality corpus covering a wide range of topics. We show that WeLM is equipped with broad knowledge on various domains and languages. On 18 monolingual (Chinese) tasks, WeLM can significantly outperform existing pre-trained models with similar sizes and match the performance of models up to 25 times larger. WeLM also exhibits strong capabilities in multi-lingual and code-switching understanding, outperforming existing multilingual language models pre-trained on 30 languages. Furthermore, we collected human-written prompts for a large set of supervised datasets in Chinese and fine-tuned WeLM with multi-prompted training. The resulting model can attain strong generalization on unseen types of tasks and outperform the unsupervised WeLM in zero-shot learning. Finally, we demonstrate that WeLM has basic skills at explaining and calibrating the decisions from itself, which can be promising directions for future research. Our models can be accessed from https://welm.weixin.qq.com/docs/api/.
format Article
identifier EISSN: 2331-8422
ispartof arXiv.org, 2023-05
issn 2331-8422
language eng
recordid cdi_proquest_journals_2717198545
source Free E-Journals
subjects Languages
Supervised learning
title WeLM: A Well-Read Pre-trained Language Model for Chinese