YuLan: An Open-source Large Language Model
Saved in:
Main authors: | Zhu, Yutao; Zhou, Kun; Mao, Kelong; Chen, Wentong; Sun, Yiding; Chen, Zhipeng; Cao, Qian; Wu, Yihan; Chen, Yushuo; Wang, Feng; Zhang, Lei; Li, Junyi; Wang, Xiaolei; Wang, Lei; Zhang, Beichen; Dong, Zican; Cheng, Xiaoxue; Chen, Yuhan; Tang, Xinyu; Hou, Yupeng; Ren, Qiangqiang; Pang, Xincheng; Xie, Shufang; Zhao, Wayne Xin; Dou, Zhicheng; Mao, Jiaxin; Lin, Yankai; Song, Ruihua; Xu, Jun; Chen, Xu; Yan, Rui; Wei, Zhewei; Hu, Di; Huang, Wenbing; Gao, Ze-Feng; Chen, Yueguo; Lu, Weizheng; Wen, Ji-Rong |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computation and Language |
Online Access: | Order full text |
creator | Zhu, Yutao Zhou, Kun Mao, Kelong Chen, Wentong Sun, Yiding Chen, Zhipeng Cao, Qian Wu, Yihan Chen, Yushuo Wang, Feng Zhang, Lei Li, Junyi Wang, Xiaolei Wang, Lei Zhang, Beichen Dong, Zican Cheng, Xiaoxue Chen, Yuhan Tang, Xinyu Hou, Yupeng Ren, Qiangqiang Pang, Xincheng Xie, Shufang Zhao, Wayne Xin Dou, Zhicheng Mao, Jiaxin Lin, Yankai Song, Ruihua Xu, Jun Chen, Xu Yan, Rui Wei, Zhewei Hu, Di Huang, Wenbing Gao, Ze-Feng Chen, Yueguo Lu, Weizheng Wen, Ji-Rong |
description | Large language models (LLMs) have become the foundation of many applications,
leveraging their extensive capabilities in processing and understanding natural
language. While many open-source LLMs have been released with technical
reports, the lack of training details hinders further research and development.
This paper presents the development of YuLan, a series of open-source LLMs with
$12$ billion parameters. The base model of YuLan is pre-trained on
approximately $1.7$T tokens derived from a diverse corpus, including massive
English, Chinese, and multilingual texts. We design a three-stage pre-training
method to enhance YuLan's overall capabilities. Subsequent phases of training
incorporate instruction tuning and human alignment, employing a substantial
volume of high-quality synthesized data. To facilitate the learning of complex
and long-tail knowledge, we devise a curriculum-learning framework spanning
these stages, which helps LLMs learn knowledge in an easy-to-hard manner.
YuLan's training was completed in January 2024, and the model achieves
performance on par with state-of-the-art LLMs across various English and
Chinese benchmarks. This paper outlines a comprehensive technical roadmap for
developing LLMs from scratch. Our model and code are available at
https://github.com/RUC-GSAI/YuLan-Chat. |
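The description above mentions a curriculum-learning framework that presents training data in an easy-to-hard order across stages. The record gives no implementation details, so the following is only a minimal illustrative sketch of such ordering; the `Sample` class, the difficulty scores, and the three-stage split are assumptions for illustration, not YuLan's actual pipeline.

```python
# Minimal sketch of easy-to-hard curriculum ordering (illustrative only).
# The Sample type, difficulty scores, and stage count are assumptions,
# not taken from the YuLan paper or repository.
from dataclasses import dataclass
from typing import List


@dataclass
class Sample:
    text: str
    difficulty: float  # e.g., estimated from length or a reference model's loss


def curriculum_stages(samples: List[Sample], num_stages: int = 3) -> List[List[Sample]]:
    """Sort samples by difficulty and split them into consecutive stages, easiest first."""
    ranked = sorted(samples, key=lambda s: s.difficulty)
    stage_size = (len(ranked) + num_stages - 1) // num_stages  # ceiling division
    return [ranked[i:i + stage_size] for i in range(0, len(ranked), stage_size)]


if __name__ == "__main__":
    data = [
        Sample("short, common sentence", 0.1),
        Sample("moderately complex paragraph", 0.5),
        Sample("long-tail domain-knowledge passage", 0.9),
    ]
    for stage_id, stage in enumerate(curriculum_stages(data), start=1):
        print(f"stage {stage_id}: {[s.text for s in stage]}")
```

In a sketch like this, earlier stages would feed the model simpler, more common text before the harder long-tail material, mirroring the easy-to-hard progression the description refers to.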
doi_str_mv | 10.48550/arxiv.2406.19853 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2406.19853 |
language | eng |
recordid | cdi_arxiv_primary_2406_19853 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence Computer Science - Computation and Language |
title | YuLan: An Open-source Large Language Model |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-18T13%3A22%3A31IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=YuLan:%20An%20Open-source%20Large%20Language%20Model&rft.au=Zhu,%20Yutao&rft.date=2024-06-28&rft_id=info:doi/10.48550/arxiv.2406.19853&rft_dat=%3Carxiv_GOX%3E2406_19853%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |