Chinese Open Instruction Generalist: A Preliminary Release
Main authors:
Format: Article
Language: English
Abstract: Instruction tuning is widely recognized as a key technique for building generalist language models, and it has attracted the attention of researchers and the public with the release of InstructGPT (Ouyang et al., 2022) and ChatGPT (https://chat.openai.com/). Despite impressive progress in English-oriented large language models (LLMs), it remains under-explored whether English-based foundation LLMs, given well-designed instruction tuning, can perform comparably on multilingual tasks, and how the corpora needed for such tuning can be constructed. To remedy this gap, we propose this project as an attempt to create a Chinese instruction dataset using various methods adapted to the intrinsic characteristics of four sub-tasks. We collect around 200k Chinese instruction-tuning samples, which have been manually checked to guarantee high quality. We also summarize the existing English and Chinese instruction corpora and briefly describe some potential applications of the newly constructed Chinese instruction corpora. The resulting Chinese Open Instruction Generalist (COIG) corpora are available on Huggingface (https://huggingface.co/datasets/BAAI/COIG) and GitHub (https://github.com/BAAI-Zlab/COIG), and will be continuously updated.
DOI: 10.48550/arxiv.2304.07987
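The abstract notes that the COIG corpora are hosted on the Huggingface Hub. The snippet below is a minimal sketch of pulling them with the `datasets` library; it assumes the BAAI/COIG repository can be resolved by load_dataset without extra arguments, which may not hold, in which case the configuration or data_files names should be taken from the repository page itself.

```python
# Minimal sketch (not from the paper) of loading the COIG corpora from the
# Huggingface Hub using the `datasets` library.
# Assumption: load_dataset can resolve the files in BAAI/COIG on its own;
# if it cannot, pass data_files= with the file names listed at
# https://huggingface.co/datasets/BAAI/COIG.
from datasets import load_dataset

coig = load_dataset("BAAI/COIG")  # may need data_files=... for a specific subset
print(coig)                       # inspect the available splits and columns
```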