Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models
Format: | Article |
Language: | English |
Abstract: | This report introduces \texttt{EEVE-Korean-v1.0}, a Korean adaptation of
large language models that exhibits remarkable capabilities in both English
and Korean text understanding. Building on recent highly capable but
English-centric LLMs such as SOLAR-10.7B and Phi-2, whose English-centric
tokenizers process non-English text inefficiently, we present an efficient
and effective vocabulary expansion (EEVE) method that combines parameter
freezing with subword initialization. In contrast to previous efforts that
assume new embeddings require trillions of training tokens, we show that our
method can significantly boost non-English proficiency within just 2 billion
tokens. As of January 2024, our model \texttt{EEVE-Korean-10.8B-v1.0}
surpasses most instruction-tuned LLMs on the Open Ko-LLM Leaderboard and
ranks as the leading Korean pre-trained model in the open-source community,
according to Hugging Face's leaderboard. We open-source our models on
Hugging Face to empower the open research community in various languages. |
---|---|
DOI: | 10.48550/arxiv.2402.14714 |
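
The abstract names two ingredients of the EEVE recipe: subword-based initialization of new token embeddings and parameter freezing. The sketch below illustrates the general idea against the Hugging Face transformers API; it is a minimal illustration assuming untied input/output embeddings, and the helper name `expand_vocab_with_subword_init` is hypothetical rather than taken from the paper's released code, whose exact staged freezing schedule is described in the report itself.

```python
# Minimal sketch of EEVE-style vocabulary expansion, assuming a Hugging Face
# causal LM with untied input/output embeddings. Illustrative only: the helper
# name and the simple freezing step are assumptions, not the authors' code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def expand_vocab_with_subword_init(model, tokenizer, new_tokens):
    # Decompose each new token with the *old* tokenizer first, so we still get
    # its subword pieces rather than the soon-to-be-added single id.
    piece_ids = [tokenizer.encode(t, add_special_tokens=False) for t in new_tokens]

    old_in = model.get_input_embeddings().weight.detach().clone()
    old_out = model.get_output_embeddings().weight.detach().clone()

    tokenizer.add_tokens(new_tokens)
    model.resize_token_embeddings(len(tokenizer))

    in_emb = model.get_input_embeddings().weight
    out_emb = model.get_output_embeddings().weight
    with torch.no_grad():
        for tok, ids in zip(new_tokens, piece_ids):
            new_id = tokenizer.convert_tokens_to_ids(tok)
            if ids:
                # Subword initialization: start the new embedding at the mean
                # of its old subword embeddings instead of random noise.
                in_emb[new_id] = old_in[ids].mean(dim=0)
                out_emb[new_id] = old_out[ids].mean(dim=0)

    # Parameter freezing: train only the embedding matrices at first; in
    # practice gradients for the old rows can additionally be masked out.
    for p in model.parameters():
        p.requires_grad = False
    in_emb.requires_grad = True
    out_emb.requires_grad = True
    return model, tokenizer

# Example usage with one of the base models named in the abstract.
model_name = "upstage/SOLAR-10.7B-v1.0"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model, tokenizer = expand_vocab_with_subword_init(model, tokenizer, ["안녕하세요"])
```

Averaging the old subword embeddings gives each new Korean token a semantically sensible starting point rather than a random one, which is presumably what lets the reported adaptation reach strong Korean proficiency within only about 2 billion training tokens.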