Too Late to Train, Too Early To Use? A Study on Necessity and Viability of Low-Resource Bengali LLMs
Saved in:
| Main authors: | , , , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
| Summary: | Each new generation of English-oriented Large Language Models (LLMs) exhibits enhanced cross-lingual transfer capabilities and significantly outperforms older LLMs on low-resource languages. This prompts the question: Is there a need for LLMs dedicated to a particular low-resource language? We aim to explore this question for Bengali, a low-to-moderate resource Indo-Aryan language native to the Bengal region of South Asia. We compare the performance of open-weight and closed-source LLMs such as LLaMA-3 and GPT-4 against fine-tuned encoder-decoder models across a diverse set of Bengali downstream tasks, including translation, summarization, paraphrasing, question-answering, and natural language inference. Our findings reveal that while LLMs generally excel in reasoning tasks, their performance in tasks requiring Bengali script generation is inconsistent. Key challenges include inefficient tokenization of Bengali script by existing LLMs, leading to increased computational costs and potential performance degradation. Additionally, we highlight biases in machine-translated datasets commonly used for Bengali NLP tasks. We conclude that there is a significant need for a Bengali-oriented LLM, but the field currently lacks the high-quality pretraining and instruction-tuning datasets necessary to develop a highly effective model. |
| DOI: | 10.48550/arxiv.2407.00416 |
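
The abstract's claim about inefficient tokenization of Bengali script can be made concrete by measuring tokenizer fertility (subword tokens per word) on parallel text. Below is a minimal sketch, not taken from the paper: the tokenizer names and the sentence pair are illustrative, and any Hugging Face tokenizer (e.g. LLaMA-3's, which is gated) could be substituted.

```python
# Minimal sketch: comparing tokenizer fertility on parallel English/Bengali
# text. Requires: pip install transformers
from transformers import AutoTokenizer

# Illustrative parallel pair; the Bengali line is a rough translation.
english = "Large language models perform well on many tasks."
bengali = "বৃহৎ ভাষা মডেলগুলি অনেক কাজে ভালো ফলাফল দেয়।"

# Illustrative, publicly available tokenizers; swap in any checkpoint.
for name in ["gpt2", "xlm-roberta-base"]:
    tok = AutoTokenizer.from_pretrained(name)
    for label, text in [("en", english), ("bn", bengali)]:
        n_tokens = len(tok.tokenize(text))
        n_words = len(text.split())
        # Fertility = subword tokens per whitespace-delimited word; higher
        # values mean longer sequences and thus more compute per sentence.
        print(f"{name:>18} {label}: {n_tokens:3d} tokens, "
              f"fertility = {n_tokens / n_words:.2f}")
```

An English-centric byte-level BPE vocabulary (such as GPT-2's) typically splits each Bengali character, which UTF-8 encodes as three bytes, into several tokens, so Bengali fertility comes out far higher than English fertility; this is the cost inflation the abstract refers to.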