Efficient Continual Pre-training of LLMs for Low-resource Languages
Saved in:
Main authors: | Nag, Arijit ; Chakrabarti, Soumen ; Mukherjee, Animesh ; Ganguly, Niloy |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computation and Language ; Computer Science - Learning |
Online access: | Order full text |
creator | Nag, Arijit ; Chakrabarti, Soumen ; Mukherjee, Animesh ; Ganguly, Niloy |
description | Open-source Large Language Models (OsLLMs) propel the
democratization of natural language research by offering the flexibility to
augment or update model parameters for performance improvement. Nevertheless,
like proprietary LLMs, OsLLMs offer poorer performance on low-resource
languages (LRLs) than on high-resource languages (HRLs), owing to smaller
amounts of training data and underrepresented vocabulary. On the other hand,
continual pre-training (CPT) with large amounts of language-specific data is a
costly proposition in terms of data acquisition and computational resources.
Our goal is to drastically reduce CPT cost. To that end, we first develop a new
algorithm to select a subset of texts from a larger corpus. We show the
effectiveness of our technique using very little CPT data. In search of further
improvement, we design a new algorithm to select tokens to include in the LLM
vocabulary. We experiment with the recent Llama-3 model and nine Indian
languages with diverse scripts and extents of resource availability. For
evaluation, we use IndicGenBench, a generation-task benchmark dataset for Indic
languages. We experiment with various CPT corpora and augmented vocabulary
sizes and offer insights across language families. |
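The abstract only names the two selection steps; the concrete algorithms are in
the paper itself. As a rough illustration of the kind of pipeline it describes,
the hedged Python sketch below selects a small CPT subset by scoring candidate
texts with a caller-supplied usefulness score (assumed here: base-model
perplexity) and picks vocabulary tokens by their frequency in the
target-language corpus. The function names, scoring criteria, and thresholds
are assumptions for illustration, not the authors' method.

```python
# Hypothetical sketch of the two selection steps named in the abstract.
# Assumptions (not from the paper): (1) CPT texts are ranked by a score
# function such as base-model perplexity; (2) vocabulary candidates are
# frequent words that the current tokenizer fragments into many subwords.

from collections import Counter

def select_cpt_subset(texts, score_fn, budget_tokens):
    """Pick a small, informative subset of texts for continual pre-training.

    texts: candidate documents in the target language.
    score_fn: maps a text to a usefulness score (assumed: perplexity under
              the base LLM, so higher = less well modeled = more informative).
    budget_tokens: total token budget for the reduced CPT corpus.
    """
    subset, used = [], 0
    for text in sorted(texts, key=score_fn, reverse=True):
        n = len(text.split())  # crude whitespace token count, for illustration
        if used + n <= budget_tokens:
            subset.append(text)
            used += n
    return subset

def select_new_tokens(corpus, tokenizer, k, min_pieces=3):
    """Choose k candidate tokens to add to the LLM vocabulary.

    Assumption: words that the existing tokenizer (HuggingFace-style, with a
    .tokenize() method) splits into min_pieces or more subwords are treated
    as "underrepresented" and ranked by corpus frequency.
    """
    counts = Counter()
    for text in corpus:
        for word in text.split():
            if len(tokenizer.tokenize(word)) >= min_pieces:
                counts[word] += 1
    return [word for word, _ in counts.most_common(k)]
```

A real pipeline would compute perplexity with the Llama-3 base model and merge
the selected tokens into its tokenizer before resuming pre-training; the record
here does not specify either detail.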
doi_str_mv | 10.48550/arxiv.2412.10244 |
format | Article |
creationdate | 2024-12-13 |
rights | http://creativecommons.org/licenses/by/4.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2412.10244 |
language | eng |
recordid | cdi_arxiv_primary_2412_10244 |
source | arXiv.org |
subjects | Computer Science - Computation and Language ; Computer Science - Learning |
title | Efficient Continual Pre-training of LLMs for Low-resource Languages |
url | https://arxiv.org/abs/2412.10244 |