Challenges in Adapting Multilingual LLMs to Low-Resource Languages using LoRA PEFT Tuning
Large Language Models (LLMs) have demonstrated remarkable multilingual capabilities, yet challenges persist in adapting these models for low-resource languages. In this study, we investigate the effects of Low-Rank Adaptation (LoRA) Parameter-Efficient Fine-Tuning (PEFT) on multilingual Gemma models...
Saved in:
Main authors: | Khade, Omkar; Jagdale, Shruti; Phaltankar, Abhishek; Takalikar, Gauri; Joshi, Raviraj |
Format: | Article |
Language: | eng |
Keywords: | Computer Science - Computation and Language; Computer Science - Learning |
Online access: | Order full text |
creator | Khade, Omkar; Jagdale, Shruti; Phaltankar, Abhishek; Takalikar, Gauri; Joshi, Raviraj |
description | Large Language Models (LLMs) have demonstrated remarkable multilingual
capabilities, yet challenges persist in adapting these models for low-resource
languages. In this study, we investigate the effects of Low-Rank Adaptation
(LoRA) Parameter-Efficient Fine-Tuning (PEFT) on multilingual Gemma models for
Marathi, a language with limited resources. Using a translated Alpaca dataset
with 52,000 instruction-response pairs, our findings reveal that while
evaluation metrics often show a performance decline post-fine-tuning, manual
assessments frequently suggest that the fine-tuned models outperform their
original counterparts. The observations indicate improvements in target
language generation capabilities but a reduction in reasoning abilities
following language adaptation. These results underscore the need for improved
evaluation methodologies and the creation of high-quality native datasets to
accurately assess language-specific model performance in low-resource settings. |
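The description above outlines the paper's setup: LoRA PEFT fine-tuning of a multilingual Gemma model on a 52,000-pair translated Alpaca instruction dataset for Marathi. The snippet below is a minimal sketch of how such a setup is commonly implemented with the Hugging Face transformers, peft, and datasets libraries; the base model name, dataset file path, prompt template, and all hyperparameters are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of LoRA PEFT instruction tuning for a Gemma-family model on an
# Alpaca-style dataset. Model name, file path, and hyperparameters are assumptions.
from transformers import (AutoModelForCausalLM, AutoTokenizer, TrainingArguments,
                          Trainer, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

model_name = "google/gemma-2b"  # assumed base model; the paper uses multilingual Gemma models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA: train small low-rank adapters on the attention projections while the
# base model weights stay frozen.
lora_config = LoraConfig(
    r=16,                                 # adapter rank (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # assumed target layers
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only the adapter weights are trainable

def format_example(example):
    # Alpaca-style prompt; a translated dataset would carry Marathi text in the
    # same instruction/input/output fields.
    prompt = (f"### Instruction:\n{example['instruction']}\n\n"
              f"### Input:\n{example.get('input', '')}\n\n"
              f"### Response:\n{example['output']}")
    return tokenizer(prompt, truncation=True, max_length=512)

# "marathi_alpaca.json" is a placeholder path for the translated Alpaca data.
dataset = load_dataset("json", data_files="marathi_alpaca.json")["train"]
tokenized = dataset.map(format_example, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gemma-marathi-lora",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    # Causal-LM collator: pads each batch and copies input_ids into labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("gemma-marathi-lora")  # saves only the LoRA adapter weights
```

After training, the saved adapter can be loaded back onto the frozen base model for generation, which is the kind of adapted model whose outputs the paper evaluates with both automatic metrics and manual assessment.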
doi_str_mv | 10.48550/arxiv.2411.18571 |
format | Article |
creationdate | 2024-11-27 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
link | https://arxiv.org/abs/2411.18571 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2411.18571 |
language | eng |
recordid | cdi_arxiv_primary_2411_18571 |
source | arXiv.org |
subjects | Computer Science - Computation and Language; Computer Science - Learning |
title | Challenges in Adapting Multilingual LLMs to Low-Resource Languages using LoRA PEFT Tuning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-18T12%3A35%3A49IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Challenges%20in%20Adapting%20Multilingual%20LLMs%20to%20Low-Resource%20Languages%20using%20LoRA%20PEFT%20Tuning&rft.au=Khade,%20Omkar&rft.date=2024-11-27&rft_id=info:doi/10.48550/arxiv.2411.18571&rft_dat=%3Carxiv_GOX%3E2411_18571%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |