CultureLLM: Incorporating Cultural Differences into Large Language Models
Creators: Li, Cheng; Chen, Mengzhou; Wang, Jindong; Sitaram, Sunayana; Xie, Xing
Description: Large language models (LLMs) are reported to be biased toward certain cultures, owing to the dominance of English corpora in their training data. Since multilingual cultural data are often expensive to collect, existing efforts address this through prompt engineering or culture-specific pre-training. However, these approaches may overlook the knowledge deficiencies of low-resource cultures and can require extensive computing resources. In this paper, we propose CultureLLM, a cost-effective solution for incorporating cultural differences into LLMs. CultureLLM adopts the World Values Survey (WVS) as seed data and generates semantically equivalent training data via the proposed semantic data augmentation. Using only 50 seed samples from the WVS together with the augmented data, we fine-tune culture-specific LLMs and one unified model (CultureLLM-One) for 9 cultures, covering both high-resource and low-resource languages. Extensive experiments on 60 culture-related datasets demonstrate that CultureLLM significantly outperforms various counterparts such as GPT-3.5 (by 8.1%) and Gemini Pro (by 9.5%), with performance comparable to, or even better than, GPT-4. Our human study shows that the generated samples are semantically equivalent to the original samples, providing an effective solution for LLM augmentation. Code is released at https://github.com/Scarelette/CultureLLM.
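The description above outlines a three-step recipe: take seed survey items, generate semantically equivalent variants, and fine-tune on the expanded set. The following minimal Python sketch illustrates the data-shaping step under stated assumptions; it is not the authors' released pipeline. The seed question is an invented placeholder rather than a real WVS item, and paraphrase() is a hypothetical stand-in for the LLM-based semantic data augmentation the paper proposes.

```python
import json

# Invented placeholder seed item; real CultureLLM seeds come from the
# World Values Survey, which is not reproduced here.
SEED = [
    {"question": "How important is family in your life?",
     "answer": "Very important"},
]

def paraphrase(question: str) -> list[str]:
    """Return meaning-preserving rewrites of a survey question.

    Hypothetical stand-in: the paper's semantic data augmentation uses
    an LLM to generate equivalents; fixed templates suffice here to
    show the shape of the output.
    """
    core = question.rstrip("?")
    return [
        f"In your own view, {core[0].lower()}{core[1:]}?",
        f"Generally speaking, {core[0].lower()}{core[1:]}?",
    ]

def build_records(seed: list[dict]) -> list[dict]:
    """Pair each original and augmented question with the seed answer,
    yielding prompt/completion pairs for supervised fine-tuning."""
    records = []
    for item in seed:
        for prompt in [item["question"], *paraphrase(item["question"])]:
            records.append({"prompt": prompt,
                            "completion": item["answer"]})
    return records

if __name__ == "__main__":
    for record in build_records(SEED):
        print(json.dumps(record))
```

In the paper's setting, roughly 50 seed samples per culture are expanded this way before fine-tuning culture-specific models; the fine-tuning call itself is omitted from the sketch.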
DOI: 10.48550/arxiv.2402.10946
Format: Article
Date: 2024-02-08
Rights: CC BY 4.0 (open access)
Language: English
Source: arXiv.org
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Learning