Customizing Large Language Model Generation Style using Parameter-Efficient Finetuning
One-size-fits-all large language models (LLMs) are increasingly being used to help people with their writing. However, the style these models are trained to write in may not suit all users or use cases. LLMs would be more useful as writing assistants if their idiolect could be customized to match each user. In this paper, we explore whether parameter-efficient finetuning (PEFT) with Low-Rank Adaptation can effectively guide the style of LLM generations. We use this method to customize LLaMA-2 to ten different authors and show that the generated text has lexical, syntactic, and surface alignment with the target author but struggles with content memorization. Our findings highlight the potential of PEFT to support efficient, user-level customization of LLMs.
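The method the abstract describes, training one Low-Rank Adaptation (LoRA) adapter per target author on top of a frozen LLaMA-2, maps naturally onto the Hugging Face `peft` library. The sketch below is an illustrative reconstruction under that assumption, not the authors' released code: the model ID, LoRA hyperparameters (`r`, `lora_alpha`, target modules), training settings, and the `author_corpus.txt` file are all assumptions made for the example.

```python
# Illustrative sketch: per-author style customization of LLaMA-2 with LoRA.
# Not the paper's released code; hyperparameters and file names are assumed.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

BASE = "meta-llama/Llama-2-7b-hf"  # paper finetunes LLaMA-2; exact size assumed
tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)

# LoRA: train small rank-r update matrices on the attention projections
# while the base model weights stay frozen.
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of base parameters

# One adapter per author: plain causal-LM finetuning on that author's writing.
train = load_dataset("text", data_files={"train": "author_corpus.txt"})["train"]
train = train.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                  remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-author-style", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=train,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-author-style")  # saves only the small adapter
```

Because only the adapter weights are trained and stored, each additional author costs a few megabytes rather than a full model copy, which is what makes the user-level customization the abstract argues for practical; at inference the chosen adapter is loaded back onto the shared base model.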
Saved in:

Main authors: Liu, Xinyue; Diddee, Harshita; Ippolito, Daphne
Format: Article
Language: English
Subjects: Computer Science - Computation and Language
Online access: Order full text
creator | Liu, Xinyue; Diddee, Harshita; Ippolito, Daphne |
description | One-size-fits-all large language models (LLMs) are increasingly being used to help people with their writing. However, the style these models are trained to write in may not suit all users or use cases. LLMs would be more useful as writing assistants if their idiolect could be customized to match each user. In this paper, we explore whether parameter-efficient finetuning (PEFT) with Low-Rank Adaptation can effectively guide the style of LLM generations. We use this method to customize LLaMA-2 to ten different authors and show that the generated text has lexical, syntactic, and surface alignment with the target author but struggles with content memorization. Our findings highlight the potential of PEFT to support efficient, user-level customization of LLMs. |
doi_str_mv | 10.48550/arxiv.2409.04574 |
format | Article |
creationdate | 2024-09-06 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
oa | free_for_read |
linktorsrc | https://arxiv.org/abs/2409.04574 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2409.04574 |
language | eng |
recordid | cdi_arxiv_primary_2409_04574 |
source | arXiv.org |
subjects | Computer Science - Computation and Language |
title | Customizing Large Language Model Generation Style using Parameter-Efficient Finetuning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-02T13%3A58%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Customizing%20Large%20Language%20Model%20Generation%20Style%20using%20Parameter-Efficient%20Finetuning&rft.au=Liu,%20Xinyue&rft.date=2024-09-06&rft_id=info:doi/10.48550/arxiv.2409.04574&rft_dat=%3Carxiv_GOX%3E2409_04574%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |