L-TUNING: Synchronized Label Tuning for Prompt and Prefix in LLMs
Published in: | arXiv.org, 2024-04 |
Main authors: | Kowsher, Md; Md Shohanur Islam Sobuj; Mahmud, Asif; Nusrat Jahan Prottasha; Bhat, Prakash |
Format: | Article |
Language: | English |
Subjects: | Accuracy; Classification; Labels; Large language models; Natural language; Natural language processing; Task complexity; Training |
Online access: | Full text |
container_title | arXiv.org |
creator | Kowsher, Md; Md Shohanur Islam Sobuj; Mahmud, Asif; Nusrat Jahan Prottasha; Bhat, Prakash |
description | Efficiently fine-tuning Large Language Models (LLMs) for specific tasks presents a considerable challenge in natural language processing. Traditional methods, like prompt or prefix tuning, typically rely on arbitrary tokens for training, leading to prolonged training times and generalized token use across various class labels. To address these issues, this paper introduces L-Tuning, an efficient fine-tuning approach designed for classification tasks within the Natural Language Inference (NLI) framework. Diverging from conventional methods, L-Tuning focuses on the fine-tuning of label tokens processed through a pre-trained LLM, thereby harnessing its pre-existing semantic knowledge. This technique not only improves the fine-tuning accuracy and efficiency but also facilitates the generation of distinct label embeddings for each class, enhancing the model's training nuance. Our experimental results indicate a significant improvement in training efficiency and classification accuracy with L-Tuning compared to traditional approaches, marking a promising advancement in fine-tuning LLMs for complex language tasks. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-04 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2922662119 |
source | Free E-Journals |
subjects | Accuracy; Classification; Labels; Large language models; Natural language; Natural language processing; Task complexity; Training |
title | L-TUNING: Synchronized Label Tuning for Prompt and Prefix in LLMs |
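
The approach summarized in the description above lends itself to a short illustration. The following is a minimal, hypothetical PyTorch sketch of the general idea, deriving one distinct, trainable label representation per class from a frozen pre-trained backbone instead of optimizing arbitrary prompt or prefix tokens. It is written against a toy encoder rather than a real LLM; every name here (LabelTuner, toy_encoder, toy_tokenize, the pooling and scoring choices) is an assumption for illustration only, not the authors' actual code or hyperparameters.

```python
# Hypothetical sketch of label tuning: each class label's text is encoded by
# the frozen backbone into a distinct label embedding, and only a small
# projection on top of those embeddings is trained.
import torch
import torch.nn as nn


class LabelTuner(nn.Module):
    def __init__(self, frozen_encoder, tokenize, label_texts, hidden_dim):
        super().__init__()
        self.encoder = frozen_encoder                 # stands in for a pre-trained LLM
        for p in self.encoder.parameters():           # keep the backbone frozen
            p.requires_grad = False
        self.proj = nn.Linear(hidden_dim, hidden_dim)  # the only trainable part
        # Distinct embedding per class: run each label's text through the
        # frozen encoder once and mean-pool the hidden states.
        with torch.no_grad():
            label_embs = torch.stack(
                [self.encoder(tokenize(t)).mean(dim=0) for t in label_texts]
            )
        self.register_buffer("label_embs", label_embs)
        self.tokenize = tokenize

    def forward(self, text):
        # Encode the (premise, hypothesis) input with the frozen backbone.
        h = self.encoder(self.tokenize(text)).mean(dim=0)   # (hidden_dim,)
        tuned = self.proj(self.label_embs)                   # (num_classes, hidden_dim)
        return tuned @ h                                     # one logit per class


# Toy stand-ins so the sketch runs end to end; a real setup would plug in an
# actual pre-trained LLM and its tokenizer instead.
hidden_dim, vocab_size = 16, 997
toy_encoder = nn.Embedding(vocab_size, hidden_dim)           # "LLM" stand-in
toy_tokenize = lambda s: torch.tensor([hash(w) % vocab_size for w in s.split()])

model = LabelTuner(toy_encoder, toy_tokenize,
                   ["entailment", "neutral", "contradiction"], hidden_dim)
logits = model("a man is sleeping [SEP] a person is resting")
loss = nn.functional.cross_entropy(logits.unsqueeze(0), torch.tensor([0]))
loss.backward()  # gradients reach only the small projection layer
```

On this reading, freezing the backbone and tuning only a small transform of semantically meaningful label embeddings is where the efficiency and accuracy gains reported in the abstract would come from, relative to optimizing arbitrary soft prompt or prefix tokens from scratch.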