LLMEmbed: Rethinking Lightweight LLM's Genuine Function in Text Classification

With the boom of Large Language Models (LLMs), prompt-learning has become a promising method researched across various areas. Recently, many attempts based on prompt-learning have been made to improve the performance of text classification. However, most of these methods are based on heuristic Chain-of-Thought (CoT) and tend to be more complex but less efficient. In this paper, we rethink the LLM-based text classification methodology and propose a simple and effective transfer learning strategy, namely LLMEmbed, to address this classical but challenging task. Specifically, we first study how to properly extract and fuse text embeddings from various lightweight LLMs at different network depths to improve their robustness and discrimination, and then use such embeddings to train the classifier. We perform extensive experiments on publicly available datasets, and the results show that LLMEmbed achieves strong performance while enjoying low training overhead with lightweight LLM backbones, compared to recent methods based on larger LLMs (e.g. GPT-3) and sophisticated prompt-based strategies. LLMEmbed achieves adequate accuracy on publicly available benchmarks without any fine-tuning while using merely 4% of the model parameters, 1.8% of the electricity consumption, and 1.5% of the runtime of its counterparts. Code is available at: https://github.com/ChunLiu-cs/LLMEmbed-ACL2024.
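
The strategy described in the abstract (pool hidden states from a frozen lightweight LLM at several network depths, fuse the pooled vectors, and train a simple classifier on the fused embeddings) can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the authors' implementation (see the linked repository): the backbone checkpoint, the pooled layers, mean-pooling as the fusion step, the logistic-regression head, and the toy data are all placeholders.

    # Minimal sketch of an LLMEmbed-style pipeline: pool hidden states from
    # several depths of a frozen lightweight LLM, fuse them, and train a
    # simple classifier on the fused embeddings. Checkpoint, layers, pooling,
    # and classifier are illustrative assumptions, not the paper's exact setup.
    import torch
    from transformers import AutoModel, AutoTokenizer
    from sklearn.linear_model import LogisticRegression

    MODEL_NAME = "gpt2"          # placeholder; swap in any lightweight LLM checkpoint
    POOL_LAYERS = [-1, -2, -4]   # network depths whose hidden states are fused

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token  # needed for batched padding
    model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True).eval()

    @torch.no_grad()
    def embed(texts):
        """Masked mean-pool hidden states at several depths and concatenate them."""
        batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        hidden = model(**batch).hidden_states          # (num_layers + 1) x [B, T, H]
        mask = batch["attention_mask"].unsqueeze(-1)   # ignore padding tokens
        pooled = [(hidden[l] * mask).sum(1) / mask.sum(1) for l in POOL_LAYERS]
        return torch.cat(pooled, dim=-1).numpy()       # fused embedding [B, len(POOL_LAYERS)*H]

    # Toy data; in practice these are the train/test splits of a benchmark dataset.
    train_texts = ["a gripping, well-acted film", "dull and far too long",
                   "one of the best of the year", "a complete waste of time"]
    train_labels = [1, 0, 1, 0]

    # The LLM stays frozen; only this lightweight classifier is trained.
    clf = LogisticRegression(max_iter=1000).fit(embed(train_texts), train_labels)
    print(clf.predict(embed(["surprisingly enjoyable", "painfully boring"])))

The point of this design, as the abstract argues, is that the representational work is done by the frozen lightweight backbone, so training cost reduces to fitting a small classification head rather than fine-tuning or prompting a large model.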

Bibliographic details
Authors: Liu, Chun; Zhang, Hongguang; Zhao, Kainan; Ju, Xinghai; Yang, Lin
Format: Article
Language: English
Subjects: Computer Science - Computation and Language
Online access: Order full text
DOI: 10.48550/arxiv.2406.03725
Published: 2024-06-05
Source: arXiv.org