CoCoP: Enhancing Text Classification with LLM through Code Completion Prompt

Text classification is a fundamental task in natural language processing (NLP), and large language models (LLMs) have demonstrated their capability to perform this task across various domains. However, the performance of LLMs heavily depends on the quality of their input prompts. Recent studies have also shown that LLMs achieve remarkable results in code-related tasks. To leverage the capabilities of LLMs in text classification, we propose the Code Completion Prompt (CoCoP) method, which transforms the text classification problem into a code completion task. CoCoP significantly improves text classification performance across diverse datasets by utilizing LLMs' code-completion capability; for instance, it improves accuracy on the SST2 dataset by more than 20%. Moreover, when CoCoP is integrated with LLMs specifically designed for code-related tasks (code models), such as CodeLLaMA, it demonstrates performance better than or comparable to few-shot learning techniques while using only one-tenth of the model size. The source code of our proposed method will be made publicly available upon acceptance of the paper.
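The abstract's core idea, recasting a classification query as unfinished code so the model's completion becomes the prediction, can be sketched as follows. This is an illustrative reconstruction, not the authors' actual prompt template: the function name `build_cocop_prompt` and the assertion-style format are assumptions for the sake of the example.

```python
def build_cocop_prompt(examples, query):
    """Render a text-classification task as a code-completion prompt.

    Few-shot examples become satisfied assertions; the final assertion
    is left unfinished so a code LLM naturally completes it with a label.
    """
    lines = [
        "# Sentiment classification expressed as Python code.",
        "def classify(review: str) -> str: ...",
        "",
    ]
    for text, label in examples:
        lines.append(f"assert classify({text!r}) == {label!r}")
    # Leave the label open: whatever the model generates next is the prediction.
    lines.append(f'assert classify({query!r}) == "')
    return "\n".join(lines)


demo = build_cocop_prompt(
    [("a gripping, beautifully shot film", "positive"),
     ("tedious and badly paced", "negative")],
    "an instant classic",
)
print(demo)
```

Under this framing, a code-specialized model such as CodeLLaMA only needs to emit the closing string literal, which is why the paper can compare it against conventional few-shot prompting at a fraction of the model size.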

Bibliographic Details
Published in: arXiv.org, 2024-11
Main authors: Mohammad Mahdi Mohajeri; Dousti, Mohammad Javad; Majid Nili Ahmadabadi
Format: Article
Language: English
Online access: Full text
Identifier: EISSN 2331-8422
Source: Free E-Journals
Subjects: Classification; Datasets; Large language models; Machine learning; Natural language processing; Source code; Text categorization