CoCoP: Enhancing Text Classification with LLM through Code Completion Prompt
Format: Article
Language: English
Abstract: Text classification is a fundamental task in natural language processing (NLP), and large language models (LLMs) have demonstrated their capability to perform this task across various domains. However, the performance of LLMs heavily depends on the quality of their input prompts. Recent studies have also shown that LLMs achieve remarkable results on code-related tasks. To leverage these capabilities for text classification, we propose the Code Completion Prompt (CoCoP) method, which transforms the text classification problem into a code completion task. By exploiting LLMs' code-completion ability, CoCoP significantly improves classification performance across diverse datasets; for instance, it improves accuracy on the SST2 dataset by more than 20%. Moreover, when CoCoP is integrated with LLMs specifically designed for code-related tasks (code models), such as CodeLLaMA, it achieves performance better than or comparable to few-shot learning techniques while using only one-tenth of the model size. The source code of our proposed method will be made publicly available upon acceptance of the paper.
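To make the idea concrete, the core of CoCoP is to render the classification input as unfinished code so that a (code) LLM fills in the label. The sketch below is a minimal, hypothetical illustration of such a prompt builder; the paper's actual template, label names, and demonstration format are assumptions here, not taken from the source.

```python
# Hypothetical sketch: frame text classification as code completion.
# The exact CoCoP template is not given in the abstract; this is an
# illustrative guess at the general shape of such a prompt.

def build_cocop_prompt(demos, query, labels):
    """Render few-shot demonstrations and the query as partial Python
    code, leaving the final label string for the LLM to complete."""
    lines = [f"labels = {labels}"]
    for text, label in demos:
        lines.append(f'text = "{text}"')
        lines.append(f'label = "{label}"')
    lines.append(f'text = "{query}"')
    lines.append('label = "')  # the model completes the label here
    return "\n".join(lines)

# Example usage with made-up SST2-style sentiment demonstrations.
demos = [("a delightful film", "positive"), ("a dull slog", "negative")]
prompt = build_cocop_prompt(demos, "an instant classic",
                            ["positive", "negative"])
print(prompt)
```

Feeding such a prompt to a code model and reading back the completed string (up to the closing quote) would yield the predicted label.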
DOI: 10.48550/arxiv.2411.08979