Enhancing Zero-Shot Crypto Sentiment With Fine-Tuned Language Model and Prompt Engineering

Bibliographic Details
Published in: IEEE Access, 2024, Vol. 12, pp. 10146-10159
Authors: Wahidur, Rahman S. M.; Tashdeed, Ishmam; Kaur, Manjit; Lee, Heung-No
Format: Article
Language: English
Online access: Full text
Description
Abstract: Blockchain technology has revolutionized the financial landscape, with cryptocurrencies seeing widespread adoption due to their decentralized and transparent nature. Because sentiments expressed on social media platforms wield substantial influence over cryptocurrency market dynamics, sentiment analysis has emerged as a crucial tool for gauging public opinion and predicting market trends. This paper explores fine-tuning techniques for large language models to enhance sentiment analysis performance. Experimental results demonstrate a significant average zero-shot performance gain of 40% on unseen tasks after fine-tuning, highlighting the technique's potential. The paper also examines the impact of instruction-based fine-tuning on models of varying scales, finding that larger models benefit from instruction tuning and achieve the highest average accuracy score of 75.16%, whereas smaller-scale models may generalize less well because fine-tuning consumes their full model capacity. To gain deeper insight into instruction effectiveness, the paper presents experiments under different instruction-tuning setups: with short, simple instructions the model achieves an average accuracy score of 72.38%, outperforming long, complex instructions by over 12%. Finally, the paper examines the relationship between fine-tuning corpus size and model performance, identifying an optimal corpus size of 6,000 data points for the highest performance across the different models. Microsoft's MiniLM, a distilled version of BERT, excels in efficient data use and performance optimization, while Google's FLAN-T5 delivers consistent, reliable performance across diverse datasets.
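To make the setup concrete, the following is a minimal sketch (not the authors' released code) of zero-shot crypto sentiment classification with an instruction-tuned FLAN-T5 checkpoint via the Hugging Face transformers library. The checkpoint name, prompt wording, and label set are illustrative assumptions; the paper's exact prompts are not reproduced in the abstract. The prompt deliberately uses the short, simple instruction style that the abstract reports outperforms long, complex instructions by over 12% in average accuracy.

```python
# Sketch of zero-shot sentiment classification with an instruction-tuned model.
# Assumptions: checkpoint "google/flan-t5-base" and this prompt template are
# chosen for illustration only, not taken from the paper.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-t5-base"  # assumed checkpoint; the paper evaluates FLAN-T5

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def classify_sentiment(tweet: str) -> str:
    # Short, simple instruction, per the abstract's finding that this style
    # beats long, complex instructions on average accuracy.
    prompt = (
        "Classify the sentiment of this tweet as positive, negative, or neutral.\n"
        f"Tweet: {tweet}\n"
        "Sentiment:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=5)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True).strip().lower()

print(classify_sentiment("Bitcoin just broke its all-time high!"))  # e.g. "positive"
```

Fine-tuning such a model on a labeled crypto-tweet corpus (the paper finds roughly 6,000 examples optimal) would follow the same prompt format, with the gold sentiment label as the target sequence.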
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3350638