Harnessing domain insights: A prompt knowledge tuning method for aspect-based sentiment analysis



Bibliographic details
Published in: Knowledge-Based Systems, 2024-08, Vol. 298, p. 111975, Article 111975
Main authors: Sun, Xinjie; Zhang, Kai; Liu, Qi; Bao, Meikai; Chen, Yanjiang
Format: Article
Language: English
Online access: Full text
Description
Abstract: Aspect-based sentiment analysis (ABSA) aims to predict the sentiment polarity of specific aspects of a given review. Recently, prompt tuning has been widely explored and has achieved remarkable success in improving semantic comprehension across several NLP tasks. However, most existing methods focus on semantic tuning for various tasks and overlook domain knowledge, such as common-sense background knowledge. This not only limits a model's ability to understand and apply domain knowledge but also often prevents it from fully utilising domain-specific information, resulting in poor semantic quality and inferior performance. To bridge this gap, we conducted a systematic study of Prompt Tuning with Domain Knowledge (PTDK) for ABSA, which aims to design efficient prompts that guide the model to learn knowledge about specific aspects in ABSA. Specifically, we first fine-tune Large Language Models (LLMs) using hard prompts, which enhances their ability to extract enriched domain insights from the knowledge base. Additionally, we employ a Co-occurrence Gate to meticulously filter and refine the domain knowledge. This mechanism enhances the domain representation capability of the prompt template by selecting, for each comment, the most similar parts independently generated from a vast amount of domain knowledge. Simultaneously, a hybrid prompt template is constructed. This template integrates hard prompts and trainable soft prompts to compensate for the lack of specificity in hard prompts and to facilitate the integration of specific masks into various domain vectors. This hybrid strategy further enhances the ability to utilise domain-specific knowledge when performing ABSA. Experimental results on three public datasets (Restaurant, Laptop, and Twitter) demonstrate that our method consistently outperforms current state-of-the-art baselines in all cases.
The accuracies were 88.63%, 82.65%, and 81.65%, respectively, and F1-scores were 83.38%, 79.68%, and 80.36%, respectively. This translates into an average increase in accuracy of 0.97% and an enhancement in the F1-score of 1.03%. These enhancements not only validate the efficacy of our approach but also have substantial practical implications for real-world scenarios that require sophisticated sentiment analysis, such as the evaluation of customer feedback on e-commerce platforms.
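The Co-occurrence Gate described in the abstract, which selects the knowledge most similar to each comment, can be illustrated with a minimal sketch. This is not the authors' implementation: the toy embeddings, the snippet names, and the use of cosine similarity with a fixed top-k cutoff are all illustrative assumptions; in the actual method the representations would come from the fine-tuned LLM.

```python
# Illustrative sketch of a co-occurrence-style gate (assumption, not the
# authors' code): rank candidate domain-knowledge snippets by cosine
# similarity to a review embedding and keep only the top_k most similar,
# so the prompt template carries the most relevant domain knowledge.
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def cooccurrence_gate(review_vec, knowledge, top_k=2):
    """Return the names of the top_k knowledge snippets whose embeddings
    are most similar to the review embedding."""
    scored = sorted(knowledge.items(),
                    key=lambda kv: cosine(review_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

# Toy example: hypothetical embeddings for one laptop review and three
# candidate knowledge snippets.
review = [0.9, 0.1, 0.0]
snippets = {
    "battery life facts": [0.8, 0.2, 0.1],
    "restaurant etiquette": [0.0, 0.1, 0.9],
    "laptop hardware terms": [0.7, 0.3, 0.0],
}
print(cooccurrence_gate(review, snippets))
# → ['battery life facts', 'laptop hardware terms']
```

The gated snippets would then be inserted into the hybrid prompt template alongside the trainable soft-prompt tokens, so that only domain knowledge relevant to the specific comment influences the masked-prediction step.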
ISSN: 0950-7051, 1872-7409
DOI: 10.1016/j.knosys.2024.111975