Affection Enhanced Relational Graph Attention Network for Sarcasm Detection
Published in: Applied sciences 2022-04, Vol. 12 (7), p. 3639
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Sarcasm detection remains a challenge for numerous Natural Language Processing (NLP) tasks, such as sentiment classification or stance prediction. Existing sarcasm detection studies attempt to capture subtle semantic incongruity patterns by using contextual information and graph information through Graph Convolutional Networks (GCNs). However, directly applying dependency information inevitably introduces noise and performs poorly in modeling long-distance or disconnected words in the dependency tree. To better learn the sentiment inconsistencies between terms, we propose an Affection Enhanced Relational Graph Attention network (ARGAT) that jointly considers affective information and dependency information. Specifically, we use Relational Graph Attention Networks (RGAT) to integrate relation information guided by a trainable matrix of relation types, and synchronously use GCNs to integrate affective information explicitly denoted by affective adjacency matrices. The use of RGAT enables information interaction between structurally related word pairs separated by long distances. With the enhancement of affective information, the proposed model can capture complex forms of sarcastic expressions. Experimental results on six benchmark datasets show that our proposed approach outperforms state-of-the-art sarcasm detection methods, with best improvements of 4.19% in accuracy and 4.33% in F1.
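The abstract's core mechanism, a GCN layer driven by an affective adjacency matrix, can be illustrated with a minimal sketch. This is not the paper's implementation: the affect scores, the divergence threshold, and the layer sizes below are all made-up assumptions, chosen only to show how sentiment-inconsistent word pairs can be wired together as graph edges and then aggregated by a single GCN update.

```python
import numpy as np

# Hypothetical per-token affect scores for a 4-token sentence
# (SenticNet-style polarities; the values are invented for illustration).
affect = np.array([0.1, -0.8, 0.7, 0.0])
n = len(affect)

# Affective adjacency: connect token pairs whose affect scores diverge,
# so sentiment-inconsistent words exchange information directly.
A = np.abs(affect[:, None] - affect[None, :])  # |a_i - a_j| for every pair
A = (A > 0.5).astype(float)                    # threshold into edges (0.5 is assumed)
A += np.eye(n)                                 # self-loops keep each token's own state

# One GCN layer: H' = ReLU(D^-1 A H W), with random features and weights.
rng = np.random.default_rng(0)
H = rng.normal(size=(n, 8))                    # token embeddings (d = 8, assumed)
W = rng.normal(size=(8, 8))                    # trainable layer weights
D_inv = np.diag(1.0 / A.sum(axis=1))           # row-normalise by node degree
H_out = np.maximum(0.0, D_inv @ A @ H @ W)     # updated token representations
```

In the full ARGAT model this affective GCN branch would run alongside an RGAT branch over the dependency tree, with the two streams combined for classification; the sketch covers only the affective-adjacency idea.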
ISSN: 2076-3417
DOI: 10.3390/app12073639