Bert-based graph unlinked embedding for sentiment analysis

Bibliographic Details
Published in: Complex & Intelligent Systems, 2024-04, Vol. 10 (2), pp. 2627-2638
Main authors: Jin, Youkai; Zhao, Anping
Format: Article
Language: English
Online access: Full text
Description
Abstract: Numerous graph neural network (GNN) models have been applied to sentiment analysis in recent years. Nevertheless, it remains challenging to address over-smoothing in GNN node representations, to learn both global and local information within the graph structure more effectively, and to improve model efficiency so that it scales to large text sentiment corpora. To tackle these issues, we propose a novel Bert-based unlinked graph embedding (BUGE) model for sentiment analysis. First, the model constructs a comprehensive text sentiment heterogeneous graph that more effectively captures global co-occurrence information between words. Next, using specific sampling strategies, it efficiently preserves both global and local information within the graph structure, enabling nodes to receive richer feature information. During representation learning, BUGE relies solely on attention mechanisms, without graph convolutions or aggregation operators, thus avoiding the over-smoothing problem associated with node aggregation. This improves training efficiency and reduces memory requirements. Extensive experimental results and evaluations demonstrate that the Bert-based unlinked graph embedding method is highly effective for sentiment analysis, especially on large text sentiment corpora.
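The abstract outlines three stages: building a heterogeneous word co-occurrence graph, sampling sequences that carry a node's global and local context, and encoding those sequences with attention alone rather than graph convolution. The Python sketch below illustrates only the general idea under stated assumptions; the positive-PMI edge weighting, the random-walk sampler, and all function names are illustrative, not the paper's exact formulation, and in BUGE the sampled sequences would subsequently be fed to a BERT-style attention encoder.

```python
import math
import random
from collections import Counter, defaultdict

def build_cooccurrence_graph(docs, window=5):
    """Build word-word edges weighted by positive PMI over sliding windows.
    (Illustrative stand-in for the paper's heterogeneous graph construction.)"""
    word_count = Counter()   # number of windows containing each word
    pair_count = Counter()   # number of windows containing each word pair
    total_windows = 0
    for doc in docs:
        tokens = doc.lower().split()
        for i in range(max(1, len(tokens) - window + 1)):
            span = set(tokens[i:i + window])
            total_windows += 1
            for w in span:
                word_count[w] += 1
            for a in span:
                for b in span:
                    if a < b:
                        pair_count[(a, b)] += 1
    edges = defaultdict(list)
    for (a, b), n_ab in pair_count.items():
        # PMI = log( P(a,b) / (P(a) * P(b)) )
        pmi = math.log(n_ab * total_windows / (word_count[a] * word_count[b]))
        if pmi > 0:  # keep only positively associated word pairs
            edges[a].append((b, pmi))
            edges[b].append((a, pmi))
    return edges

def sample_context_sequence(edges, start, length=8):
    """PMI-weighted random walk that turns a node's neighborhood into a token
    sequence; an attention-only encoder can consume the sequence directly,
    so no graph convolution or neighbor aggregation is required."""
    seq = [start]
    while len(seq) < length and edges[seq[-1]]:
        neighbors, weights = zip(*edges[seq[-1]])
        seq.append(random.choices(neighbors, weights=weights, k=1)[0])
    return seq

docs = [
    "the acting was great and the plot was moving",
    "the plot was dull although the acting was great",
]
graph = build_cooccurrence_graph(docs, window=3)
print(sample_context_sequence(graph, "great"))
```

Because each node is represented by a sampled token sequence rather than by repeatedly aggregating neighbor embeddings, a standard attention encoder can process it directly, which is how an attention-only design sidesteps the over-smoothing that repeated aggregation causes.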
ISSN: 2199-4536; 2198-6053
DOI: 10.1007/s40747-023-01289-9