Multi-view knowledge graph fusion via knowledge-aware attentional graph neural network



Bibliographic Details
Published in: Applied Intelligence (Dordrecht, Netherlands), 2023-02, Vol. 53 (4), p. 3652-3671
Main Authors: Huang, Zhichao, Li, Xutao, Ye, Yunming, Zhang, Baoquan, Xu, Guangning, Gan, Wensheng
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Abstract: Knowledge graphs (KGs) play a vital role in natural language processing (NLP) and can serve several downstream tasks. Because different views of KGs are usually constructed independently, multi-view knowledge graph fusion (MVKGF) has become a hotspot. Although multi-view learning has been studied extensively over the past decades, MVKGF remains poorly tackled because of the heterogeneous relations across multi-view KGs. To address MVKGF, entity alignment is the most studied approach. Existing entity alignment methods are dominated by embedding-based methods, such as TransE and Graph Neural Networks (GNNs), where alignment is achieved by measuring the similarities between entity embeddings. However, most previous approaches suffer from the issues of diverse knowledge facts and complex neighboring structures. In this paper, we propose a novel Knowledge-aware Attentional Graph Neural Network (KAGNN) model to carefully incorporate both knowledge facts and neighboring structures. In particular, a knowledge-aware attention mechanism is designed to preserve the original semantics and determine the importance of each knowledge fact. Furthermore, a three-layered GCN with highway gates is adopted to learn better entity representations from the neighboring structure information. Thus, our model can be regarded as a multi-view extension of GNN. We validate our model on three cross-lingual datasets, and the results show that our model beats the state-of-the-art baselines by a large margin.
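The paper's implementation is not part of this record. Purely as an illustration of two components the abstract names, the sketch below shows a generic highway-gated GCN layer and cosine-similarity entity alignment in NumPy; all function names, shapes, and the exact gating formulation here are assumptions, not the authors' code.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, standard in GCNs.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def gcn_highway_layer(H, A_norm, W, W_gate, b_gate):
    # One GCN layer with a highway gate: aggregate neighbor features,
    # then let a learned sigmoid gate T mix the new representation with
    # the layer input so original entity semantics can pass through.
    H_new = np.maximum(A_norm @ H @ W, 0.0)            # ReLU(GCN propagation)
    T = 1.0 / (1.0 + np.exp(-(H @ W_gate + b_gate)))   # transform gate in (0, 1)
    return T * H_new + (1.0 - T) * H                   # highway mixing

def align_by_similarity(emb_src, emb_tgt):
    # Align each source entity to its nearest target entity
    # by cosine similarity between the learned embeddings.
    a = emb_src / np.linalg.norm(emb_src, axis=1, keepdims=True)
    b = emb_tgt / np.linalg.norm(emb_tgt, axis=1, keepdims=True)
    return (a @ b.T).argmax(axis=1)
```

Stacking `gcn_highway_layer` three times mirrors the three-layered GCN the abstract describes; the highway gate is what lets deeper stacking avoid over-smoothing the entity features.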
ISSN:0924-669X
1573-7497
DOI:10.1007/s10489-022-03667-1