Perform Like an Engine: A Closed-Loop Neural-Symbolic Learning Framework for Knowledge Graph Inference
Main authors: | , , , |
---|---|
Format: | Article |
Language: | English |
Abstract: | Knowledge graph (KG) inference aims to address the inherent
incompleteness of KGs; existing approaches include rule learning-based models
and KG embedding (KGE) models. However, rule learning-based models suffer from
low efficiency and poor generalization, while KGE models lack interpretability.
To address these challenges, we propose EngineKG, a novel and effective
closed-loop neural-symbolic learning framework that couples our KGE and rule
learning modules. The KGE module exploits symbolic rules and paths to enhance
the semantic association between entities and relations, improving both KG
embeddings and interpretability. The rule learning module introduces a novel
rule pruning mechanism that takes paths as initial candidate rules and employs
KG embeddings together with concepts to extract higher-quality rules.
Experimental results on four real-world datasets show that our model
outperforms the relevant baselines on link prediction tasks, demonstrating the
superiority of our KG inference model in a neural-symbolic learning fashion. |
DOI: | 10.48550/arxiv.2112.01040 |
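The abstract's rule-pruning idea, where KG paths serve as initial candidate rules and KG embeddings score how plausible each rule is, can be illustrated with a minimal sketch. Everything below (the TransE-style additive composition, the cosine scoring, the relation names, and the threshold) is an illustrative assumption for exposition, not the paper's actual EngineKG implementation.

```python
# Hypothetical sketch: candidate rules harvested from KG paths are scored
# with relation embeddings, and low-scoring candidates are pruned.
# The TransE-style additive composition and cosine scoring are assumptions
# for illustration only, not the EngineKG method itself.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8
relations = ["born_in", "city_of", "nationality"]
# Toy relation embeddings (translation vectors, TransE-style).
rel_emb = {r: rng.normal(size=DIM) for r in relations}

def rule_score(body, head):
    """Score a rule body -> head by how closely the summed body
    translations align with the head translation (cosine similarity)."""
    composed = np.sum([rel_emb[r] for r in body], axis=0)
    h = rel_emb[head]
    return float(composed @ h /
                 (np.linalg.norm(composed) * np.linalg.norm(h)))

# Candidate rules taken from 2-hop paths whose endpoints are linked
# by the target relation `nationality` (hypothetical examples).
candidates = [(("born_in", "city_of"), "nationality"),
              (("lives_in", "city_of"), "nationality")]
rel_emb["lives_in"] = rng.normal(size=DIM)

threshold = 0.0  # illustrative; a real system would tune this value
kept = [(body, head, rule_score(body, head))
        for body, head in candidates
        if rule_score(body, head) >= threshold]
for body, head, s in kept:
    print(" & ".join(body), "->", head, f"(score={s:.3f})")
```

A real pipeline would additionally filter candidates by entity concepts (types) as the abstract mentions, and feed the surviving rules back into the KGE module, closing the loop.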