Neural-Symbolic Entangled Framework for Complex Query Answering
Saved in:
Main Authors: | , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Abstract: | Answering complex queries over knowledge graphs (KGs) is an important yet
challenging task because of the incompleteness of KGs and cascading errors
during reasoning. Recent query embedding (QE) approaches embed the entities
and relations of a KG, together with first-order logic (FOL) queries, into a
low-dimensional space and answer queries by dense similarity search. However,
previous works mainly concentrate on the target answers and ignore the
usefulness of intermediate entities, which is essential for relieving the
cascading-error problem in logical query answering. In addition, these methods
are usually designed with their own geometric or distributional embeddings to
handle logical operators such as union, intersection, and negation,
sacrificing the accuracy of the basic operator, projection, and they cannot
incorporate other embedding methods into their models. In this work, we
propose a Neural and Symbolic Entangled framework (ENeSy) for complex query
answering, which enables neural and symbolic reasoning to enhance each other
and thereby alleviates cascading errors and KG incompleteness. The projection
operator in ENeSy can be any embedding method capable of link prediction,
while the other FOL operators are handled without parameters. With both neural
and symbolic reasoning results available, ENeSy answers queries in an ensemble
manner. ENeSy achieves state-of-the-art performance on several benchmarks,
especially in the setting where the model is trained only on the link
prediction task. |
DOI: | 10.48550/arxiv.2209.08779 |
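
The abstract describes answering a query step both neurally (dense similarity search over embeddings) and symbolically (lookup over observed triples), then combining the two results in an ensemble. The sketch below illustrates that general idea for a single projection step; it is not the authors' ENeSy implementation, and the toy KG, the TransE-style scoring function, and the ensemble weight `alpha` are illustrative assumptions.

```python
# Minimal sketch: ensemble a neural and a symbolic projection step.
# Not the ENeSy code; all names, sizes, and scores are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy KG: 5 entities, 2 relations, a few observed (head, rel, tail) triples.
num_entities, num_relations, dim = 5, 2, 16
observed_triples = {(0, 0, 1), (1, 1, 2), (0, 0, 3)}

# Pretend these embeddings come from any link-prediction model
# (TransE, RotatE, ...), as the abstract says the projection operator may.
entity_emb = rng.normal(size=(num_entities, dim))
relation_emb = rng.normal(size=(num_relations, dim))


def neural_projection(head: int, rel: int) -> np.ndarray:
    """Score every entity as a tail via dense similarity (TransE-style)."""
    translated = entity_emb[head] + relation_emb[rel]
    distances = np.linalg.norm(entity_emb - translated, axis=1)
    return -distances  # higher score = more plausible tail


def symbolic_projection(head: int, rel: int) -> np.ndarray:
    """Hard membership vector over entities from the observed triples."""
    scores = np.zeros(num_entities)
    for h, r, t in observed_triples:
        if h == head and r == rel:
            scores[t] = 1.0
    return scores


def ensemble_answer(head: int, rel: int, alpha: float = 0.5) -> np.ndarray:
    """Blend normalized neural scores with symbolic scores (alpha is a free weight)."""
    neural = neural_projection(head, rel)
    neural = (neural - neural.min()) / (neural.max() - neural.min() + 1e-9)
    symbolic = symbolic_projection(head, rel)
    return alpha * neural + (1 - alpha) * symbolic


if __name__ == "__main__":
    scores = ensemble_answer(head=0, rel=0)
    print("ranked tail entities:", np.argsort(-scores))
```

Per the abstract, the full framework goes further: neural and symbolic results of each step feed back into one another ("entangled") across multi-hop FOL queries, and the non-projection operators are handled without parameters; this sketch only shows the final ensembling idea for one hop.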