Learning to activate logic rules for textual reasoning

Bibliographic Details
Published in: Neural Networks, 2018-10, Vol. 106, pp. 42-49
Main Authors: Yao, Yiqun; Xu, Jiaming; Shi, Jing; Xu, Bo
Format: Article
Language: English
Online Access: Full text
Description
Summary: Most current textual reasoning models cannot learn human-like reasoning processes, and thus lack interpretability and logical accuracy. To help address this issue, we propose a novel reasoning model that learns to activate logic rules explicitly via deep reinforcement learning. It takes the form of Memory Networks but features a special memory that stores relational tuples, mimicking the “Image Schema” in human cognitive activities. We redefine textual reasoning as a sequential decision-making process that modifies or retrieves from the memory, with logic rules serving as state-transition functions. Activating logic rules for reasoning involves two problems, variable binding and relation activating, and this work is a first step toward solving them jointly. Our model achieves an average error rate of 0.7% on bAbI-20, a widely used synthetic reasoning benchmark, using fewer than 1k training samples and no supporting facts.
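
The abstract frames reasoning as a sequential decision process over a memory of relational tuples, with logic rules acting as state-transition functions. The following Python sketch is a minimal illustration of that idea only, not the paper's implementation: all names (Memory, transitivity, reason) and the toy "left_of" relation are hypothetical, and the rule-selection policy, which the paper learns with deep reinforcement learning, is replaced by a fixed greedy loop.

from dataclasses import dataclass, field

@dataclass
class Memory:
    """Stores (subject, relation, object) tuples, loosely mimicking the
    "Image Schema" memory described in the abstract."""
    tuples: set = field(default_factory=set)

    def add(self, t):
        self.tuples.add(t)

    def query(self, subject, relation):
        # Retrieval step: return objects linked to `subject` via `relation`.
        return {o for (s, r, o) in self.tuples if s == subject and r == relation}

def transitivity(mem):
    # Hypothetical logic rule as a state-transition function:
    # (x, left_of, y) and (y, left_of, z) => (x, left_of, z).
    # Matching x, y, z against stored tuples is the variable-binding part.
    new = set()
    for (x, r1, y) in mem.tuples:
        for (y2, r2, z) in mem.tuples:
            if r1 == "left_of" and r2 == "left_of" and y == y2:
                new.add((x, "left_of", z))
    return new - mem.tuples

def reason(mem, rules, steps=3):
    # Sequential decision process: at each step a rule is "activated" and
    # its conclusions are written back into the memory. The paper learns
    # which rule to activate; this sketch simply fires every rule in order.
    for _ in range(steps):
        for rule in rules:
            for t in rule(mem):
                mem.add(t)

mem = Memory()
mem.add(("A", "left_of", "B"))
mem.add(("B", "left_of", "C"))
reason(mem, [transitivity])
print(mem.query("A", "left_of"))  # {'B', 'C'}

Roughly, matching variables against stored tuples corresponds to what the abstract calls variable binding, and choosing which rule to fire corresponds to relation activating.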
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/j.neunet.2018.06.012