Proving Theorems using Incremental Learning and Hindsight Experience Replay
Saved in:
Main authors: | , , , , , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: Traditional automated theorem provers for first-order logic depend on
speed-optimized search and many handcrafted heuristics that are designed to
work best over a wide range of domains. Machine learning approaches in the
literature either depend on these traditional provers to bootstrap themselves
or fall short of comparable performance. In this paper, we propose a general
incremental learning algorithm for training domain-specific provers for
first-order logic without equality, based only on a basic given-clause
algorithm but using a learned clause-scoring function. Clauses are represented
as graphs and presented to transformer networks with spectral features. To
address the sparsity and the initial lack of training data, as well as the
lack of a natural curriculum, we adapt hindsight experience replay to theorem
proving, so that learning is possible even when no proof can be found. We show
that provers trained this way can match and sometimes surpass state-of-the-art
traditional provers on the TPTP dataset in terms of both the quantity and
quality of the proofs.
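The two ideas the abstract combines, a given-clause saturation loop driven by a clause-scoring function and hindsight relabelling of failed proof attempts, can be sketched as a toy. The sketch below uses propositional binary resolution and a clause-length heuristic as a stand-in for the paper's learned transformer scorer; all names (`resolve`, `given_clause_loop`, `hindsight_examples`) are hypothetical, not the paper's actual implementation.

```python
import heapq
import itertools

def resolve(c1, c2):
    """All binary resolvents of two clauses (frozensets of int literals,
    where a negative int is a negated atom)."""
    return [frozenset((c1 - {lit}) | (c2 - {-lit}))
            for lit in c1 if -lit in c2]

def given_clause_loop(clauses, score, max_steps=100):
    """Basic given-clause loop: repeatedly pop the highest-scored
    unprocessed clause, resolve it against all processed clauses, and
    queue the resolvents. Returns (proof_found, trace)."""
    counter = itertools.count()  # tie-breaker so the heap never compares clauses
    unprocessed = [(-score(c), next(counter), c) for c in clauses]
    heapq.heapify(unprocessed)
    processed, trace = [], []
    while unprocessed and len(trace) < max_steps:
        _, _, given = heapq.heappop(unprocessed)
        if not given:                       # empty clause derived: proof found
            return True, trace
        derived = [r for p in processed for r in resolve(given, p)]
        trace.append((given, derived))      # record the episode for learning
        processed.append(given)
        for c in derived:
            heapq.heappush(unprocessed, (-score(c), next(counter), c))
    return False, trace                     # saturated or out of steps: no proof

def hindsight_examples(trace):
    """Hindsight relabelling: even if the original goal was not proved,
    every clause derived during search is entailed by the input clauses,
    so the selections that produced it yield (derived, given) training
    pairs for the scorer."""
    return [(c, given) for given, derived in trace for c in derived]

# Toy problem: clauses {p}, {~p v q}, and the negated goal {~q};
# resolution derives the empty clause, i.e. a proof of q.
score = lambda c: -len(c)  # prefer short clauses (stand-in for learned scorer)
found, trace = given_clause_loop(
    [frozenset({1}), frozenset({-1, 2}), frozenset({-2})], score)
```

The trace is kept whether or not a proof is found, so on a failed attempt `hindsight_examples` still produces supervised data, which is the mechanism that lets training start before the prover can solve anything.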
DOI: 10.48550/arxiv.2112.10664