Proof Artifact Co-training for Theorem Proving with Language Models
Saved in:
Main Authors: | , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Abstract: | Labeled data for imitation learning of theorem proving in large libraries of
formalized mathematics is scarce, as such libraries require years of
concentrated effort by human specialists to build. This is particularly
challenging when applying large Transformer language models to tactic
prediction, because the scaling of performance with respect to model size is
quickly disrupted in the data-scarce, easily-overfitted regime. We propose PACT
(Proof Artifact Co-Training), a general methodology for
extracting abundant self-supervised data from kernel-level proof terms for
co-training alongside the usual tactic prediction objective. We apply this
methodology to Lean, an interactive proof assistant which hosts some of the
most sophisticated formalized mathematics to date. We instrument Lean with a
neural theorem prover driven by a Transformer language model and show that PACT
improves theorem proving success rate on a held-out suite of test theorems from
32% to 48%. |
---|---|
DOI: | 10.48550/arxiv.2102.06203 |
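
The abstract describes PACT at a high level: self-supervised examples are mined from kernel-level proof terms and mixed into training alongside the usual tactic-prediction objective. The sketch below illustrates, in Python, what such a co-training data mixture could look like; the prompt formats, function names, and mixing scheme are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of a PACT-style co-training mixture, assuming a simple
# prompt/completion format. All names and formats are hypothetical
# illustrations, not the paper's actual data pipeline.
import random
from typing import Dict, Iterator, List

def tactic_example(goal: str, tactic: str) -> Dict[str, str]:
    """Supervised tactic-prediction example: given a pretty-printed goal,
    the model should emit the next tactic."""
    return {"prompt": f"GOAL {goal} PROOFSTEP", "completion": f" {tactic}"}

def proof_term_examples(statement: str, proof_term: str, subterm: str) -> List[Dict[str, str]]:
    """Self-supervised examples mined from a kernel-level proof term,
    e.g. recovering a masked subterm, or the whole term from the statement."""
    masked = proof_term.replace(subterm, "<MASK>", 1)
    return [
        {"prompt": f"TERM {masked} PREDICTMASK", "completion": f" {subterm}"},
        {"prompt": f"THEOREM {statement} PROOFTERM", "completion": f" {proof_term}"},
    ]

def cotraining_stream(tactic_data: List[Dict[str, str]],
                      artifact_data: List[Dict[str, str]],
                      tactic_ratio: float = 0.5) -> Iterator[Dict[str, str]]:
    """Interleave the two objectives so a single language model is trained
    on both, per the co-training idea in the abstract."""
    while True:
        pool = tactic_data if random.random() < tactic_ratio else artifact_data
        yield random.choice(pool)

if __name__ == "__main__":
    tactics = [tactic_example("⊢ a + b = b + a", "exact add_comm a b")]
    artifacts = proof_term_examples("a + b = b + a", "add_comm a b", "add_comm")
    stream = cotraining_stream(tactics, artifacts)
    for _ in range(3):
        print(next(stream))
```

Under this reading, the shared objective is ordinary left-to-right language modeling over such prompt/completion pairs; the mixing ratio above stands in for whatever weighting between tactic and proof-artifact data the authors actually used.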