Improving Autoformalization using Type Checking
Format: Article
Language: English
Online Access: Order full text
Abstract: Large language models show promise for autoformalization, the task of automatically translating natural language into formal languages. However, current autoformalization methods remain limited. The last reported state-of-the-art performance on the ProofNet formalization benchmark for the Lean proof assistant, achieved using Codex for Lean 3, only showed successful formalization of 16.1% of informal statements. Similarly, our evaluation of GPT-4o for Lean 4 only produces successful translations 34.9% of the time. Our analysis shows that the performance of these models is largely limited by their inability to generate formal statements that successfully type-check (i.e., are syntactically correct and consistent with types): 86.6% of GPT-4o errors start with a type-check failure. In this work, we propose a method to fix this issue through decoding with type-check filtering, where we initially sample a diverse set of candidate formalizations for an informal statement, then use the Lean proof assistant to filter out candidates that do not type-check. Using GPT-4o as a base model and combining our method with self-consistency, we obtain a +18.3% absolute increase in formalization accuracy, and achieve a new state-of-the-art of 53.2% on ProofNet with Lean 4.
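To make the type-check criterion concrete, here is a small Lean 4 illustration. Both statements are invented for this example (they are not from the paper) and assume Mathlib is available; the proofs are left as `sorry`, since for statement autoformalization only the statement itself needs to elaborate:

```lean
import Mathlib

-- Type-checks: the statement elaborates even with the proof left as `sorry`,
-- so a type-check filter would keep this candidate.
theorem even_add_even (a b : ℤ) (ha : Even a) (hb : Even b) : Even (a + b) := by
  sorry

-- Rejected by the type checker: `a` has type ℤ, but `∧` expects a
-- proposition, so the statement itself is ill-typed and elaboration fails.
theorem ill_typed (a b : ℤ) : Even (a + b) ∧ a := by
  sorry
```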
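The decode-and-filter pipeline the abstract describes can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: `sample` stands in for a high-temperature GPT-4o sampling call, `type_checks` for invoking the Lean 4 checker on a candidate statement, and both names are hypothetical placeholders. Exact-match majority voting over the surviving candidates is one simple way to instantiate the self-consistency step:

```python
from collections import Counter
from typing import Callable


def formalize_with_type_check_filtering(
    statement: str,
    sample: Callable[[str, int], list[str]],   # hypothetical LLM sampler
    type_checks: Callable[[str], bool],        # hypothetical Lean 4 checker
    n_samples: int = 50,
) -> str | None:
    """Sample many candidate formalizations, drop those the Lean checker
    rejects, then pick the most frequently sampled survivor."""
    # Step 1: sample a diverse set of candidate formalizations,
    # e.g. via high-temperature decoding from the base model.
    candidates = sample(statement, n_samples)

    # Step 2: type-check filtering -- keep only candidates that the
    # Lean proof assistant accepts as well-typed statements.
    well_typed = [c for c in candidates if type_checks(c)]
    if not well_typed:
        return None  # every candidate failed to type-check

    # Step 3: self-consistency -- return the candidate that was sampled
    # most often among the type-correct survivors.
    best, _count = Counter(well_typed).most_common(1)[0]
    return best
```

Passing the sampler and checker as callables keeps the sketch independent of any particular model API or Lean tooling; returning `None` when nothing type-checks makes the filter's failure mode explicit.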
DOI: 10.48550/arxiv.2406.07222