Compact Proofs of Model Performance via Mechanistic Interpretability
Saved in:
Main authors: | , , , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | We propose using mechanistic interpretability -- techniques for reverse engineering model weights into human-interpretable algorithms -- to derive and compactly prove formal guarantees on model performance. We prototype this approach by formally proving accuracy lower bounds for a small transformer trained on Max-of-K, validating proof transferability across 151 random seeds and four values of K. We create 102 different computer-assisted proof strategies and assess their length and tightness of bound on each of our models. Using quantitative metrics, we find that shorter proofs seem to require and provide more mechanistic understanding. Moreover, we find that more faithful mechanistic understanding leads to tighter performance bounds. We confirm these connections by qualitatively examining a subset of our proofs. Finally, we identify compounding structureless errors as a key challenge for using mechanistic interpretability to generate compact proofs of model performance. |
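For readers unfamiliar with the task named in the abstract, a minimal sketch of the Max-of-K setup follows. The function name, vocabulary size, and sequence length below are illustrative assumptions, not details taken from the paper: the only property the abstract fixes is that the model's target output for a length-K input is its maximum token.

```python
import random

def max_of_k_batch(num_samples, k=4, vocab_size=64, seed=0):
    """Generate (sequence, label) pairs for an illustrative Max-of-K task:
    each input is k tokens drawn uniformly from a vocabulary, and the
    label is the largest token in the sequence.

    Note: k, vocab_size, and this sampling scheme are hypothetical
    choices for illustration; the paper's exact setup may differ.
    """
    rng = random.Random(seed)
    data = []
    for _ in range(num_samples):
        seq = [rng.randrange(vocab_size) for _ in range(k)]
        data.append((seq, max(seq)))
    return data

# Sanity check: every label equals the maximum of its sequence.
for seq, label in max_of_k_batch(5, k=4):
    assert label == max(seq)
```

An accuracy lower bound in this setting would certify, by reasoning about the weights rather than by sampling, a minimum fraction of such inputs on which a trained model provably outputs `max(seq)`.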
DOI: | 10.48550/arxiv.2406.11779 |