Unlocking Deterministic Robustness Certification on ImageNet
Main authors:
Format: Article
Language: English
Online access: Order full text
Abstract: Despite the promise of Lipschitz-based methods for provably-robust deep
learning with deterministic guarantees, current state-of-the-art results are
limited to feed-forward Convolutional Networks (ConvNets) on low-dimensional
data, such as CIFAR-10. This paper investigates strategies for expanding
certifiably robust training to larger, deeper models. A key challenge in
certifying deep networks is efficient calculation of the Lipschitz bound for
residual blocks found in ResNet and ViT architectures. We show that fast ways
of bounding the Lipschitz constant for conventional ResNets are loose, and show
how to address this by designing a new residual block, leading to the
\emph{Linear ResNet} (LiResNet) architecture. We then introduce \emph{Efficient
Margin MAximization} (EMMA), a loss function that stabilizes robust training by
simultaneously penalizing worst-case adversarial examples from \emph{all}
classes. Together, these contributions yield new \emph{state-of-the-art} robust
accuracy on CIFAR-10/100 and Tiny-ImageNet under $\ell_2$ perturbations.
Moreover, for the first time, we scale fast deterministic
robustness guarantees to ImageNet, demonstrating that this approach to robust
learning is viable in real-world settings.
We release our code on GitHub: \url{https://github.com/klasleino/gloro}.
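To see why fast bounds for conventional residual blocks are loose (a sketch of the standard reasoning, not the paper's full analysis): for a residual block $h(x) = x + g(x)$, the triangle inequality gives

$$\mathrm{Lip}(h) \;\le\; \mathrm{Lip}(\mathrm{id}) + \mathrm{Lip}(g) \;=\; 1 + \mathrm{Lip}(g),$$

which ignores any cancellation between the skip connection and the residual branch. If the residual branch is linear, $h(x) = x + Wx = (I + W)x$, with $W$ standing in for the block's convolution viewed as a linear map, the whole block is itself a single linear map and the constant is exact,

$$\mathrm{Lip}(h) \;=\; \|I + W\|_2,$$

which can be far smaller than $1 + \|W\|_2$. This is the intuition behind the linear residual block.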
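The EMMA idea of penalizing worst-case adversarial examples from all classes simultaneously can be sketched as cross-entropy over worst-case-adjusted logits. The following is a minimal illustration in the spirit of the abstract, not the authors' exact formulation; the per-class Lipschitz bounds `lipschitz` and the helper name `emma_style_loss` are assumptions made for the sketch.

```python
import torch
import torch.nn.functional as F

def emma_style_loss(logits, labels, lipschitz, eps):
    """Cross-entropy on worst-case-adjusted logits (simplified sketch).

    logits:    (batch, classes) raw network outputs f_j(x)
    labels:    (batch,) ground-truth class indices y
    lipschitz: (batch, classes) assumed bounds K_j on Lip(f_j - f_y)
    eps:       l2 certification radius
    """
    # Within an eps-ball, each rival logit's margin to the true class can
    # shrink by eps * K_j, so raise every non-true logit by its worst-case
    # shift while leaving the true-class logit unchanged.
    rival_shift = eps * lipschitz
    mask = F.one_hot(labels, num_classes=logits.size(1)).bool()
    adjusted = torch.where(mask, logits, logits + rival_shift)
    # Cross-entropy on the adjusted logits penalizes the worst case from
    # *all* classes at once rather than only the single nearest rival.
    return F.cross_entropy(adjusted, labels)
```

At $\varepsilon = 0$ this reduces to ordinary cross-entropy, and increasing $\varepsilon$ smoothly trades clean accuracy for certified margin.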
DOI: 10.48550/arxiv.2301.12549