Scalable Quantitative Verification For Deep Neural Networks
Format: | Article |
Language: | English |
Abstract: | Despite the functional success of deep neural networks (DNNs), their
trustworthiness remains a crucial open challenge. To address this challenge,
both testing and verification techniques have been proposed. However, these
existing techniques provide either scalability to large networks or formal
guarantees, not both. In this paper, we propose a scalable quantitative
verification framework for deep neural networks, i.e., a test-driven approach
that comes with formal guarantees that a desired probabilistic property is
satisfied. Our technique performs enough tests until the soundness of a formal
probabilistic property can be proven. It can be used to certify properties of
both deterministic and randomized DNNs. We implement our approach in a tool
called PROVERO and apply it in the context of certifying the adversarial
robustness of DNNs. In this context, we first present a new attack-agnostic
measure of robustness, which offers an alternative to the purely attack-based
methodology for evaluating robustness reported today. Second, PROVERO provides
certificates of robustness for large DNNs, where existing state-of-the-art
verification tools fail to produce conclusive results. Our work paves the way
forward for verifying properties of distributions captured by real-world deep
neural networks, with provable guarantees, even where testers only have
black-box access to the neural network. |
DOI: | 10.48550/arxiv.2002.06864 |
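
The test-driven idea in the abstract, drawing samples until a probabilistic property can be certified with high confidence, can be illustrated with a minimal sketch. This is not PROVERO's actual algorithm; it is a simple fixed-sample certifier based on Hoeffding's inequality, and the property oracle, sampler, and all parameter names here are hypothetical stand-ins:

```python
import math
import random

def certify(property_holds, sampler, theta, delta, gap):
    """Hoeffding-style fixed-sample certifier -- an illustrative sketch,
    not PROVERO's algorithm.  Draws enough i.i.d. samples so that the
    empirical violation rate is within `gap` of the true rate with
    probability >= 1 - delta, then compares it against the bound theta."""
    # Hoeffding's inequality: n >= ln(2/delta) / (2 * gap^2) samples suffice.
    n = math.ceil(math.log(2.0 / delta) / (2.0 * gap ** 2))
    violations = sum(1 for _ in range(n) if not property_holds(sampler()))
    p_hat = violations / n
    if p_hat + gap <= theta:
        return "certified", p_hat      # true violation prob < theta w.h.p.
    if p_hat - gap >= theta:
        return "refuted", p_hat        # true violation prob > theta w.h.p.
    return "inconclusive", p_hat       # too close to theta to decide

# Toy stand-in for a black-box DNN property check: the property fails
# only on a rare sliver of the input distribution.
random.seed(0)
holds = lambda x: x > 0.001           # hypothetical property oracle
sample = lambda: random.random()      # uniform inputs on [0, 1)

verdict, p_hat = certify(holds, sample, theta=0.05, delta=0.01, gap=0.02)
print(verdict, round(p_hat, 4))
```

Note that the certificate is inherently probabilistic: the verdict holds with confidence at least 1 - delta, which matches the paper's framing of formal guarantees for probabilistic properties rather than exhaustive worst-case verification.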