Toward Scalable Verification for Safety-Critical Deep Networks
Format: Article
Language: English
Abstract: The increasing use of deep neural networks for safety-critical applications, such as autonomous driving and flight control, raises concerns about their safety and reliability. Formal verification can address these concerns by guaranteeing that a deep learning system operates as intended, but the state of the art is limited to small systems. In this work-in-progress report we give an overview of our work on mitigating this difficulty by pursuing two complementary directions: devising scalable verification techniques, and identifying design choices that result in deep learning systems that are more amenable to verification.
DOI: 10.48550/arxiv.1801.05950