Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning

Bibliographic Details
Main Authors: Nado, Zachary, Band, Neil, Collier, Mark, Djolonga, Josip, Dusenberry, Michael W, Farquhar, Sebastian, Feng, Qixuan, Filos, Angelos, Havasi, Marton, Jenatton, Rodolphe, Jerfel, Ghassen, Liu, Jeremiah, Mariet, Zelda, Nixon, Jeremy, Padhy, Shreyas, Ren, Jie, Rudner, Tim G. J, Sbahi, Faris, Wen, Yeming, Wenzel, Florian, Murphy, Kevin, Sculley, D, Lakshminarayanan, Balaji, Snoek, Jasper, Gal, Yarin, Tran, Dustin
Format: Article
Language: eng
Subjects:
Online Access: Order full text
Description
Summary: High-quality estimates of uncertainty and robustness are crucial for numerous real-world applications, especially for deep learning, which underlies many deployed ML systems. The ability to compare techniques for improving these estimates is therefore very important for research and practice alike. Yet competitive comparisons of methods are often lacking, for reasons including compute availability for extensive tuning, incorporation of sufficiently many baselines, and concrete documentation for reproducibility. In this paper we introduce Uncertainty Baselines: high-quality implementations of standard and state-of-the-art deep learning methods on a variety of tasks. As of this writing, the collection spans 19 methods across 9 tasks, each with at least 5 metrics. Each baseline is a self-contained experiment pipeline with easily reusable and extendable components. Our goal is to provide immediate starting points for experimentation with new methods or applications. Additionally, we provide model checkpoints, experiment outputs as Python notebooks, and leaderboards for comparing results. Code available at https://github.com/google/uncertainty-baselines.
DOI:10.48550/arxiv.2106.04015
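
To give a sense of the "self-contained, reusable components" the summary describes, here is a minimal sketch of assembling one baseline with the released package. The constructor names (ub.datasets.Cifar10Dataset, ub.models.wide_resnet) and the dict-structured batches follow the repository README at the time of writing and are assumptions, not a definitive API reference; consult the repository for exact signatures.

    import tensorflow as tf
    import uncertainty_baselines as ub

    # Reusable dataset component: a CIFAR-10 test-split input pipeline
    # (assumed constructor name, per the repository README).
    dataset_builder = ub.datasets.Cifar10Dataset(split='test')
    dataset = dataset_builder.load(batch_size=128)

    # Reusable model component: a deterministic Wide ResNet 28-10 baseline
    # (assumed constructor name and keyword arguments).
    model = ub.models.wide_resnet(
        input_shape=(32, 32, 3),
        depth=28,
        width_multiplier=10,
        num_classes=10,
        l2=1e-4)

    # Assumed batch structure: a dict with 'features' and 'labels' entries.
    for batch in dataset.take(1):
        logits = model(batch['features'], training=False)
        probs = tf.nn.softmax(logits, axis=-1)
        # Predictive entropy as a simple per-example uncertainty score.
        entropy = -tf.reduce_sum(probs * tf.math.log(probs + 1e-12), axis=-1)
        print(entropy.numpy().mean())

Because the repository also releases trained checkpoints and per-method leaderboards, a score computed this way can be compared against the published baselines without retraining from scratch.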