Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning

High-quality estimates of uncertainty and robustness are crucial for numerous real-world applications, especially for deep learning which underlies many deployed ML systems. The ability to compare techniques for improving these estimates is therefore very important for research and practice alike. Yet, competitive comparisons of methods are often lacking due to a range of reasons, including: compute availability for extensive tuning, incorporation of sufficiently many baselines, and concrete documentation for reproducibility. In this paper we introduce Uncertainty Baselines: high-quality implementations of standard and state-of-the-art deep learning methods on a variety of tasks. As of this writing, the collection spans 19 methods across 9 tasks, each with at least 5 metrics. Each baseline is a self-contained experiment pipeline with easily reusable and extendable components. Our goal is to provide immediate starting points for experimentation with new methods or applications. Additionally we provide model checkpoints, experiment outputs as Python notebooks, and leaderboards for comparing results. Code available at https://github.com/google/uncertainty-baselines.
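As a concrete illustration of the kind of metric such baselines report, below is a minimal sketch of the expected calibration error (ECE), a standard measure of uncertainty quality. It is written in plain NumPy as an illustrative assumption, not code taken from the uncertainty-baselines repository, and the random test data is hypothetical.

import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """Expected calibration error: the bin-weighted average gap
    between top-1 confidence and top-1 accuracy.

    probs:  (N, K) array of predicted class probabilities.
    labels: (N,) array of integer class labels.
    """
    confidences = probs.max(axis=1)      # top-1 confidence per example
    predictions = probs.argmax(axis=1)   # top-1 predicted class
    accuracies = predictions == labels

    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # Gap between mean confidence and accuracy, weighted by bin mass.
            gap = abs(confidences[in_bin].mean() - accuracies[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Hypothetical usage with random predictions, for illustration only.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=256)   # (256, 10) probabilities
labels = rng.integers(0, 10, size=256)
print(expected_calibration_error(probs, labels))

A well-calibrated model has confidences that match its empirical accuracy, giving an ECE near zero; this is one of the per-task metrics a benchmark suite like the one described here would track alongside accuracy and log-likelihood.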


Saved in:
Bibliographic Details
Main Authors: Nado, Zachary, Band, Neil, Collier, Mark, Djolonga, Josip, Dusenberry, Michael W, Farquhar, Sebastian, Feng, Qixuan, Filos, Angelos, Havasi, Marton, Jenatton, Rodolphe, Jerfel, Ghassen, Liu, Jeremiah, Mariet, Zelda, Nixon, Jeremy, Padhy, Shreyas, Ren, Jie, Rudner, Tim G. J, Sbahi, Faris, Wen, Yeming, Wenzel, Florian, Murphy, Kevin, Sculley, D, Lakshminarayanan, Balaji, Snoek, Jasper, Gal, Yarin, Tran, Dustin
Format: Article
Language: eng
Subjects: Computer Science - Learning
creator Nado, Zachary
Band, Neil
Collier, Mark
Djolonga, Josip
Dusenberry, Michael W
Farquhar, Sebastian
Feng, Qixuan
Filos, Angelos
Havasi, Marton
Jenatton, Rodolphe
Jerfel, Ghassen
Liu, Jeremiah
Mariet, Zelda
Nixon, Jeremy
Padhy, Shreyas
Ren, Jie
Rudner, Tim G. J
Sbahi, Faris
Wen, Yeming
Wenzel, Florian
Murphy, Kevin
Sculley, D
Lakshminarayanan, Balaji
Snoek, Jasper
Gal, Yarin
Tran, Dustin
description High-quality estimates of uncertainty and robustness are crucial for numerous real-world applications, especially for deep learning which underlies many deployed ML systems. The ability to compare techniques for improving these estimates is therefore very important for research and practice alike. Yet, competitive comparisons of methods are often lacking due to a range of reasons, including: compute availability for extensive tuning, incorporation of sufficiently many baselines, and concrete documentation for reproducibility. In this paper we introduce Uncertainty Baselines: high-quality implementations of standard and state-of-the-art deep learning methods on a variety of tasks. As of this writing, the collection spans 19 methods across 9 tasks, each with at least 5 metrics. Each baseline is a self-contained experiment pipeline with easily reusable and extendable components. Our goal is to provide immediate starting points for experimentation with new methods or applications. Additionally we provide model checkpoints, experiment outputs as Python notebooks, and leaderboards for comparing results. Code available at https://github.com/google/uncertainty-baselines.
doi_str_mv 10.48550/arxiv.2106.04015
format Article
creationdate 2021-06-07
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
identifier DOI: 10.48550/arxiv.2106.04015
language eng
recordid cdi_arxiv_primary_2106_04015
source arXiv.org
subjects Computer Science - Learning
title Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning
url https://arxiv.org/abs/2106.04015