WeatherBench Probability: A benchmark dataset for probabilistic medium-range weather forecasting along with deep learning baseline models
WeatherBench is a benchmark dataset for medium-range weather forecasting of geopotential, temperature and precipitation, consisting of preprocessed data, predefined evaluation metrics and a number of baseline models. WeatherBench Probability extends this to probabilistic forecasting by adding a set of established probabilistic verification metrics (continuous ranked probability score, spread-skill ratio and rank histograms) and a state-of-the-art operational baseline using the ECMWF IFS ensemble forecast.
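The continuous ranked probability score (CRPS) mentioned above can be estimated directly from an ensemble of forecasts. The following is a minimal illustrative sketch, not code from the WeatherBench Probability repository; the function name `crps_ensemble` and the toy data are assumptions made for this example.

```python
import numpy as np

def crps_ensemble(observations, forecasts):
    """Estimate the CRPS of an ensemble forecast per verification case,
    using the kernel form CRPS = E|X - y| - 0.5 * E|X - X'|.

    observations: array of shape (n,)
    forecasts:    array of shape (n, m), m ensemble members per case
    """
    obs = np.asarray(observations, dtype=float)[:, None]   # (n, 1)
    fc = np.asarray(forecasts, dtype=float)                 # (n, m)
    # E|X - y|: mean absolute distance of members to the observation
    skill_term = np.abs(fc - obs).mean(axis=1)
    # 0.5 * E|X - X'|: mean pairwise distance among ensemble members
    spread_term = 0.5 * np.abs(fc[:, :, None] - fc[:, None, :]).mean(axis=(1, 2))
    return skill_term - spread_term

# Toy usage: 5 verification cases, 10-member ensemble centred on the truth
rng = np.random.default_rng(0)
truth = rng.normal(size=5)
ensemble = truth[:, None] + rng.normal(scale=1.0, size=(5, 10))
print(crps_ensemble(truth, ensemble).mean())
```

The spread-skill ratio and rank histograms named in the abstract can be computed from the same (observation, ensemble) pairs. A second sketch, illustrating the Monte Carlo dropout method discussed in the abstract, follows the record fields at the end of this entry.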
Saved in:
Published in: | arXiv.org 2022-05 |
---|---|
Main authors: | Garg, Sagar; Rasp, Stephan; Thuerey, Nils |
Format: | Article |
Language: | eng |
Subjects: | Benchmarks; Datasets; Deep learning; Geopotential; Histograms; Machine learning; Monte Carlo simulation; Statistical analysis; Weather forecasting |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Garg, Sagar; Rasp, Stephan; Thuerey, Nils |
description | WeatherBench is a benchmark dataset for medium-range weather forecasting of geopotential, temperature and precipitation, consisting of preprocessed data, predefined evaluation metrics and a number of baseline models. WeatherBench Probability extends this to probabilistic forecasting by adding a set of established probabilistic verification metrics (continuous ranked probability score, spread-skill ratio and rank histograms) and a state-of-the-art operational baseline using the ECMWF IFS ensemble forecast. In addition, we test three different probabilistic machine learning methods -- Monte Carlo dropout, parametric prediction and categorical prediction, in which the probability distribution is discretized. We find that plain Monte Carlo dropout severely underestimates uncertainty. The parametric and categorical models both produce fairly reliable forecasts of similar quality. The parametric models have fewer degrees of freedom while the categorical model is more flexible when it comes to predicting non-Gaussian distributions. None of the models are able to match the skill of the operational IFS model. We hope that this benchmark will enable other researchers to evaluate their probabilistic approaches. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2022-05 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2658984622 |
source | Free E-Journals |
subjects | Benchmarks; Datasets; Deep learning; Geopotential; Histograms; Machine learning; Monte Carlo simulation; Statistical analysis; Weather forecasting |
title | WeatherBench Probability: A benchmark dataset for probabilistic medium-range weather forecasting along with deep learning baseline models |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-02T10%3A51%3A32IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=WeatherBench%20Probability:%20A%20benchmark%20dataset%20for%20probabilistic%20medium-range%20weather%20forecasting%20along%20with%20deep%20learning%20baseline%20models&rft.jtitle=arXiv.org&rft.au=Garg,%20Sagar&rft.date=2022-05-02&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2658984622%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2658984622&rft_id=info:pmid/&rfr_iscdi=true |
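As a companion to the record above, the following is a minimal PyTorch sketch of Monte Carlo dropout at prediction time: dropout is kept active and several stochastic forward passes are sampled to form an empirical predictive distribution. This is an illustrative toy example, not the paper's architecture; `TinyRegressor`, its layer sizes and `mc_dropout_predict` are names assumed for this sketch.

```python
import torch
import torch.nn as nn

class TinyRegressor(nn.Module):
    """Toy fully connected regressor with a single dropout layer."""
    def __init__(self, n_in=8, n_hidden=64, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(n_hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=20):
    """Run n_samples stochastic forward passes with dropout left on and
    return the predictive mean and spread (standard deviation)."""
    model.train()  # keep Dropout layers stochastic at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)], dim=0)
    return samples.mean(dim=0), samples.std(dim=0)

# Toy usage: 4 inputs with 8 features each
x = torch.randn(4, 8)
mean, spread = mc_dropout_predict(TinyRegressor(), x)
print(mean.shape, spread.shape)  # torch.Size([4, 1]) twice
```

The abstract's finding that plain Monte Carlo dropout severely underestimates uncertainty corresponds, in this setting, to the sampled `spread` being systematically too small relative to the actual forecast errors.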