Subjective Verification of Numerical Models as a Component of a Broader Interaction between Research and Operations

Bibliographic Details
Published in: Weather and forecasting, 2003-10, Vol. 18 (5), p. 847-860
Authors: Kain, J. S., Baldwin, M. E., Janish, P. R., Weiss, S. J., Kay, M. P., Carbin, G. W.
Format: Article
Language: English
Online access: Full text
Abstract: Systematic subjective verification of precipitation forecasts from two numerical models is presented and discussed. The subjective verification effort was carried out as part of the 2001 Spring Program, a seven-week collaborative experiment conducted at the NOAA/National Severe Storms Laboratory (NSSL) and the NWS/Storm Prediction Center, with participation from the NCEP/Environmental Modeling Center, the NOAA/Forecast Systems Laboratory, the Norman, Oklahoma, National Weather Service Forecast Office, and Iowa State University. This paper focuses on a comparison of the operational Eta Model and an experimental version of this model run at NSSL; results are limited to precipitation forecasts, although other models and model output fields were verified and evaluated during the program. By comparing forecaster confidence in model solutions to next-day assessments of model performance, this study yields unique information about the utility of models for human forecasters. It is shown that, when averaged over many forecasts, subjective verification ratings of model performance were consistent with pre-event confidence levels. In particular, models that earned higher average confidence ratings were also assigned higher average subjective verification scores. However, confidence and verification scores for individual forecasts were very poorly correlated; that is, forecast teams showed little skill in assessing how 'good' individual model forecasts would be. Furthermore, the teams were unable to choose reliably which model, or which initialization of the same model, would produce the 'best' forecast for a given period. The subjective verification methodology used in the 2001 Spring Program is presented as a prototype for more refined and focused subjective verification efforts in the future. The results demonstrate that this approach can provide valuable insight into how forecasters use numerical models. It has great potential as a complement to objective verification scores and can have a significant positive impact on model development strategies.
DOI: 10.1175/1520-0434(2003)018<0847:SVONMA>2.0.CO;2
ISSN: 0882-8156