Benchmarking quantitative precipitation estimation by conceptual rainfall-runoff modeling

Hydrologic modelers often need to know which method of quantitative precipitation estimation (QPE) is best suited for a particular catchment. Traditionally, QPE methods are verified and benchmarked against independent rain gauge observations. However, the lack of spatial representativeness limits the value of such a procedure.


Bibliographic details
Published in: Water resources research, 2011-06, Vol. 47 (6)
Main authors: Heistermann, Maik; Kneis, David
Format: Article
Language: English
Online access: Full text
Abstract: Hydrologic modelers often need to know which method of quantitative precipitation estimation (QPE) is best suited for a particular catchment. Traditionally, QPE methods are verified and benchmarked against independent rain gauge observations. However, the lack of spatial representativeness limits the value of such a procedure. Alternatively, one could drive a hydrological model with different QPE products and choose the one which best reproduces observed runoff. Unfortunately, the calibration of conceptual model parameters might conceal actual differences between the QPEs. To avoid such effects, we abandoned the idea of determining optimum parameter sets for all QPE being compared. Instead, we carry out a large number of runoff simulations, confronting each QPE with a common set of random parameters. By evaluating the goodness‐of‐fit of all simulations, we obtain information on whether the quality of competing QPE methods is significantly different. This knowledge is inferred exactly at the scale of interest—the catchment scale. We use synthetic data to investigate the ability of this procedure to distinguish a truly superior QPE from an inferior one. We find that the procedure is prone to failure in the case of linear systems. However, we show evidence that in realistic (nonlinear) settings, the method can provide useful results even in the presence of moderate errors in model structure and streamflow observations. In a real‐world case study on a small mountainous catchment, we demonstrate the ability of the verification procedure to reveal additional insights as compared to a conventional cross validation approach.

Key Points:
- Hydrological verification of QPE adds information to point‐based verification
- Using calibrated models might conceal differences between QPE
- Monte Carlo simulations could reduce the problem, but not for all systems/models
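The verification idea described in the abstract — drive a rainfall-runoff model with each competing QPE product using one common set of random parameter samples, then compare the goodness-of-fit distributions — can be illustrated with a minimal sketch. Everything below is assumed for illustration only: a toy single-linear-reservoir model, synthetic rainfall, and two hypothetical QPE products with small vs. large errors; none of it reproduces the authors' actual model or data. Note the paper warns that linear systems can defeat this discrimination, so the linear reservoir here only shows the bookkeeping of the procedure.

```python
import numpy as np

def simulate_runoff(rain, k, s0=0.0):
    """Toy conceptual model: a single linear reservoir with outflow Q = S / k."""
    q = np.empty_like(rain)
    s = s0
    for t, p in enumerate(rain):
        s += p            # rainfall fills the store
        q[t] = s / k      # linear outflow
        s -= q[t]
    return q

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(42)
n_steps, n_params = 200, 500

# Synthetic "true" rainfall and the observed runoff it produces (k = 8 is the truth).
true_rain = rng.gamma(shape=0.3, scale=5.0, size=n_steps)
obs_q = simulate_runoff(true_rain, k=8.0)

# Two hypothetical competing QPE products: small vs. large observation errors.
qpe_good = np.clip(true_rain + rng.normal(0.0, 0.5, n_steps), 0.0, None)
qpe_bad = np.clip(true_rain + rng.normal(0.0, 3.0, n_steps), 0.0, None)

# One common set of random parameter samples, shared by every QPE product,
# instead of calibrating an optimum parameter set per QPE.
ks = rng.uniform(2.0, 20.0, n_params)

scores = {name: np.array([nse(obs_q, simulate_runoff(rain, k)) for k in ks])
          for name, rain in [("good QPE", qpe_good), ("bad QPE", qpe_bad)]}

# Compare the goodness-of-fit distributions rather than single calibrated scores.
for name, s in scores.items():
    print(f"{name}: median NSE = {np.median(s):.3f}")
```

Because both products face identical parameter samples, a systematic gap between the two NSE distributions reflects QPE quality rather than calibration luck, which is the point of abandoning per-QPE optimum parameter sets.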
DOI: 10.1029/2010WR009153
Publisher: Blackwell Publishing Ltd
Rights: Copyright 2011 by the American Geophysical Union.
Access: peer reviewed; full text freely available
ISSN: 0043-1397
EISSN: 1944-7973
Source: Wiley Online Library Journals Frontfile Complete; Wiley-Blackwell AGU Digital Library; EZB-FREE-00999 freely available EZB journals
Subjects: Monte Carlo simulation; quantitative precipitation estimation; rain gage; rainfall-runoff modelling; verification; weather radar