Using Monte Carlo techniques to judge model prediction accuracy: Validation of the pesticide root zone model 3.12


Detailed description

Bibliographic details
Published in: Environmental Toxicology and Chemistry, 2002-08, Vol. 21 (8), p. 1570-1577
Main authors: Warren-Hicks, William; Carbone, John P.; Havens, Patrick L.
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Individuals from the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) Environmental Model Validation Task Force (FEMVTF) Statistics Committee periodically met to discuss the mechanism for conducting an uncertainty analysis of Version 3.12 of the pesticide root zone model (PRZM 3.12) and to identify those model input parameters that most contribute to model prediction error. This activity was part of a larger project evaluating PRZM 3.12. The goal of the uncertainty analysis was to compare site-specific model predictions and field measurements using the variability in each as a basis of comparison. Monte Carlo analysis was used as an integral tool for judging the model's ability to predict accurately. The model was judged on how well it predicts measured values, taking into account the uncertainty in the model predictions. Monte Carlo analysis provides the tool for inferring model prediction uncertainty. We argue that this is a fairer test of the model than a simple one-to-one comparison between predictions and measurements. Because models are known to be imperfect predictors before they are run, the inaccuracy in model predictions should be considered when models are judged for their predictive ability. Otherwise, complex models can easily fail a validation test: few complex models, such as PRZM 3.12, would pass a typical model validation exercise. This paper describes the approaches to the validation of PRZM 3.12 used by the committee and discusses issues in sampling distribution selection and appropriate statistics for interpreting the model validation results.
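The validation logic the abstract describes — propagate input-parameter uncertainty through the model by Monte Carlo sampling, then ask whether a field measurement falls within the resulting prediction distribution rather than demanding a one-to-one match — can be illustrated with a minimal sketch. The surrogate model, the input distributions, and the measured value below are hypothetical stand-ins for illustration only; none of them come from the paper or from PRZM 3.12 itself.

```python
import math
import random

# Hypothetical stand-in for a PRZM-like prediction: pesticide mass reaching
# runoff as a simple function of two uncertain inputs. The real PRZM 3.12 is
# far more complex; this surrogate only illustrates the Monte Carlo logic.
def surrogate_model(koc, half_life):
    # Stronger sorption (higher Koc) -> less runoff; longer persistence -> more.
    return 100.0 / (1.0 + 0.01 * koc) * (half_life / 30.0)

random.seed(42)

# Draw uncertain inputs from assumed sampling distributions (illustrative,
# not from the paper): sorption coefficient Koc (mL/g) and soil half-life (days).
n = 10_000
predictions = []
for _ in range(n):
    koc = random.lognormvariate(math.log(100.0), 0.4)
    half_life = random.gauss(30.0, 5.0)
    predictions.append(surrogate_model(koc, half_life))

# Judge the model the way the committee proposes: ask whether a field
# measurement falls within the central 90% of the prediction distribution,
# rather than comparing a single prediction to the measurement one-to-one.
predictions.sort()
lo, hi = predictions[int(0.05 * n)], predictions[int(0.95 * n) - 1]
measured = 45.0  # hypothetical field measurement
print(f"90% prediction interval: [{lo:.1f}, {hi:.1f}]")
print("measurement consistent with model predictions?", lo <= measured <= hi)
```

Under this criterion, a measurement anywhere inside the prediction interval counts as agreement, so the model is credited for honestly quantified uncertainty instead of being failed for not matching a point value exactly.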
ISSN: 0730-7268, 1552-8618
DOI: 10.1002/etc.5620210807