Forecasting disease with 10-year optimized models: Moving toward new digital datasets
| | |
|---|---|
| Main Authors: | , , , |
| Format: | Conference Proceeding |
| Language: | eng |
| Subjects: | |
| Online Access: | Order full text |
| Summary: | As the pace of data availability and access to cyberinfrastructure increases, weather data inputs to practical application models have gone from point data to raster grids of varying spatial and temporal resolution. Certainly there is a benefit to widespread access to data, but transforming models developed at point locations to raster datasets is not trivial. In addition, dramatic improvements can be made to models when an extended dataset is available for testing and validation, although this is not always possible in an era of quickly changing datasets and modeling techniques. This paper examines opportunities to decrease crop disease forecasting error with longer data archives. Potato late blight in the Great Lakes region of the US is used as a test case. Model accuracy increased dramatically, especially on days conducive to disease, as more data became available and a greater familiarity with the dataset was achieved. Training and validation error fluctuated as a greater data archive became available, reinforcing the need for forecasters to better understand intraseasonal and interannual cycles that impact the success of long-term agroecosystem model implementations. |
| DOI: | 10.1109/Agro-Geoinformatics.2012.6311678 |
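The abstract notes that moving a model developed at point locations onto raster grids is not trivial. A minimal sketch of what that transition looks like mechanically: a toy disease-conduciveness rule (hypothetical thresholds for illustration; this is not the paper's actual late blight model, whose formulation is not given here) evaluated first on a single station observation and then vectorized across a small raster of grid cells.

```python
import numpy as np

def conducive(temp_c, rh_pct):
    """Toy disease-conduciveness rule (hypothetical, not the paper's model):
    flag conditions as favorable to late blight when mean temperature is
    between 10 and 25 C and relative humidity exceeds 90%."""
    return (temp_c >= 10.0) & (temp_c <= 25.0) & (rh_pct > 90.0)

# Point era: a single weather-station observation.
station_risk = conducive(18.0, 95.0)

# Raster era: the same point-calibrated rule broadcast over a 3x4 grid
# of cells, each with its own temperature and humidity value.
temp_grid = np.array([[18.0,  9.0, 22.0, 30.0],
                      [15.0, 12.0, 26.0, 11.0],
                      [20.0, 17.0, 14.0,  8.0]])
rh_grid = np.array([[95.0, 92.0, 88.0, 97.0],
                    [91.0, 85.0, 93.0, 96.0],
                    [99.0, 90.0, 94.0, 80.0]])
grid_risk = conducive(temp_grid, rh_grid)  # boolean array, one flag per cell
```

Because the rule is written with element-wise NumPy operations, the identical code path serves both the scalar station input and the gridded input; the nontrivial part the paper points to lies elsewhere, in recalibrating thresholds and validating against longer archives once inputs change resolution.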