Twenty-five years of progress, problems, and conflicting evidence in econometric forecasting. What about the next 25 years?
Published in: International Journal of Forecasting, 2006, Vol. 22 (3), pp. 475-492
Main authors: ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: In the early 1940s, the Cowles Commission for Research (later, the Cowles Foundation) fostered the development of statistical methodology for application in economics and paved the way for large-scale econometric models to be used for both structural estimation and forecasting. This approach stood for decades. Vector autoregression (VAR), appearing in the 1980s, was a clear improvement over early Cowles Foundation models, primarily because it paid attention to dynamic structure. As a way of imposing long-run equilibrium restrictions on sets of variables, cointegration and error-correction modeling (ECM) gained popularity in the 1980s and 1990s, though ECMs have so far failed to deliver on their early promise. ARCH and GARCH modeling have been used with great success in specialized financial areas to model dynamic heteroscedasticity, though in mainstream econometrics, evidence of their value is limited and conflicting. Concerning misspecification tests, any model will inevitably fail some of them for the simple reason that there are many possible tests. Which failures matter? The root of the difficulty regarding all issues related to modeling is that we can never know the true data generating process. In the next 25 years, what new avenues will open up? With ever greater computational capacity, more complex models with larger data sets seem the way of the future. Will they require the automatic model selection methods that have recently been introduced? Preliminary evidence suggests that these methods can do well. The quality of aggregate data is no better than it was. Will greater use of more disaggregated data be sufficient to provide better forecasts? That remains an open question.
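The abstract refers to VAR models (capturing dynamic structure across several series) and to GARCH models (capturing dynamic heteroscedasticity). The sketch below is not part of the article; it is a minimal illustration of what fitting each class of model looks like in practice, assuming the statsmodels and arch Python packages and using simulated data with hypothetical series names.

```python
# Illustrative sketch (not from the article): a small VAR and a GARCH(1,1)
# fitted to simulated data, using the statsmodels and arch packages.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from arch import arch_model

rng = np.random.default_rng(0)
n = 500

# --- VAR: two series with simple lagged cross-dependence ---
y = np.zeros((n, 2))
for t in range(1, n):
    y[t, 0] = 0.5 * y[t - 1, 0] + 0.2 * y[t - 1, 1] + rng.normal()
    y[t, 1] = 0.3 * y[t - 1, 1] + rng.normal()
data = pd.DataFrame(y, columns=["gdp_growth", "inflation"])  # hypothetical names

var_res = VAR(data).fit(maxlags=4, ic="aic")          # lag order chosen by AIC
print(var_res.summary())
print(var_res.forecast(data.values[-var_res.k_ar:], steps=8))  # 8-step forecast

# --- GARCH(1,1): volatility clustering in a simulated return series ---
returns = np.zeros(n)
sigma2 = np.ones(n)
for t in range(1, n):
    sigma2[t] = 0.1 + 0.1 * returns[t - 1] ** 2 + 0.8 * sigma2[t - 1]
    returns[t] = np.sqrt(sigma2[t]) * rng.normal()

garch_res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
print(garch_res.summary())
```

A cointegration/error-correction specification, also discussed in the abstract, could be sketched along the same lines (for example with statsmodels' VECM class), but is omitted here for brevity.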
ISSN: 0169-2070, 1872-8200
DOI: 10.1016/j.ijforecast.2006.03.003