The Wisdom of Many in Few: Finding Individuals Who Are as Wise as the Crowd
Published in: Journal of experimental psychology. General, 2023-05, Vol. 152(5), p. 1223-1244
Main authors: , ,
Format: Article
Language: English
Keywords:
Online access: Full text
Abstract: Is forecasting ability a stable trait? While domain knowledge and reasoning abilities are necessary for making accurate forecasts, research shows that knowing how accurate forecasters have been in the past is the best predictor of future accuracy. However, unlike the measurement of other traits, evaluating forecasting skill requires substantial time investment. Forecasters must make predictions about events that may not resolve for many days, weeks, months, or even years into the future before their accuracy can be estimated. Our work builds upon methods such as cultural consensus theory and proxy scoring rules to show that talented forecasters can be identified in real time, without requiring any event resolutions. We define a peer similarity-based intersubjective evaluation method and test its utility in a unique longitudinal forecasting experiment. Because forecasters predicted all events at the same points in time, many of the confounds common to forecasting tournaments or observational data were eliminated. This allowed us to demonstrate the effectiveness of our method in real time, as time progressed and more information about forecasters became available. Intersubjective accuracy scores, which can be obtained immediately after the forecasts are made, were both valid and reliable estimators of forecasting talent. We also found that asking forecasters to make meta-predictions about what they expect others to believe can serve as an incentive-compatible method of intersubjective evaluation. Our results indicate that selecting small groups of forecasters, or even single forecasters, based on intersubjective accuracy can yield subsequent forecasts that approximate the actual accuracy of much larger crowd aggregates.
Public Significance Statement: Forecasters are often tasked with making predictions about global health outcomes, such as the trajectory of the COVID-19 pandemic; severe weather events, such as hurricanes; geopolitical events, such as elections; economic trends, such as national or global recessions; and more. Our work represents the latest development in an important line of research on how we can identify the most skilled forecasters, so people can be better informed and prepared for the likelihood of such events and their consequences. We show that it is possible to identify highly skilled forecasters in real time, before knowing the outcomes of any of the events being forecasted.
ISSN: 0096-3445, 1939-2222
DOI: 10.1037/xge0001340