Michael D. Ward and Nils Metternich examine Nate Silver’s success this past election — his state-by-state electoral predictions were 50-for-50 — and conclude that his methodology can be extended to political events more generally:
> Forecasting models need reliable measures of “things that are usefully predictive,” Ulfelder notes. Well, sure. Does this mean that reliability is at issue? Or that we are using data that are not “usefully” predictive? This is a curious claim, especially in light of the controversial nature of polls. Indeed, there exist five decades’ worth of literature that grapples with exactly these issues in public opinion. Take the recent U.S. election as an example. In 2012 there were two types of models: one based on fundamentals such as economic growth and unemployment, and another based on public opinion surveys. Proponents of the former contend that the fundamentals present a more salient picture of the election’s underlying dynamics and that polls are largely epiphenomenal. Proponents of the latter argue that public opinion polling reflects the real-time beliefs and future actions of voters.
>
> As it turned out, in this month’s election public opinion polls were considerably more precise than the fundamentals. The fundamentals did not always produce bad predictions, but better is better. Plus there is no getting around the fact that the poll-averaging models performed better. Admittedly, many of the polls were updated the night before the election, though Drew Linzer’s prescient votamatic.org posted predictions last June that held up this November. To assess the strength of poll aggregation, we might ask how the trajectory of Silver’s predictions over time compares with the results, and there are other quibbles to raise, to be sure. But better is better.
>
> When it comes to the world, we have a lot of data on things that are important and usefully predictive, such as event data on conflicts and collaborations among different political groups within countries. Are these data as reliable as poll data? About as reliable, but no more. Would we like more precise data, and to be able to run real-time fMRIs of all political actors? Sure, but it is increasingly difficult to argue convincingly that we don’t have enough data.
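As an aside on mechanics, the “poll averaging” Ward and Metternich credit is easy to sketch. The minimal Python below weights each poll by sample size and by recency with an exponential half-life; the polls, dates, and half-life parameter are hypothetical placeholders, not a reconstruction of Silver’s or Linzer’s actual models.

```python
from datetime import date

# Hypothetical polls for one state: (field end date, sample size,
# two-party share for candidate A). Numbers are illustrative only.
polls = [
    (date(2012, 10, 28), 800,  0.510),
    (date(2012, 11, 1),  1200, 0.522),
    (date(2012, 11, 3),  600,  0.495),
    (date(2012, 11, 4),  1000, 0.531),
]

ELECTION_DAY = date(2012, 11, 6)

def aggregate(polls, half_life_days=7.0):
    """Average polls, weighting by sample size and exponential recency decay."""
    num = den = 0.0
    for end_date, n, share in polls:
        age = (ELECTION_DAY - end_date).days
        weight = n * 0.5 ** (age / half_life_days)  # weight halves per week of age
        num += weight * share
        den += weight
    return num / den

share = aggregate(polls)
print(f"Aggregated two-party share for A: {share:.3f}")
print(f"Predicted winner: {'A' if share > 0.5 else 'B'}")
```

Real aggregators layer on further adjustments (pollster house effects, trend lines, correlations across states); this sketch captures only the weighted-average core of the “better is better” comparison.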
The problem with Ward and Metternich’s line of reasoning is similar to an issue other political writers have noted about the potential proliferation of Nate Silver types: the more such people come to be seen as experts, the less incentive there is for firms to pay for, and produce, the underlying data those experts rely on.
In this case, the issue is that if predictions of world political events ever look anywhere near as prescient as Nate Silver’s election forecasts have so far, people will start to follow them rabidly, base personal and business decisions on the numbers, and so on. At a large enough scale, that widespread behavioral reaction could degrade the accuracy of the predictions themselves, especially if the forecasts bear on market prices of assets or anything similar. This, in turn, could eventually doom the forecasts to irrelevance over a long enough time horizon, or at the very least produce extremely uneven results.
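To see why that feedback loop is corrosive, consider a toy simulation. This is a sketch under loudly simplified assumptions, not a model of any real forecasting system: the forecaster starts perfectly calibrated, and acting on the published forecast pulls the event’s true probability toward a coin flip. The `feedback` parameter and all numbers are illustrative.

```python
import random

random.seed(42)

def simulate(feedback, trials=20000):
    """Mean Brier score when behavior reacts to the published forecast.

    feedback = 0.0: nobody acts on the forecast, so it stays calibrated.
    feedback > 0.0: acting on the forecast pulls the event's true
    probability toward 0.5, away from the forecaster's original target.
    """
    total = 0.0
    for _ in range(trials):
        true_p = random.random()          # latent event probability
        forecast = true_p                 # forecaster starts calibrated
        true_p = (1 - feedback) * true_p + feedback * 0.5  # behavioral shift
        outcome = 1.0 if random.random() < true_p else 0.0
        total += (forecast - outcome) ** 2  # Brier score for this event
    return total / trials

for fb in (0.0, 0.3, 0.6):
    print(f"feedback={fb:.1f}  mean Brier score={simulate(fb):.4f}")
```

Even in this crude setup, the mean Brier score (lower is better) worsens steadily as the behavioral reaction strengthens: the act of publishing the forecast erodes the very calibration that made it worth following.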