
Polling our ignorance

Public Policy Polling had some fun with its first national post-election poll, released today:

As much of an obsession as Bowles/Simpson can be for the DC pundit class, most Americans don’t have an opinion about it. 23% support it, 16% oppose it, and 60% say they don’t have a take one way or the other.

The 39% of Americans with an opinion about Bowles/Simpson is only slightly higher than the 25% with one about Panetta/Burns, a mythical Clinton Chief of Staff/former western Republican Senator combo we conceived of to test how many people would say they had an opinion even about something that doesn’t exist.

Bowles/Simpson does have bipartisan support from the small swath of Americans with an opinion about it. Republicans support it 26/18, Democrats favor it 21/14, and independents are for it by a 24/18 margin. Panetta/Burns doesn’t fare as well with 8% support and 17% opposition.
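
The 39-percent-versus-25-percent comparison is worth a second look. As a rough sketch, assuming a hypothetical sample of about 700 respondents (PPP’s actual n isn’t quoted above) and treating the two questions as independent samples (they were in fact asked of the same respondents, so this overstates precision somewhat), a two-proportion z-test suggests the gap is real but hardly enormous:

```python
# Rough check: is 39% opinion-holding on Bowles/Simpson meaningfully higher
# than 25% on the fictitious Panetta/Burns? The sample size is hypothetical,
# and the independent-samples assumption is an approximation (the same
# respondents answered both questions).
import math

def two_prop_z(p1, p2, n1, n2):
    """z statistic for the difference between two independent proportions."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

n = 700  # hypothetical sample size
z = two_prop_z(0.39, 0.25, n, n)
print(f"z = {z:.1f}")  # ~5.6: the 14-point gap exceeds sampling noise,
                       # but a quarter of respondents still "rated" a fiction.
```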

Some reactions:

http://twitter.com/daveweigel/status/276084099070951424
http://twitter.com/Goldfarb/status/276076751036248066

Why Democrats are so confident about the fiscal cliff

It’s all about the numbers:

A majority of Americans say that if the country goes over the fiscal cliff on Dec. 31, congressional Republicans should bear the brunt of the blame, according to a new Washington Post-Pew Research Center poll, the latest sign that the GOP faces a perilous path on the issue between now and the end of the year.

While 53 percent of those surveyed say the GOP would (and should) lose the fiscal cliff blame game, just 27 percent say President Obama would be deserving of more of the blame. Roughly one in 10 (12 percent) volunteer that both sides would be equally to blame.

Kevin Drum can’t get over how lopsided these figures are:

The Post site has a tool that lets you look at various demographic subgroups, and it turns out that everyone would blame Republicans. I figured maybe old people would blame Obama instead. Nope. Southerners? Nope. White people? Nope. High-income people? Nope. Literally the only group that didn’t blame Republicans was… Republicans.

Politically speaking, President Obama’s main job is to keep things this way. Republicans pay a price for their anti-tax jihad only if the public blames them for the ensuing catastrophe. But if Obama sticks to reasonable asks—modest tax increases, modest spending cuts, and a debt ceiling increase—and pounds away at Republican intransigence, these numbers aren’t likely to shift much.

The problem with predicting the future

Michael D. Ward and Nils Metternich examine Nate Silver’s success this past election — his state-by-state electoral predictions were 50-for-50 — and conclude that his methodology can be extended to political events more generally:

Forecasting models need reliable measures of “things that are usefully predictive,” Ulfelder notes. Well, sure. Does this mean that reliability is at issue? Or that we are using data that are not “usefully” predictive? This is a curious claim, especially in light of the controversial nature of polls. Indeed, there exists five decades’ worth of literature that grapples with exactly those issues in public opinion. Take the recent U.S. election as an example. In 2012 there were two types of models: one type based on fundamentals such as economic growth and unemployment, and another based on public opinion surveys. Proponents of the former contend that the fundamentals present a more salient picture of the election’s underlying dynamics and that polls are largely epiphenomenal. Proponents of the latter argue that public opinion polling reflects the real-time beliefs and future actions of voters.

As it turned out, in this month’s election public opinion polls were considerably more precise than the fundamentals. The fundamentals were not always providing bad predictions, but better is better. Plus there is no getting around the fact that the poll-averaging models performed better. Admittedly, many of the polls were updated on the night before the election, though Drew Linzer’s prescient votamatic.org posted predictions last June that held up this November. To assess the strength of poll aggregation, we might ask how the trajectory of Silver’s predictions over time compares with the results, and there are other quibbles to raise for sure. But better is better.

When it comes to the world, we have a lot of data on things that are important and usefully predictive, such as event data on conflicts and collaborations among different political groups within countries. Is it as reliable as poll data? Yes, just so, but not more. Would we like to have more precise data and be able to have real-time fMRIs of all political actors? Sure, but it is increasingly difficult to convincingly argue that we don’t have enough data.
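
Poll aggregation itself is conceptually simple, whatever sophistication Silver and Linzer layer on top of it. A minimal sketch, not their actual models (the weights and half-life below are purely illustrative), is just a recency- and sample-size-weighted average:

```python
# A minimal sketch of poll aggregation -- not Silver's or Linzer's actual
# model. Each poll is weighted by sample size and down-weighted by age; the
# weighted mean is the aggregate estimate of a candidate's support.
import math

def aggregate_polls(polls, half_life_days=14.0):
    """polls: list of (share, sample_size, age_in_days) tuples."""
    num = den = 0.0
    for share, n, age in polls:
        # sqrt(n) mirrors the 1/sqrt(n) shrinkage of a poll's standard error;
        # the exponential term halves a poll's weight every half_life_days.
        w = math.sqrt(n) * 0.5 ** (age / half_life_days)
        num += w * share
        den += w
    return num / den

# Three hypothetical polls of one race: (reported share, sample size, days old).
polls = [(0.52, 800, 2), (0.49, 1200, 7), (0.51, 600, 15)]
print(f"aggregate estimate: {aggregate_polls(polls):.3f}")
```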

The problem with this line of reasoning resembles one that other political writers have raised about a proliferation of Nate Silver types: the more these analysts come to be seen as the experts, the less incentive firms have to pay for, and produce, the underlying data the analysts rely on.

In this case, the issue is that if predictions of world political events begin to look anywhere near as prescient as Nate Silver’s election forecasts have so far, people will start to follow them rabidly, base personal and business decisions on the numbers, and so on. At a large enough scale, that behavioral reaction feeds back into the predictions themselves and can degrade their accuracy, especially if the forecasts bear on the market prices of assets or something similar. Over a long enough time period, this feedback could doom the forecasts to irrelevance, or at the very least produce extremely uneven results.
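
To see how that feedback could play out, here is a purely hypothetical toy model: a forecaster publishes a confident prediction, supporters of the predicted winner respond by staying home, and the prediction erodes, or outright reverses, as the behavioral reaction strengthens. None of the numbers mean anything; the point is only the direction of the effect.

```python
# Toy model of forecast feedback -- entirely illustrative, not drawn from any
# real forecasting system. The leader's turnout falls as published forecast
# confidence rises, so a confident forecast can undermine itself.
def realized_share(p_true, forecast_conf, k):
    """Leader's vote share after complacency: turnout drops by k*(conf - 0.5)."""
    turnout = 1.0 - k * (forecast_conf - 0.5)
    votes = p_true * turnout
    return votes / (votes + (1.0 - p_true))

p_true = 0.53  # leader's support if everyone voted
for k in (0.0, 0.2, 0.4):  # strength of the behavioral reaction
    result = realized_share(p_true, forecast_conf=0.9, k=k)
    print(f"reaction strength k={k}: forecast 0.530, realized {result:.3f}")
    # At k=0.4 the "sure winner" actually loses (realized share ~0.486).
```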

Who do you think will win the election?

Justin Wolfers and David Rothschild are coming out with a paper called “Forecasting Elections: Voter Intentions versus Expectations.” In it, they explain that voters predict election outcomes more accurately when they are asked who they think will win, as opposed to who they intend to vote for. The authors’ takeaway?

The answers we receive from the expectation question are about as informative as if they were themselves based on a personal poll of approximately twenty friends, family, and coworkers. This “turbocharging” of the effective sample size makes the expectation question remarkably valuable with small sample sizes. Moreover, because our model gives insight into the correlation between voting expectations and intentions, even samples with a strong partisan bias can be used to generate useful forecasts.

The key insight from our study—that analysts should pay greater attention to polls of voter forecasts—in fact represents a return to historical practice. In the decades prior to the advent of scientific polling, the standard approach to election forecasting involved both newspapers and business associations writing to correspondents around the country, asking who they thought would win. We are, in many respects, recommending a similar practice. Having shown the usefulness of this approach for forecasting elections, we hope that future work will explore how similar questions can be used to provide better forecasts in a variety of market research contexts, from forecasting product demand to predicting electoral outcomes to better measuring consumer confidence.
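
The “turbocharging” arithmetic is worth spelling out. If each expectation answer is as informative as an intention poll of roughly twenty people, then n expectation respondents behave like an intention sample of about 20n, so the margin of error shrinks by the square root of 20, or about 4.5. A back-of-the-envelope sketch under simple-random-sampling assumptions (the sample size and the 50/50 split below are hypothetical; the factor of twenty is the paper’s claim quoted above):

```python
# Back-of-the-envelope for the "turbocharging" claim: if each expectation
# answer is as informative as an intention poll of ~20 people, n expectation
# respondents act like an intention sample of roughly 20n. Numbers here are
# hypothetical; the factor of 20 comes from the passage quoted above.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

n, boost = 500, 20
moe_intention = margin_of_error(n)            # "who will you vote for?"
moe_expectation = margin_of_error(boost * n)  # "who do you think will win?"

print(f"intention question   (n={n}): +/- {moe_intention:.1%}")
print(f"expectation question (n={n}, ~{boost * n} effective): +/- {moe_expectation:.1%}")
print(f"shrinkage: {moe_intention / moe_expectation:.2f}x (= sqrt({boost}))")
```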

I’ve noticed before that polling analysis usually fails to take the wording of the question into account. If Wolfers and Rothschild’s reasoning bears out, this will be a very useful start toward rethinking exactly how pollsters should frame their questions if they want the most accurate results (which, quite frankly, is not the objective of many organizations that conduct polls).