
June 22, 2010

Comments

I'm fascinated by the effort to rate pollsters. Over the years, we casualty actuaries have pondered a comparable task of rating insurance companies' estimates of ultimate loss.

E.g., an actuary may predict that his/her company's ultimate loss from claims arising from the BP oil spill will be $X. How close will the actual loss come to that estimate?

This is a difficult comparison, because the full ultimate loss won't be known for many years. Nevertheless, the IRS has used this sort of comparison to see if an insurance company has under-reported earnings by over-estimating ultimate loss.
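Just to make that comparison concrete, here is a toy calculation of how far a booked estimate of ultimate loss might land from the loss that eventually develops. The figures are made up for illustration and are not meant to reflect any actual reserve or claim.

```python
# Toy illustration of comparing an estimated ultimate loss to the loss that
# eventually develops. All dollar figures below are hypothetical.

estimated_ultimate = 1_200_000_000   # reserve the actuary booked
actual_ultimate = 1_450_000_000      # what the claims ultimately cost

error = actual_ultimate - estimated_ultimate
pct_error = error / actual_ultimate

print(f"Under-estimated by ${error:,} ({pct_error:.1%} of the actual loss)")
```

The catch, as noted above, is that the "actual" number on the right-hand side isn't known for many years.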

To someone not in the field, your objections aren't at all convincing. Or if they are, they're an attack on the basic idea of measuring polling accuracy.

Say you can't figure out whether a certain polling organization is any good. Doesn't it follow that you also can't figure out whether any of its polls were any good? If you can measure one, it seems like you can get the other too.

Or a different objection: if you can't measure how good a polling firm is in comparison to the competition, shouldn't the market be completely dominated by the lowest-cost players?

On point 1, we can measure, as Silver does, the extent to which a firm's pre-election polls accurately predicted the election outcome. That's one way to define whether a poll is "any good" (a toy version of that comparison is sketched below). Unfortunately, combining those results into a systematic measure of skill is difficult given that pollsters are non-randomly assigned to campaigns, that we observe relatively few campaigns for most polling firms, and that their work is subject to sampling error.

On point 2, there are many areas in which people pay a premium despite a lack of evidence that it leads to improved performance -- CEOs, for instance. All it takes is a perception that some firms are better than others. (The market may also respond to other factors such as polling firm prestige, connections, service quality, etc.)
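To make the point-1 comparison concrete, here is a minimal sketch of scoring each firm's pre-election polls against the eventual results. This is not Silver's actual methodology, and the firms, poll margins, and outcomes are invented for illustration.

```python
# Toy scoring of pre-election polls against election outcomes.
# Not Silver's methodology; all firms and numbers below are hypothetical.

polls = [
    # (firm, final poll margin in points, actual election margin in points)
    ("Firm A",  4.0,  6.5),
    ("Firm A", -2.0, -1.0),
    ("Firm B",  8.0,  6.5),
    ("Firm B",  3.5,  5.0),
]

errors = {}
for firm, poll_margin, actual_margin in polls:
    errors.setdefault(firm, []).append(abs(poll_margin - actual_margin))

for firm, errs in sorted(errors.items()):
    print(f"{firm}: mean absolute error = {sum(errs) / len(errs):.2f} points "
          f"over {len(errs)} polls")
```

Even in this toy form, the small number of polls per firm makes the resulting averages noisy, which is exactly the difficulty described above.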

A part of Political Science involves sophisticated analysis of political polls. If the skill of the pollsters can't be measured, then for all we know most polls may be mediocre. In that case, mightn't Political Science suffer from the GIGO problem?

In other words, those who analyze polls professionally implicitly assume that the polls meet some standard of accuracy and usefulness.

You mention the differences that make pollster rating harder than computing batting averages, but some forces work the other way. For example, at-bats produce binary outcomes (hit vs. no hit), which limits the information in each at-bat, while a poll yields an essentially continuous value with much less variance per observation, so each poll should be much more informative (a rough back-of-the-envelope comparison is below). I buy that non-random sampling can make things more difficult, but I'm not sure that, say, 20 polls by a small pollster would give less information than 100 at-bats by a called-up minor-league player.

(Though, I imagine that academic circles tend to use much stricter confidence levels than baseball managers, which might be a significant increase in difficulty in itself... =P )
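To put rough numbers on that intuition, here is a back-of-the-envelope sketch. It is an illustration only: the batting average, candidate support level, and sample sizes are assumed, and it treats each poll as a simple random sample.

```python
import math

# Illustrative comparison of per-observation noise (not a formal analysis).
# A batter's hit/no-hit outcomes are Bernoulli trials; a single poll already
# averages ~1,000 such trials, so its value is far less noisy per "observation".

p_hit = 0.300          # assumed true batting average
at_bats = 100
se_batting = math.sqrt(p_hit * (1 - p_hit) / at_bats)

p_support = 0.52       # assumed true candidate support
respondents = 1000
se_one_poll = math.sqrt(p_support * (1 - p_support) / respondents)

n_polls = 20
se_poll_average = se_one_poll / math.sqrt(n_polls)

print(f"SE of batting average after {at_bats} at-bats: {se_batting:.3f}")
print(f"SE of a single {respondents}-person poll:      {se_one_poll:.3f}")
print(f"SE of the average of {n_polls} such polls:        {se_poll_average:.3f}")
```

Of course this captures only sampling error; house effects and the non-random assignment of pollsters to races mentioned above are the harder part of the measurement problem.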
