My new column at CJR challenges the recent wave of misguided attacks on Nate Silver, whose estimates for the presidential race are generally consistent with other forecasting models and betting/futures markets. Here's how it begins:
Who will win the presidential election next Tuesday? Until recently, the market for analysis of questions like these has been dominated by mainstream political reporters and commentators. Their style leans heavily on qualitative impressions and hazy narratives. But as the audience for quantitative analysis of politics has grown, the establishment analysts have become increasingly defensive about their status.
The most well-known quantitative analyst of politics is Nate Silver, whose FiveThirtyEight blog now appears on the New York Times website. Even though his black box statistical models have not been publicly disclosed or scientifically validated, Silver’s analyses have acquired a talismanic quality among political junkies, particularly with nervous liberals who are reassured by his forecast of a likely Obama victory (current estimate: 72.9%).
Unfortunately, Silver has become the target of a vitriolic backlash from innumerate pundits whose market dominance is under threat as well as ill-informed conservative commentators who think Silver is somehow skewing the polls for partisan reasons.
Read the whole thing for more.
1. Brendan admits that Silver's approach is a "black box statistical model" and "not...scientifically validated." Yet he blasts those who criticize that model. (In fairness to Brendan, he also implicitly criticizes those who give this model a "talismanic quality".)
2. IMHO a poll is essentially a model predicting how the election would turn out if it were held during the sampling period. I call it a model because the surveys generally include various sampling and/or adjustment schemes. Adjustments are essential, since the response rate is so low. Simply contacting people at random and using the raw results would likely introduce bias. (See the toy sketch below for the kind of adjustment I mean.)
So, which is more reliable -- Gallup showing Romney at +5 or an academic model showing an Obama victory? Brendan's a political scientist, so it's natural for him to prefer the political science studies. I'm not so sure. Gallup has been doing this for a long, long time....
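To make that concrete, here is a minimal post-stratification sketch in Python. Every number is invented purely for illustration, and this is not Gallup's or Silver's actual methodology; it just shows how reweighting a sample toward assumed population shares can move the headline number.

    # Toy post-stratification: reweight raw poll responses so the sample's
    # demographic mix matches an assumed population mix. All numbers are
    # invented for illustration; this is not any pollster's actual method.

    # Raw sample: (share of respondents, support for candidate A) by age group.
    sample = {
        "18-29": (0.10, 0.60),
        "30-64": (0.55, 0.50),
        "65+":   (0.35, 0.42),
    }

    # Assumed population shares (e.g. taken from census figures).
    population = {"18-29": 0.22, "30-64": 0.55, "65+": 0.23}

    # Unadjusted estimate: average the raw responses as collected.
    raw = sum(share * support for share, support in sample.values())

    # Adjusted estimate: weight each group by its population share instead of
    # its sample share (equivalently, give each respondent in group g the
    # weight population[g] / sample_share[g]).
    adjusted = sum(population[g] * support for g, (_, support) in sample.items())

    print(f"raw estimate for A:      {raw:.3f}")       # 0.482
    print(f"adjusted estimate for A: {adjusted:.3f}")  # 0.504

Young voters are under-represented in this made-up sample, so the adjustment moves candidate A from just under to just over 50 percent -- which is why the choice of weighting targets matters so much when pollsters and modelers disagree.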
3. As Brendan points out, the probabilistic nature of Silver's prediction means that a Romney victory would not prove the model wrong. But a theory that cannot be falsified isn't science.
In fact, if Silver or Gallup or any of the others miss badly, I'd expect them to modify their models and survey procedures. So, no matter what happens in 2012, in 2016 we'll be presented with new models and surveys that either succeeded in 2012 or didn't fail in 2012. And yet some of the 2016 models and surveys will get it wrong.
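For what it's worth, there is a standard way to put probabilistic forecasters to a test over many races rather than one: a proper scoring rule such as the Brier score. A single miss can't falsify a 72.9% forecast, but a forecaster who scores no better than a coin flip across many elections is, in effect, being falsified. Here is a minimal sketch with invented forecasts and outcomes (nothing here is Silver's actual record):

    # Toy Brier-score check of probability forecasts. All forecasts and
    # outcomes below are invented for illustration only.

    def brier(forecasts, outcomes):
        """Mean squared error of probability forecasts (lower is better)."""
        return sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(forecasts)

    # Hypothetical win probabilities for candidate A in ten races, and the
    # outcomes (1 = A won, 0 = A lost). Race 3 is a "miss" despite a 75% forecast.
    forecasts = [0.73, 0.60, 0.75, 0.90, 0.55, 0.30, 0.20, 0.85, 0.65, 0.40]
    outcomes  = [1,    1,    0,    1,    1,    0,    0,    1,    1,    0]

    baseline = [0.5] * len(outcomes)  # a "coin flip" forecaster

    print(f"model Brier score:     {brier(forecasts, outcomes):.3f}")  # 0.144
    print(f"coin-flip Brier score: {brier(baseline, outcomes):.3f}")   # 0.250

Over enough races, consistently poor scores would count against a model even if no single outcome can.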
Posted by: David in Cal | October 30, 2012 at 06:13 PM
Great article! I remember a personal friend stating that Nate Silver got Question 1 in Maine (the gay marriage bill) wrong. I think Mr. Silver gave it around a 75% chance of failing and then it passed. He never claimed it WOULD fail; he claimed a 75% chance that it would fail.
Posted by: JP | October 31, 2012 at 08:07 PM
Nate actually predicted the Maine vote would go exactly how it went. He gave his stats, but in the end he said it would lose by almost exactly the margin it lost by.
Posted by: Nancy | November 02, 2012 at 11:03 AM
A Canadian's view of Silver's model at http://www2.macleans.ca/2012/11/04/tarnished-silver-assessing-the-new-king-of-stats/
Posted by: David in Cal | November 05, 2012 at 04:22 PM