In recent days, journalists, bloggers, and commentators have lined up to bash a fictitious conventional wisdom about election forecasting.
The premise for many of these statements is that political scientists believe that campaigns and other non-economic factors don't matter in presidential elections. For instance, The Daily Beast's Michael Tomasky describes "the political-science theory of presidential elections and economic determinism," under which outcomes are "pretty much strictly a function of economic conditions." At Real Clear Politics, Sean Trende states that Emory's Alan Abramowitz thinks "presidential elections can be reduced to a simple equation." And in a Bloomberg View column, Ronald Klain, the former chief of staff for Al Gore and Joe Biden, writes that "a group of political scientists, mathematicians and scholars have argued that a handful of factors determine the outcome of presidential elections, irrespective of the campaigns."
But as the political scientists John Sides and Seth Masket have already pointed out, these are straw men. Very few political scientists think campaigns don't matter or that elections can be perfectly forecast in advance. In an earlier post, Sides expressed this point well:
Because people continually overestimate the effect of campaigns, this blog holds up the other end of the dialectic by emphasizing the economy and defending those who do. But plenty of research has identified the effects of campaigns too... it's time to abandon this whole it's-either-the-economy-or-the-campaign dichotomy...
Even New York Times blogger Nate Silver, who has become something of a professional critic of political scientists, concedes the point in a post today, writing that the view Klain ascribes to forecasters "is certainly not the majority opinion within the discipline."
What's bizarre about this flurry of articles is that election forecasting is such an obscure topic in the political press. The conventional wisdom that presidential election outcomes are largely unpredictable in advance and that the outcomes we observe are primarily the result of campaign strategy is vastly more prominent. So why is everyone so worried about forecasting models?
A related straw man is the idea that political scientists think their models make perfectly precise predictions. Here, for instance, is what Silver wrote:
[P]olitical scientists as a group badly overestimate how accurately they can forecast elections from economic variables alone. I have written up lengthy critiques of several of these models in the past...
The three posts that Silver links to critique a historian's non-quantitative model which few political scientists would endorse, Ray Fair's forecasting model, and the "Bread and Peace" model of Douglas Hibbs. Only the last two are representative of the field, and political scientists have criticized Fair's model at length in the past (PDFs).
More generally, as Jacob Montgomery and I wrote last week, there is certainly reason to be concerned that these models are too confident about their predictions, but most sophisticated quantitative researchers in political science are aware of these concerns and do not interpret the forecasts so literally. Moreover, we can evaluate which models perform well in making forecasts beyond the data used in estimation and combine their predictions to create more accurate forecasts with appropriate estimates of uncertainty, as Montgomery and his co-authors do in their article (PDF). Silver dismisses this exercise as a "game show" and disparages any attempt to evaluate the models by their future performance -- "most of how they perform over the next few elections will be determined by luck" -- but we can and should aspire to better.
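To make the combination idea concrete, here is a minimal sketch of weighting several models' forecasts by their past out-of-sample accuracy. It is a deliberately simplified stand-in for the kind of model averaging Montgomery and his co-authors describe, not their actual method, and every number in it is invented for illustration.

```python
# Toy illustration: weight competing forecasting models by how well they have
# predicted past elections out of sample, then combine their new forecasts.
# All figures below are made up for the example.
import numpy as np

# Hypothetical out-of-sample forecasts (incumbent-party vote share) from
# three models across five past elections, plus the actual results.
forecasts = np.array([
    [52.1, 50.3, 48.9],   # each row: one election; columns: models A, B, C
    [46.4, 47.8, 45.9],
    [53.0, 51.2, 54.4],
    [49.5, 48.1, 50.6],
    [51.8, 52.9, 50.2],
])
actual = np.array([51.2, 46.5, 53.7, 48.8, 52.4])

# Weight each model inversely to its mean squared out-of-sample error.
mse = ((forecasts - actual[:, None]) ** 2).mean(axis=0)
weights = (1 / mse) / (1 / mse).sum()

# Combine the models' forecasts for a new election year.
new_forecasts = np.array([50.5, 49.2, 51.8])   # hypothetical forecasts for 2012
combined = weights @ new_forecasts
print(f"weights: {np.round(weights, 3)}, combined forecast: {combined:.1f}")
```

The point of the exercise is not the particular weighting rule but the discipline it imposes: models earn influence only by predicting elections they were not fit to.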
Here's a question. Consider two approaches:
1. Fit a model to the last 17 elections.
2. Fit a group of models to the 16 elections prior to the most recent one. Then choose a weighted average of this group of models based on how well each one fits the most recent election.
If I understand correctly, #2 is Montgomery's approach (or some simplified version of it). It seems to me that the combination model arising from #2 can be looked at as a model based on the 17 previous elections. So, in principle, is approach #2 fundamentally different from approach #1? I don't know the answer.
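One way to see the two approaches side by side is a rough sketch like the following, using made-up data for 17 elections and a single economic predictor. The weighting step is a deliberately simplified caricature of approach #2, not Montgomery's actual method.

```python
# Contrast: (1) one regression fit to all 17 elections vs. (2) several models
# fit to the first 16, weighted by how well each predicts the 17th.
# All data here are simulated purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
growth = rng.normal(2.0, 2.0, 17)                 # hypothetical economic growth
vote = 50 + 1.5 * growth + rng.normal(0, 2, 17)   # hypothetical vote share

# Approach 1: a single regression on all 17 elections.
slope1, intercept1 = np.polyfit(growth, vote, 1)

# Approach 2: fit two candidate models to the first 16 elections, then weight
# them by their error on the 17th.
m_linear = np.polyfit(growth[:16], vote[:16], 1)   # linear in growth
m_const = np.array([0.0, vote[:16].mean()])        # intercept-only benchmark
models = [m_linear, m_const]
errors = np.array([abs(np.polyval(m, growth[16]) - vote[16]) for m in models])
weights = (1 / errors) / (1 / errors).sum()

# The combined prediction depends on the first 16 elections through the fits
# and on the 17th through the weights, so it too uses all 17 elections.
new_growth = 1.0
combined = sum(w * np.polyval(m, new_growth) for w, m in zip(weights, models))
print(f"approach 1: {np.polyval([slope1, intercept1], new_growth):.1f}, "
      f"approach 2 combination: {combined:.1f}")
```

In the commenter's sense, then, the combination in #2 is indeed a function of all 17 elections; the practical difference from #1 is how the most recent observation enters (as a validation point for weighting rather than as one more data point in a single fit).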
I have no confidence in the "appropriate estimates of uncertainty." I think these models are all based on the convenient, but unrealistic, assumption that there's no change in the way various factors affect the election. IMHO the change in the underlying relationships is the greatest source of uncertainty. Unfortunately, there's no way to measure it.
Given the (presumed) changes in election causality, it would seem to make sense to give greater weight to more recent years. Maybe political scientists are already doing this.
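As a rough illustration of that suggestion, the sketch below fits the same kind of made-up data with equal weights and with weights that decay for older elections. The data and the decay rate are invented, and no published model is claimed to work this way.

```python
# Weighted fit in which older elections get progressively less weight.
# The data and the half-life are made up purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1948, 2012, 4)                   # 16 hypothetical elections
growth = rng.normal(2.0, 2.0, len(years))          # hypothetical economic growth
vote = 50 + 1.5 * growth + rng.normal(0, 2, len(years))

# Halve an election's weight for every additional 20 years of age.
weights = 0.5 ** ((years.max() - years) / 20)

# np.polyfit multiplies each residual by w before squaring, so pass sqrt(weights).
slope_eq, _ = np.polyfit(growth, vote, 1)                        # equal weights
slope_wt, _ = np.polyfit(growth, vote, 1, w=np.sqrt(weights))    # recent-heavy
print(f"equal-weight slope: {slope_eq:.2f}, recency-weighted slope: {slope_wt:.2f}")
```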
Posted by: David in Cal | November 18, 2011 at 10:51 AM
In this post, Bob Somerby convincingly argues that Obama is getting much more favorable media treatment than Al Gore did. Somerby calls this bias against Gore; conservatives call it bias toward Obama. Either way, I think there is a glaring difference in the media treatment of these two men.
ISTM this difference in media bias is something that should be taken into account when using the 2000 results to predict 2012.
Posted by: David in Cal | November 18, 2011 at 12:21 PM