
May 11, 2008

Comments

Didn't they try that with Head Start? If I recall correctly, the results of a longitudinal study showed no significant benefit from the Head Start program after even a couple of years had elapsed. That resulted not in discontinuation or major redesign of the Head Start program but in calls in Congress and on the left to discontinue the testing. So much for the experimental revolution.

Rob -- You should stop reading so much Freakonomics.

Head Start has been retooled a whole bunch of times (most prominently/recently in 2006) to increase effectiveness. It's also super duper effective.

Also, I don't really get Brendan's point here; studies and experiments evaluating the effectiveness of public policy and social welfare programs are constantly being done.

Dave, the study you cited showed immediate results of the Early Head Start (not Head Start) program. But even as to Early Head Start, do you know of any longitudinal study that shows that the gains achieved in the program persisted after the passage of time; i.e., do Early Head Start participants do better than non-participants by the second or third or fourth grade?

How effective Head Start or Early Head Start is isn't really the point, though. The point is that randomized evaluations of any controversial program are likely to run into the same obstacles that randomized evaluations of Head Start have encountered. Methodology will be challenged (and it's perfectly appropriate that it should be). Competing studies will be introduced. And program advocates and opponents will line up in their expected positions, battling over any results that are inconsistent with their original policy preferences. That's simply the way of the world, and while Brendan's Platonic desire for quantitative analyst-kings is understandable, it's not likely to achieve his desired rational result.

P.S. I'm afraid I haven't read Freakonomics. Sorry.

In my experience, Head Start haterade was first introduced to the mainstream discussion through Freakonomics, a crappy crappy book that was essentially a large collection of annoying pop-iconoclasm better suited for a column at Slate than anything else.

Here's a study on the long-term effects of Head Start. IQ boosts from the program dissipated relatively early. However, there's a strong correlation between attending Head Start and increased graduation rates, increased college attendance, and even decreased criminal activity. That's why the program was reauthorized with like 400+ votes in the House.

All that said, you're absolutely right about the obstacles encountered by evaluations of gov. programs, the challenges to methodology, etc. etc.

What I'm saying is we're already doing that, all the time, every day. These programs are evaluated, congress authorizes studies, politicians argue over these studies, object to the methodology of those that work against their ideology, people get papercuts, charts are laminated, card stock covers are printed. We already do this.

There's an entire community of Hollywood actors who first got their big breaks arguing over such studies in walk-on roles on The West Wing.

Just noticed that study I linked to is non-experimental (which is a little off from Brendan's original post).

That said, the larger point stands; Congress already authorizes lots and lots of randomized evaluations. They don't always do much, for the reasons Rob cites. But we're already on that bandwagon.

For more info, here's the Office of Management and Budget's Program Assessment Rating Tool, which, in the past four or five years, has essentially transformed the OMB into the Department of Randomized Evaluations of Public Policy Programs.

Guys, thanks for the useful debate. I'm aware that there have been randomized evaluations of a handful of social programs such as Head Start and labor training programs. But there's a lot more that hasn't been systematically evaluated. For instance, many health care programs and procedures have never been tested experimentally. There are ethical and logistical obstacles that need to be overcome, but it can be done.

Also, the government could require that grantees who provide social services be independently evaluated in random experiments, which would expand the scope of evaluation even further.

IMHO Brendan has put his finger on the reason why free enterprise economic systems outperform government-owned or government-controlled ones. The government will never join the experimental revolution. The key impediments are not technical. Basically, accurate program evaluation is not in the interest of most politicians nor most program beneficiaries. The beneficiaries want more goodies and the politicians want political support. These are best obtained by unrestricted grants of money -- the more the better.

For example, look at the grief George Bush took when he fought to have testing as a part of No Child Left Behind. Teachers unions hate testing. The Education Establishment doesn't want testing. They want as much federal money as possible, with no strings attached.

I wish that were true, David, but the reality is that the private sector almost never does randomized evaluations either and for similar reasons -- powerful constituencies don't want to find out that what they do doesn't work.

Brendan,

I was talking this over with a coworker this morning, who pointed me to this study:

Congress and Program Evaluation: An Overview of Randomized Controlled Trials (RCTs) and Related Issues

A pretty interesting look into the ways in which the federal government has turned to randomized evaluations for program evaluation, benefits and limitations, etc. etc.

A look at ExpectMore.gov is a sobering experience. More than a thousand programs are listed there. A thousand programs. One gets a sense of the mischief that can result when well-intentioned and/or highly accommodating legislators are given the opportunity to show their generosity by spending Other People's Money.

While the program evaluations at the site are probably a useful first step, the criteria are often questionable (for example, the Head Start criteria address only immediate improvement, not improvement that is measurable two years or more after the student has left the program) and the evaluation is rarely the sort of randomized study Brendan advocates.

Returning to Head Start, I was too cheap to spend the five bucks to download the report cited by Dave, but I surmise that as a non-randomized study it's subject to the flaw that the more goal-oriented and supportive families who enroll their children in Head Start are more likely to see those children do well in school than the less goal-oriented and supportive families whose children do not attend Head Start. I also wonder whether the subjects' participation in Head Start was verified or was simply determined by the subjects' recollection, which tends to be imperfect and could well be reported more often by the more successful subjects.
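That self-selection flaw is easy to see in a toy simulation (everything here is hypothetical -- made-up numbers, not actual Head Start data): give the program a true effect of exactly zero, let more supportive families be more likely to enroll, and a naive comparison of enrollees to non-enrollees still shows a positive gap that random assignment would erase.

```python
import math
import random

random.seed(1)
N = 100_000

def simulate(randomized):
    """Return the treated-minus-control mean outcome gap.

    The program's true effect is ZERO: outcomes depend only on
    family 'support' (a confounder) plus noise.
    """
    treated, control = [], []
    for _ in range(N):
        support = random.gauss(0, 1)            # family engagement (unobserved)
        if randomized:
            enrolls = random.random() < 0.5     # coin-flip assignment
        else:
            # self-selection: supportive families enroll more often
            enrolls = random.random() < 1 / (1 + math.exp(-support))
        outcome = support + random.gauss(0, 1)  # note: no program effect at all
        (treated if enrolls else control).append(outcome)
    return sum(treated) / len(treated) - sum(control) / len(control)

print(f"observational gap: {simulate(False):.2f}")  # spurious positive gap
print(f"randomized gap:    {simulate(True):.2f}")   # near zero
```

The observational comparison "finds" a benefit that is entirely selection bias, while randomization breaks the link between family support and enrollment and correctly reports roughly nothing -- which is the whole case for the randomized designs Brendan is advocating.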

Illustrating Brendan's point, in my career I have worked for two large companies where powerful internal constituencies prevented them from finding out what did and didn't work. However, after many years of bad results, these two companies were eventually taken over by other companies in their field.

Also, randomized evaluations are a great method of finding out what works, but they're not the only method. Some organizations do a better job than others of paying attention to what is or isn't effective.
