
April 16, 2012

Comments

Well said, Professor Nyhan, and kudos on considering and writing about these issues.

With respect to pass/fail, let me add the suggestion that students be permitted to take a limited number of courses pass/fail even after the first semester (perhaps one per semester or one per year), as at Princeton (my alma mater). The pass/fail option opens the door for people to take courses they otherwise might not. Upperclassmen especially benefit, because they can take an entry-level course without fear that the notoriously more stringent entry-level grading will hurt their GPAs. I took Architecture 101 pass/fail as a junior; a roommate majoring in electrical engineering took Japanese Literature in Translation pass/fail. Neither of those was a course we'd have considered without the pass/fail option.

With respect to academic journals, your suggestions are innovative and useful. I'd like to go further. Journals are paid for now by subscriptions that are funded by universities. Why not eliminate the pretense of subscriptions? Let the universities fund the journals directly, and have their content be available for free, ungated, on the Internet. Charge to provide printed copies, in an amount that will cover the cost of printing.

Your suggestions about confirmation bias and replications are great, but there also needs to be a change in attitudes about granting tenure. Tenured faculty should regard studies that fail to prove a hypothesis as no less tenure-worthy than studies that do. And performing replication analyses and repeating others' experiments should be considered as much a part of faculty members' professional responsibilities as membership on departmental committees and advising students are. Make the replication of research findings part of the job description of a quantitative analyst. Tenured faculty should not only credit such efforts in their judgments about granting tenure, they should demand them--and not only should they demand such research by junior faculty, they should lead by example and do replication studies themselves.

Finally, as long as we're making academic reforms, let's cast a glance at the elephant in the room. Not always, but far too often, quantitative social science research applies exquisitely sophisticated statistical techniques to data of questionable validity (e.g., questionnaires with poorly phrased questions or unintentional framing, survey participants chosen more for convenience than representation of the general public) or draws conclusions that are logically flawed—or both. The precision of the statistics tends to distract attention from the underlying weaknesses and lack of rigor. Then the conclusions, typically well-hedged by caveats and acknowledgments of the need for further research, take on a life of their own. They’re cited as general propositions, ignoring the caveats, the questionable data, the logical leaps. And as Brendan implicitly acknowledges, rarely is the research ever replicated.

What results is a fine intellectual exercise, a Glasperlenspiel (h/t Hermann Hesse), but one whose ability to garner meaningful insights about the real world is in the gravest doubt. And the academic reform to deal with this problem? More rigor, more skepticism, less professional courtesy toward others' research flaws. Without that, it's not social science, it's social "science."

Fascinating series of suggestions from Brendan and Rob. Here are some "modest proposals" of my own.

1. IMHO the biggest problem with higher education today is that it's too expensive. So,
-- Restructure how teaching is done, taking advantage of computers and recorded lectures.
-- Reduce the faculty's research load while increasing their teaching load.
-- Take advantage of low-cost adjunct faculty to do some teaching.
-- Eliminate tenure, and get rid of deadwood.

These changes should allow the full-time faculty to be cut to less than a quarter of its current size. Also, make a similar reduction in administrative staff.

2. The primary basis for evaluating faculty should be teaching, rather than research. Today, faculty are substantially evaluated based on publications and grants. This practice outsources personnel decisions that are the responsibility of the institution itself. Instead, let each Department do its own performance review, and base it primarily on how effectively a faculty member teaches.

3. Eliminate affirmative action. That will help reduce the number of under-prepared students and will help reduce the need for grade inflation.

4. Drop courses that are primarily political indoctrination or fluff.

I think the University of Phoenix more-or-less operates along the lines of these suggestions, although they'd be unthinkably radical for a traditional college or university.

I share your notion of accepting "proposals to run an experiment" and fleshed out some additional aspects in this blog post some months ago:
http://groups.csail.mit.edu/haystack/blog/2011/02/17/a-proposal-for-increasing-evaluation-in-cs-research-publication/

I don't know if I agree with the pre-trial acceptance. This could potentially tilt researcher incentives towards focusing on trial design rather than on implementation and data quality control. You could then end up with a lot of very well designed, but poorly implemented, trials.

The comments to this entry are closed.