Academia tends to be slow to embrace change, but here are a few ideas that I think are worth considering for improving how we evaluate students, conduct research, and run our journals.
1. The pass/fail first semester
Two of the most significant problems we face in higher education are grade inflation and underprepared students. There are no easy answers to either problem, but one of the best approaches I've seen is the pass/fail first semester used at Swarthmore College (my alma mater). Let me quote from a blog post written by a first-year student there last fall, which I just came across on Google -- it's completely consistent with my experience:
The first semester for every first-year at Swat is pass/fail. I love this system, and it’s one of so many reasons why the approach to academics at Swarthmore is fantastic.
Taking classes pass/fail deemphasizes the importance of grades. That seems obvious, and we heard that over and over again from the administration, our advisors, and upper class students. I didn’t really internalize the significance of that, however, until just recently…
The pass/fail semester helps first-years adjust to college. With some stress removed from academics, there’s more time to focus on other aspects of college: meeting new friends, joining interesting clubs, and trying not to get lost on the way to the fitness center (I had particular trouble with that last one). I’m not saying that this first semester is a breeze, or that it should be. It’s important to learn study habits that work for college, and figuring out how to manage your time is obviously essential (for example, spending one hour online-shopping for every half hour spent reading did not end up working for me). What’s great is being able to adjust without having to simultaneously stress out about grades.
Grades will come next semester, but the class of 2015 will tackle our workload with a greater appreciation for the material learned, and an understanding of the importance of the learning process, not just the grade received at the end of the year. I’m so glad Swarthmore gave us this adjustment period.
The pass/fail semester helps students get excited about learning for learning's sake before worrying about grades, and it provides underprepared students with a chance to catch up before their performance is recorded on their permanent transcript. It's worth considering whether the practice should be adopted both here at Dartmouth and elsewhere in higher education.
2. The pre-accepted article
Academics face intense pressure to publish new findings in top journals. In practice, those incentives create massive publication bias. Social scientists tend to think of medical and scientific journals as being more rigorous, but many of the results published even in those journals fail to replicate. While some fraud may occur, the problem is more likely to be one of self-deception -- as human beings, we're simply too good at rationalizing choices that produce the results we want.
One response to this concern is preregistration of experimental trials -- a practice that is mandated in some areas of medicine and is beginning to be adopted voluntarily by some social science researchers conducting field experiments (particularly in development economics). The idea is that the author has publicly stated his or her hypotheses before the data have been collected, making the results less likely to be spurious. The best example of this that I know of is the Oregon Health Insurance Experiment, which publicly archived its analysis plan before any data were available and explicitly labeled all unplanned analyses in its manuscript (PDF).
Unfortunately, preregistration alone will not solve the problem of publication bias. First, authors have little incentive to engage in the practice unless it is mandated by regulators or the journal to which they are submitting. In addition, authors may still make arbitrary choices in how they code, analyze, and present the results of preregistered trials. But most fundamentally, if trial results are more likely to be published when they deliver statistically significant results, then publication bias is still likely to ensue.
In the case of experimental data, a better practice would be for journals to accept articles before the study is conducted. The article would be written up to the point of the results section, which would then be populated using a pre-specified analysis plan submitted by the author. The journal would then allow for post-hoc analysis and interpretation by the author, which would be labeled as such and distinguished from the previously submitted material. By offering such an option, journals would create a positive incentive for preregistration that would avoid file drawer bias. More published articles would have null findings, but that's how science is supposed to work. A shift to a pre-accepted article system would also create healthy pressure on authors, editors, and reviewers to (a) focus on topics where we care about the null hypothesis; (b) keep articles short; and (c) make sure studies have enough statistical power to have a high likelihood of capturing the effect of interest (if real).
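The power requirement in (c) is easy to check before a study is accepted. As a minimal sketch (using the standard normal-approximation formula for a two-sample comparison; the function name and defaults here are illustrative, not from any particular journal's guidelines), a reviewer could verify the proposed sample size like this:

```python
from math import ceil
from statistics import NormalDist

def required_n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample comparison
    of means (normal approximation), given a standardized effect size
    (Cohen's d), a two-sided significance level, and a target power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided test
    z_power = z.inv_cdf(power)          # quantile corresponding to target power
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# A "medium" effect (d = 0.5) at conventional thresholds needs roughly
# 60-some subjects per group; smaller effects need far more.
print(required_n_per_group(0.5))
print(required_n_per_group(0.2))
```

The point of a calculation like this at the review stage is that an underpowered design can be rejected or revised before any data are collected, rather than rationalized after the fact.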
3. The replication audit
Ideally, every journal should follow the practice of the American Economic Review and require authors to submit a full replication archive before publication. However, my colleague Brian Greenhill has suggested a way that journals or professional associations could go even further to encourage careful research practice: conduct replication audits of a random subset of published articles. At a minimum, these audits would verify that all the results in an article could be replicated. In some cases, they could go further and try to recreate the author's data and results from publicly available sources, re-run lab experiments, and so forth when possible. An audit system would of course work best for journals that require replication archives to be made available -- otherwise, it could discourage authors from sharing replication data.
4. A frequent flier system for journals
Journals depend on the free labor provided by academics in the peer review process. Reviewing is a largely thankless task whose burden falls disproportionately on prominent and public-minded scholars, who receive little credit for the work that they do. As a result, manuscripts are often stuck in review limbo for months, slowing the publication process and stalling both the production of knowledge and the careers of the authors in question. How can we do better?
One idea is to develop a points system for each journal analogous to frequent flier miles. Each review would earn a scholar a certain number of points, with bonuses awarded by editors for especially timely or high-quality reviews. Authors could then cash in those points when submitting to that journal in order to request a rapid review of their own manuscript. The journal would in turn offer those points to reviewers who review the manuscript quickly, helping to speed it through the process. The system would not be useful for reviewers who don't submit to the journal in question, but for reviewers and authors who interact with a journal over a period of decades, it could help provide greater incentives for rapid and thoughtful reviewing.
Update 4/27 10:16 AM: Please see my followup post for more on pre-accepted articles.
Also, it turns out that a large group of psychologists is conducting a collaborative replication audit of psychology articles published in top journals in 2008, called The Reproducibility Project -- see this article in the Chronicle of Higher Education for more about the project.
Finally, I recently discovered that the American Medical Association offers continuing medical education credits to reviewers for Archives of Internal Medicine who "have completed their review in 21 days or less with a rating of good or better." CME credits are presumably not as strong an incentive as faster review of one's own articles, but I assume they're better than nothing.