UCLA political scientist Tim Groseclose and Missouri economist Jeff Milyo have published a study (PDF) alleging liberal media bias that is receiving a lot of attention, including a link on Drudge. But you should be wary of trusting its conclusions for reasons that I tried to explain to Groseclose after he presented the paper at Duke in fall 2003.
First, here's a summary of the study's methodology from the UCLA press release:
Groseclose and Milyo based their research on a standard gauge of a lawmaker's support for liberal causes. Americans for Democratic Action (ADA) tracks the percentage of times that each lawmaker votes on the liberal side of an issue. Based on these votes, the ADA assigns a numerical score to each lawmaker, where "100" is the most liberal and "0" is the most conservative. After adjustments to compensate for disproportionate representation that the Senate gives to low-population states and the lack of representation for the District of Columbia, the average ADA score in Congress (50.1) was assumed to represent the political position of the average U.S. voter.
Groseclose and Milyo then directed 21 research assistants -- most of them college students -- to scour U.S. media coverage of the past 10 years. They tallied the number of times each media outlet referred to think tanks and policy groups, such as the left-leaning NAACP or the right-leaning Heritage Foundation.
Next, they did the same exercise with speeches of U.S. lawmakers. If a media outlet displayed a citation pattern similar to that of a lawmaker, then Groseclose and Milyo's method assigned both a similar ADA score.
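To make the mechanics concrete, here is a rough sketch of the matching idea. The citation counts below are invented, and the paper's actual estimator is a maximum-likelihood model rather than this naive nearest-neighbor comparison; the sketch is only meant to show the shape of the inference.

```python
# A toy illustration of the citation-matching idea. All counts are invented,
# and the paper's actual estimator is a maximum-likelihood model, not this
# naive nearest-neighbor comparison.

import numpy as np

# Hypothetical floor-speech citation counts over four think tanks,
# keyed by each lawmaker's ADA score.
lawmaker_citations = {
    90.0: np.array([30, 2, 25, 3]),   # a liberal member
    50.0: np.array([20, 15, 8, 10]),  # a centrist member
    10.0: np.array([3, 35, 1, 28]),   # a conservative member
}

# Hypothetical citation counts from a media outlet's news stories.
outlet_citations = np.array([25, 10, 12, 8])

def to_shares(counts):
    """Convert raw counts into a citation-share distribution."""
    return counts / counts.sum()

def estimate_ada(outlet_counts):
    """Assign the outlet the ADA score of the lawmaker whose citation
    distribution most closely resembles the outlet's."""
    outlet_shares = to_shares(outlet_counts)
    return min(
        lawmaker_citations,
        key=lambda ada: np.linalg.norm(
            to_shares(lawmaker_citations[ada]) - outlet_shares
        ),
    )

print(estimate_ada(outlet_citations))  # 50.0 with these made-up numbers
```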
In short, the underlying assumption is that, if the press is unbiased, then media outlets will cite think tanks in news reporting in a fashion that is "balanced" with respect to the scores assigned to the groups based on Congressional citations. Any deviation from the mean ADA score of Congress is defined as "bias." But is that a fair assumption?
In particular, the paper's methodology doesn't allow for two important potential differences between the processes generating news citations and floor speech citations:
(1) Technocratic centrist to liberal organizations like Brookings and the Center on Budget and Policy Priorities tend to have more credentialed experts with peer-reviewed publications than their conservative counterparts. This may result in a greater number of citations by the press, which seeks out expert perspectives on the news, but not more citations by members of Congress, who generally seek out views that reinforce their own.
To illustrate, assume that there are two kinds of political stories. In the first, the press interviews think tank experts about policy debates in a "he said, she said" framework. We'll stipulate that these stories have a distribution of citations identical to that of Congress. But let's also assume that, under the norms of journalism, the press is expected to consult technical experts or scholarship about the trends and developments it reports on, and that these experts, when cited, are not always "balanced" by an opposing expert. As a result, "expert" think tank citations are less frequently set off with a balancing quote or argument from the other side.
It follows from these premises that if there are more generally recognized technical experts on the center-to-left side of the spectrum, then a study using the Groseclose/Milyo methodology would place the media on the Democratic side of the Congressional mean even if members of the press chose technical experts to cite at random.
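A small simulation makes the point. This is only a sketch under the premises above: the expert pool's left skew, the story mix, and the scoring rule (averaging the ADA-style scores of cited groups, a crude stand-in for the paper's estimator) are all assumptions for illustration.

```python
# Toy simulation of point (1). Assumptions (all invented): half of stories are
# balanced "he said, she said" pieces that mirror Congress's citation mix;
# the other half cite one unopposed technical expert drawn at random from a
# pool that skews left-of-center. The outlet is scored by the average
# ADA-style score of the groups it cites -- a crude stand-in for the
# paper's estimator.

import random

random.seed(0)

CONGRESS_MEAN_ADA = 50.1

# Assumed expert pool: 70% sit at high-ADA (centrist-to-liberal) groups.
EXPERT_POOL = [80.0] * 70 + [20.0] * 30

def simulate_outlet(n_stories=10000, share_balanced=0.5):
    citations = []
    for _ in range(n_stories):
        if random.random() < share_balanced:
            # Balanced story: one citation from each side.
            citations += [80.0, 20.0]
        else:
            # Expert story: one unopposed citation, chosen at random.
            citations.append(random.choice(EXPERT_POOL))
    return sum(citations) / len(citations)

# Prints roughly 54 -- left of the congressional mean, with no
# ideological intent anywhere in the model.
print(simulate_outlet(), "vs. congressional mean", CONGRESS_MEAN_ADA)
```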
(2) The Groseclose/Milyo methodology doesn't allow for differential rates of productivity in producing work of interest to the media or Congress between organizations. To the extent that a think tank is better at marketing itself to the press than Congress (or vice versa), it could skew the results. For instance, the Heritage Foundation is extremely close to conservative members of Congress and has an elaborate operation designed to put material into their hands. But the fact that these members end up citing Heritage more than the press does is not ipso facto proof that the media is liberal.
In fact, there are a number of stories one can tell about why a media/Congress discrepancy in think tank citations would not necessarily imply ideological bias on the part of the elite media, including the two offered above; if any of them is true, the argument as stated does not hold.
Here is Groseclose/Milyo's response to the above criticisms:
More problematic is a concern that congressional citations and media citations do not follow the same data generating process. For instance, suppose that a factor besides ideology affects the probability that a legislator or reporter will cite a think tank, and suppose that this factor affects reporters and legislators differently. Indeed, John Lott and Kevin Hassett have invoked a form of this claim to argue that our results are biased toward making the media appear more conservative than they really are. They note:
For example, Lott [2003, Chapter 2] shows that the New York Times' stories on gun regulations consistently interview academics who favor gun control, but uses gun dealers or the National Rifle Association to provide the other side ... In this case, this bias makes [Groseclose and Milyo's measure of] the New York Times look more conservative than is likely accurate. [2004, p. 8]
However, it is possible, and perhaps likely, that members of Congress practice the same tendency that Lott and Hassett have identified with reporters—that is, to cite academics when they make an anti-gun argument and to cite, say, the NRA when they make a pro-gun argument. If so, then our method will have no bias. On the other hand, if members of Congress do not practice the same tendency as journalists, then this can cause a bias to our method. But even here, it is not clear in which direction it will occur. For instance, it is possible that members of Congress have a greater (lesser) tendency than journalists to cite such academics. If so, then this will cause our method to make media outlets appear more liberal (conservative) than they really are.
In fact, the criticism we have heard most frequently is a form of this concern, but it is usually stated in a way that suggests the bias is in the opposite direction. Here is a typical variant: “It is possible that (i) journalists care about the ‘quality’ of a think tank more than legislators do (e.g. suppose journalists prefer to cite a think tank with a reputation for serious scholarship instead of another group that is known more for its activism); and (ii) the liberal think tanks in the sample tend to be of higher quality than the conservative think tanks.” If statements (i) and (ii) are true, then our method will indeed make media outlets appear more liberal than they really are. That is, the media will cite liberal think tanks more, not because they prefer to cite liberal think tanks, but because they prefer to cite high-quality think tanks. On the other hand, if one statement is true and the other is false, then our method will make media outlets appear more conservative than they really are. (E.g. suppose journalists care about quality more than legislators, but suppose that the conservative groups in our sample tend to be of higher quality than the liberal groups. Then the media will tend to cite the conservative groups disproportionately, but not because the media are conservative, rather because they have a taste for quality.) Finally, if neither statement is true, then our method will make media outlets appear more liberal than they really are. Note that there are four possibilities by which statements (i) and (ii) can be true or false. Two lead to a liberal bias and two lead to a conservative bias.
This criticism, in fact, is similar to an omitted-variable bias that can plague any regression. Like the regression case, however, if the omitted variable (e.g., the quality of the think tank) is not correlated with the independent variable of interest (e.g., the ideology of the think tank), then this will not cause a bias. In the Appendix we examine this criticism further by introducing three variables that measure the extent to which a think tank’s main goals are scholarly ones, as opposed to activist ones. That is, these variables are possible measures of the "quality" of a think tank. When we include these measures as controls in our likelihood function, our estimated ADA ratings do not change significantly. E.g., when we include the measures, the average score of the 20 news outlets that we examine shifts less than three points. Further, we cannot reject the hypothesis that the new estimates are identical to the estimates that we obtain when we do not include the controls.
Here is the portion of the appendix they refer to:
Columns 5-9 of Table A2 address the concern that our main analysis does not control for the “quality” of a think tank or policy group. To account for this possibility, we constructed three variables that indicate whether a think tank or policy group is more likely to produce quality scholarship. The first variable, closed membership, is coded as a 0 if the web site of the group asks visitors to join the group. For instance, more activist groups--such as the NAACP, NRA, and ACLU--have links on their web site that give instructions for a visitor to join the group; while the more scholarly groups—such as the Brookings Institution, the RAND Corporation, the Urban Institute, and the Hoover Institution—do not. Another variable, staff called fellows, is coded as 1 if any staff members on the group’s website are given one of the following titles: fellow (including research fellow or senior fellow), researcher, economist, or analyst.
Both variables seem to capture the conventional wisdom about which think tanks are known for quality scholarship. For instance, of the top-25 most-cited groups in Table I, the following had both closed membership and staff called fellows: Brookings, Center for Strategic and International Studies, Council on Foreign Relations, AEI, RAND, Carnegie Endowment for Intl. Peace, Cato, Institute for International Economics, Urban Institute, Family Research Council, and Center on Budget and Policy Priorities. Meanwhile, the following groups, which most would agree are more commonly known for activism than high-quality scholarship, had neither closed membership nor staff called fellows: ACLU, NAACP, Sierra Club, NRA, AARP, Common Cause, Christian Coalition, NOW, and Federation of American Scientists.
The third variable that we constructed is off K street. It is coded as a 1 if and only if the headquarters of the think tank or policy group is not located on Washington D.C.’s K Street, the famous street for lobbying firms.
The problem, however, is that conservative think tanks have consciously aped the tropes of the center-left establishment (such as fellows and closed memberships) while discarding their commitment to technocratic scholarship. Thus, the fact that the American Enterprise Institute and the Family Research Council are included in these categories means that the variables do not adequately address the criticism. Similarly, the Heritage Foundation, the prototypical faux-technocratic think tank (see here and here), has fellows as well. Counts of scholarly citations or staff with Ph.D.'s would have been far better metrics. (As for the K Street variable, it is simply bizarre -- many lobbying shops are a block or two away, as Media Matters points out.)
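To see why the quality measures matter so much, consider a toy version of the omitted-variable problem the authors themselves acknowledge. In the simulated data below (none of it the paper's), reporters have no ideological taste at all; but because quality is assumed to correlate with ideology, a regression of citations on ideology alone still recovers a large, spurious "ideology" effect.

```python
# Simulated illustration of the omitted-variable problem (none of this is the
# paper's data). By construction, reporters here have no ideological taste:
# their citation propensity depends only on "quality." But because quality is
# assumed to correlate with ideology, regressing citations on ideology alone
# recovers a large, spurious ideology effect.

import numpy as np

rng = np.random.default_rng(1)
n = 200

ideology = rng.uniform(0, 100, n)                # ADA-style score of each group
quality = 0.5 * ideology + rng.normal(0, 10, n)  # assumption: quality rises with score
press_citations = 2.0 * quality + rng.normal(0, 5, n)  # quality-driven only

# Naive OLS: citations on ideology alone.
X_naive = np.column_stack([np.ones(n), ideology])
beta_naive = np.linalg.lstsq(X_naive, press_citations, rcond=None)[0]

# OLS controlling for quality.
X_ctrl = np.column_stack([np.ones(n), ideology, quality])
beta_ctrl = np.linalg.lstsq(X_ctrl, press_citations, rcond=None)[0]

print(beta_naive[1])  # about 1.0: looks like a taste for high-ADA groups
print(beta_ctrl[1])   # about 0.0 once quality is controlled for
```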
Others have also objected to the study's methodology. Here are the strongest portions of two critiques that have appeared in the last several days:
Dow Jones & Co. in a letter to Romenesko responding to the study's classification of the Wall Street Journal news pages as liberal:
"[T]he reader of this report has to travel all the way Table III on page 57 to discover that the researchers' "study" of the content of The Wall Street Journal covers exactly FOUR MONTHS in 2002, while the period examined for CBS News covers more than 12 years, and National Public Radio’s content is examined for more than 11 years. This huge analytical flaw results in an assessment based on comparative citings during vastly differing time periods, when the relative newsworthiness of various institutions could vary widely. Thus, Time magazine is “studied” for about two years, while U.S. News and World Report is examined for eight years. Indeed, the periods of time covered for the Journal, the Washington Post and the Washington Times are so brief that as to suggest that they were simply thrown into the mix as an afterthought. Yet the researchers provide those findings the same weight as all the others, without bothering to explain that in any meaningful way to the study’s readers."
We leave to the reader the judgment on whether anyone could take seriously a coding scheme in which RAND is considered substantially more "liberal" than the ACLU. But this is not the only problem with Groseclose and Milyo's study; they lump together advocacy groups and think tanks that perform dramatically different functions. For instance, according to their data, the National Association for the Advancement of Colored People (NAACP) is the third most-quoted group on the list. But stories about race relations that include a quote from an NAACP representative are unlikely to be "balanced" with quotes from another group on their list. Their quotes will often be balanced by quotes from an individual, depending on the nature of the story; however, because there are no pro-racism groups of any legitimacy (or on Groseclose and Milyo's list), such stories will be coded as having a "liberal bias." On the other hand, a quote from an NRA spokesperson can and often will be balanced with one from another organization on Groseclose and Milyo's list, Handgun Control, Inc...
It is not hard to imagine perfectly balanced news stories that Groseclose and Milyo would score as biased in one direction or the other, given the study's methodology. For instance, an article that quoted a member of Congress taking one side of an issue, and then quoted a think tank scholar taking the other side, would be coded as "biased" in the direction of whichever side was represented by the think tank scholar. Since Groseclose and Milyo's measure of "bias" is restricted to citations of think tank and advocacy groups, this kind of miscategorization is inevitable.
Groseclose and Milyo's discussion of the idea of bias assumes that if a reporter quotes a source, then the opinion expressed by that source is an accurate measure of the reporter's beliefs -- an assumption that most, if not all, reporters across the ideological spectrum would find utterly ridiculous. A Pentagon reporter must often quote Defense Secretary Donald H. Rumsfeld; however, the reporter's inclusion of a Rumsfeld quotation does not indicate that Rumsfeld's opinion mirrors the personal opinion of the reporter.
...The authors also display a remarkable ignorance of previous work on the subject of media bias. In their section titled "Some Previous Studies of Media Bias," they name only three studies that address the issue at more than a theoretical level. All three studies are, to put it kindly, questionable...
Although the authors seem completely unaware of it, in reality there have been dozens of rigorous quantitative studies on media bias and hundreds of studies that address the issue in some way. One place the authors might have looked had they chosen to conduct an actual literature review would have been a 2000 meta-analysis published in the Journal of Communication (the flagship journal of the International Communication Association, the premier association of media scholars). The abstract of the study, titled "Media bias in presidential elections: a meta-analysis," reads as follows:
A meta-analysis considered 59 quantitative studies containing data concerned with partisan media bias in presidential election campaigns since 1948. Types of bias considered were gatekeeping bias, which is the preference for selecting stories from one party or the other; coverage bias, which considers the relative amounts of coverage each party receives; and statement bias, which focuses on the favorability of coverage toward one party or the other. On the whole, no significant biases were found for the newspaper industry. Biases in newsmagazines were virtually zero as well. However, meta-analysis of studies of television network news showed small, measurable, but probably insubstantial coverage and statement biases.
Standard scholarly practice dictates the assembly of a literature review as part of any published study, and meta-analyses, as they gather together the findings of multiple studies, are particularly critical to literature reviews. That Groseclose and Milyo overlooked not only the Journal of Communication meta-analysis, but also the 59 studies it surveyed, raises questions about the seriousness with which they conducted this study.
Indeed, they seem to be unaware that an academic discipline of media studies even exists. Their bibliography includes works by right-wing media critics such as Media Research Center founder and president L. Brent Bozell III and Accuracy in Media founder Reed Irvine (now deceased), as well as an article from the right-wing website WorldNetDaily. But Groseclose and Milyo failed to cite a single entry from any of the dozens of respected scholarly journals of communication and media studies in which media bias is a relatively frequent topic of inquiry -- nothing from Journal of Communication, Communication Research, Journalism and Mass Communication Quarterly, Journal of Broadcasting & Electronic Media, Political Communication, or any other media studies journal.
With the recent disclosure of the significant delay in reporting the administration's "legally dubious wiretap scheme," on top of continued warnings about the quality of the news fed to the American public with a wry smile (see the links below), the bias would seem irrelevant in the larger view.

http://www.commondreams.org/views05/0403-25.htm
http://www.democracynow.org/baltimoresun.shtml
In September 2004, Christiane Amanpour appeared on "Real Time with Bill Maher" and pointed out that journalists in the field in Iraq were getting their stories "bashed" -- and that this was common. The inference followed a statement that foreign reports in mainstream U.S. media were not complete and were reduced to insignificance [edited] before getting on air. She pointed out that journalists had been reporting for over a year that the insurgency in Iraq was a serious indigenous uprising and that the U.S. authorities had been discounting this in the face of overwhelming evidence to the contrary. Ms. Amanpour received a bit of a scolding from her overlords shortly thereafter. Was this scolding a left scolding or a right scolding?
A review of the Bill Moyers debacle and its follow-up might also be of interest.
As regards bias -- all news can be picked apart for bits of bias depending upon myriad factors. The more important concern is the limitations, due in part to bias, of the news that the average good citizen receives, and the fact that the good citizen does not make the effort to scan various alternative sources [including foreign ones] on any issue about which he/she might feel strongly in order to become better informed.
To discuss bias in mainstream U.S. media amid the seeming marginal censorship of that media is like discussing the harmful effects of a cold with a terminal cancer patient.
Posted by: JM | December 23, 2005 at 10:25 AM
In a May 2005 survey, the Pew Research Center finds that 39% of the American public identifies itself as Conservative, 37% as Independent, and 19% as Liberal. These numbers come from the detailed demographic tables at the end of the write-up. The link is below.
http://people-press.org/reports/display.php3?PageID=944
If these numbers are correct, or even close to being correct, then there are roughly twice as many conservatives as liberals in the country and, presumably, in the Congress.
Another way of saying this is that roughly 80% of the population is not distinctly liberal.
In Table III of their study, Groseclose and Milyo say that their ADA scale is calibrated so that "50.06 is our estimate of the average American voter." Since elections split the population roughly down the middle, a score of 50.06 effectively marks the median of the American voting population: 50% of voters fall to its left and 50% to its right. Given that only around 20% of the population is liberal, there will be a number of conservatives and conservative-leaning people on the "left" side of that divide.
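A crude way to check this arithmetic (a sketch assuming the Pew shares describe the voting population, and that voters line up left to right as liberal, independent, conservative):

```python
# Rough check of the arithmetic, assuming the Pew shares apply to voters and
# that voters line up left to right as liberal, independent, conservative.
shares = [("liberal", 0.19), ("independent", 0.37),
          ("conservative", 0.39), ("other/unsure", 0.05)]

cumulative = 0.0
for label, share in shares:
    cumulative += share
    if cumulative >= 0.5:
        print("the median voter falls among:", label)  # -> independent
        break
```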
The Groseclose and Milyo study conflates this left-of-median group with "liberal." It is clearly not all liberal.
It is no wonder that they found the press has a liberal bias, since they count many people who are either conservative, or conservative-leaning, as liberals.
To drive the point home another way, the Pew survey records that many self-identified Democrats call themselves conservative. So many Democrats, in their own words, do not consider themselves to be liberal.
It seems to me that unless Groseclose and Milyo recalibrate their results to accommodate this lopsided conservative demographic in the population, they have redefined "liberal" to include elements of the population that are centrist and conservative. No wonder everything looks liberal to them.
homanid
Posted by: homanid | January 03, 2006 at 03:03 AM
Do you actually watch the news? Geez, it's not even a question anymore.
Posted by: Bill | October 03, 2008 at 08:32 AM