[W]hy do I doubt the conclusions reported in today's Op Ed piece? The problems I see have less to do with brain imaging per se than with the human tendency to make up "just so" stories and then believe them. The scattered spots of activation in a brain image can be like tea leaves in the bottom of a cup – ambiguous and accommodating of a large number of possible interpretations. The Edwards insula activation might indicate disgust, but it might also indicate thoughts of pain or other bodily sensations or a sense of unfairness, to mention just a few of the mental states associated with insula activation. And of course the possibility remains that the insula activation engendered by Edwards represents some other feeling altogether, one yet to be associated with the insula. The Romney amygdala activation might indicate anxiety, or any of a number of other feelings that are associated with the amygdala – anger, happiness, even sexual excitement.
Some of the interpretations offered in the Op Ed piece concern the brain states of subsets of the subjects, for example just the men or just the most negative voters. Some concern the brain states of the subjects early on in the scan compared with later in the scan. Some concern responses to still photos or to videos specifically. With this many ways of splitting and regrouping the data, it is hard not to come upon some interpretable patterns. Swish those tea leaves around often enough and you will get some nice recognizable pictures of ocean liners and tall handsome strangers appearing in your cup!
How can we tell whether the interpretations offered by Iacoboni and colleagues are adequately constrained by the data, or are primarily just-so stories? By testing their methods using images for which we know the "right answer." If the UCLA group would select a set of personages toward whom we can all agree in advance on the likely attitudes of a given group of subjects, they could carry out imaging studies like the ones they reported today and then, blind to the identity of personage and subject for each set of scans, interpret the patterns of activation.
I would love to know the outcome of this experiment. I don't think it is impossible that Iacoboni and colleagues have extracted some useful information about voter attitudes from their imaging studies. This probably puts me at the optimistic end of the spectrum of cognitive neuroscientists reading this work. However, until we see some kind of validation studies, I will remain skeptical.
In closing, there is a larger issue here, beyond the validity of a specific study of voter psychology. A number of different commercial ventures, from neuromarketing to brain-based lie detection, are banking on the scientific aura of brain imaging to bring them customers, in addition to whatever real information the imaging conveys. The fact that the UCLA study involved brain imaging will garner it more attention, and possibly more credibility among the general public, than if it had used only behavioral measures like questionnaires or people's facial expressions as they watched the candidates. Because brain imaging is a more high-tech approach, it also seems more "scientific" and perhaps even more "objective." Of course, these last two terms do not necessarily apply. Depending on the way the output of UCLA's multimillion-dollar 3-Tesla scanner is interpreted, the result may be objective and scientific, or of no more value than tea leaves.