
We Tried to Publish a Replication of a Science Paper in Science. The Journal Refused.

Our research suggests that the theory that conservatives and liberals respond differently to threats isn’t actually true.

Issues of Science magazine. Photo illustration by Slate. Images by Science magazine.

Science is supposed to be self-correcting. Ugly facts kill beautiful theories, to paraphrase the 19th-century biologist Thomas Huxley. But, as we learned recently, policies at the top scientific journals don’t make this easy.

Our story starts in 2008, when a group of researchers published an article (here it is without a paywall) that found political conservatives have stronger physiological reactions to threatening images than liberals do. The article was published in Science, which is one of the most prestigious general science journals around. It’s the kind of journal that can make a career in academia.

It was a path-breaking and provocative study. For decades, political scientists and psychologists have tried to understand the psychological roots of ideological differences. The piece published in Science offered some clues as to why liberals and conservatives differ in their worldviews. Perhaps it has to do with how the brain is wired, the researchers suggested—specifically, perhaps it’s because conservatives’ brains are more attuned to threats than liberals’. It was an exciting finding, it helped usher in a new wave of psychophysiological work in the study of politics, and it generated extensive coverage in popular media. In 2018, 10 years after the publication of the study, the findings were featured on an episode of NPR’s Hidden Brain podcast.

Fast-forward to 2014. All four of us were studying the physiological basis of political attitudes, two of us in Amsterdam, the Netherlands (Bakker and Schumacher at the University of Amsterdam), and two of us in Philadelphia (Arceneaux and Gothreau at Temple University). We had raised funds to create labs with expensive equipment for measuring physiological reactions, because we were excited by the possibilities that the 2008 research opened up for us.

The researchers behind the Science article had shown a series of images to 46 participants in Nebraska and used equipment to record how much the participants’ palms sweated in response. The images included scary stuff, like a spider on a person’s face. We conducted two “conceptual” replications (one in the Netherlands and one in the U.S.) that used different images to get at the same idea of a “threat”: for example, a gun pointing at the screen. Our intention in these first studies was to try the same basic approach, in part to calibrate our new equipment. But both teams independently failed to find that people’s physiological reactions to these images correlated with their political attitudes.

Our first thought was that we were doing something wrong. So, we asked the original researchers for their images, which they generously provided to us, and we added a few more. We took the step of “pre-registering” a more direct replication of the Science study, meaning that we detailed exactly what we were going to do before we did it and made that plan public. The direct replication took place in Philadelphia, where we recruited 202 participants (more than four times the original sample size of 46 used in the Science study). Again, we found no correlation between physiological reactions to threatening images (the original ones or the ones we added) and political conservatism, no matter how we looked at the data.

By this point, we had become more skeptical of the rationale animating the original study. Neuroscientists can often find a loose match between physiological responses and self-reported attitudes, but the question is whether this relationship is really as meaningful as we sometimes think it is. The brain is a complex organ, with parallel conscious and unconscious systems that don’t always map onto each other one-to-one. We still believe that there is value in exploring how physiological reactions and conscious experience shape political attitudes and behavior, but after further consideration, we have concluded that any such relationships are more complicated than we (and the researchers on the Science paper) presumed.

We drafted a paper that reported the failed replication studies along with a more nuanced discussion about the ways in which physiology might matter for politics and sent it to Science. We did not expect Science to immediately publish the paper, but because our findings cast doubt on an influential study published in its pages, we thought the editorial team would at least send it out for peer review.

It did not. About a week later, we received a summary rejection explaining that Science’s advisory board of academics and its editorial team felt the field had moved on since the publication of the original article and that, although they acknowledged we had offered a conclusive replication of the original study, our paper would be better suited for a less visible subfield journal.

We wrote back asking them to consider at least sending our work out for review. (They could still reject it if the reviewers found fatal flaws in our replications.) We argued that the original article continues to be highly influential and is often featured in popular science pieces in the lay media (for instance, here, here, here, and here), where the research is translated into the claim that physiology allows one to predict whether someone is a liberal or a conservative with a high degree of accuracy. We believe that Science has a responsibility to set the record straight, in the same way a newspaper does when it publishes something that doesn’t stand up to scrutiny. We were rebuffed without a reason and with only a vague suggestion that the journal’s policy on handling replications might change at some point in the future.

We believe that it is bad policy for journals like Science to publish big, bold ideas and then leave it to subfield journals to publish replications showing that those ideas aren’t so accurate after all. Subfield journals are less visible, meaning the message often fails to reach the broader public. They are also less authoritative, meaning the failed replication will have less of an impact on the field if it is not published by Science.

Open and transparent science can only happen when journals are willing to publish results that contradict previous findings. We must resist the human tendency to see a failed replication as an indication that the original research team did something wrong or bad. We should continue to have frank discussions about what we’ve learned over the course of the replication crisis and what we could be doing about it (a conversation that is currently happening on Twitter).

Science requires us to have the courage to let our beautiful theories die public deaths at the hands of ugly facts. Indeed, our study also failed to replicate part of an earlier paper published by one of us (Arceneaux) and colleagues, which found that physiological reactions to disgusting images correlated with immigration attitudes. Our takeaway is not that the original study’s researchers did anything wrong. To the contrary, members of the original author team (Kevin Smith, John Hibbing, John Alford, and Matthew Hibbing) were very supportive of the entire process, a reflection of the understanding that science requires us to go where the facts lead us. If only journals like Science were willing to lead the way.

Correction, June 25, 2019: The image for this piece originally contained a book cover mislabeled as a cover of Science magazine. The illustration has since been updated.