In the WSJ, Gautam Naik reports on a troubling surge in scientific retractions:
What's driving this dramatic spike? I don't think anyone really knows. The least likely explanation is that scientists have suddenly become less scrupulous and honest. Instead, I think the trend is almost certainly being driven by a number of unrelated factors, including a newfound willingness by journals to issue retractions, increased scrutiny from the blogosphere, and the ever-escalating complexity of scientific research, which makes innocent mistakes more likely. (According to one analysis, 73.5 percent of retractions were due to error, not fraud.)
One additional possibility that hasn't been mentioned elsewhere: I wonder if the newfound reliance on electronic tools for data analysis has blurred the line between innocuous "tweaking" and outright manipulation. Consider the investigation of Mike Rossner, executive director of the Rockefeller University Press. In 2002, while formatting a scientific image in Photoshop for publication in one of the Press's journals, Rossner noticed that the background of the image contained different intensities of pixels. This led Rossner and his colleagues to begin analyzing every image in every accepted paper. They soon discovered that approximately 25 percent of all papers contained at least one "inappropriately manipulated" picture. Interestingly, the vast majority of these manipulations (~99 percent) didn't affect the interpretation of the results. Instead, the scientists seemed to be photoshopping the pictures for aesthetic reasons: perhaps a line on a gel was erased, or a background blur was deleted, or the contrast was exaggerated. In other words, they wanted to publish pretty images. That's a perfectly understandable desire, but it becomes problematic when that same basic instinct (we want our data to be neat, our pictures to be clean, our charts to be clear) is used to correct flawed experiments. Photoshop can be a slippery slope.
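Rossner's tell, the mismatched background pixel intensities, could even be turned into a crude automated screen. To be clear, this is only an illustrative sketch and not the Press's actual workflow: the corner-sampling heuristic and the threshold below are my own invented assumptions.

```python
def corner_patches(img, size=2):
    """Sample the four corner patches of a 2-D image (a list of rows);
    in a gel photo these are usually empty background."""
    return [
        [row[:size] for row in img[:size]],
        [row[-size:] for row in img[:size]],
        [row[:size] for row in img[-size:]],
        [row[-size:] for row in img[-size:]],
    ]

def patch_mean(patch):
    values = [v for row in patch for v in row]
    return sum(values) / len(values)

def looks_manipulated(img, tol=5.0):
    """Flag an image whose background corners differ in mean intensity
    by more than `tol` gray levels, a crude proxy for splicing or erasing."""
    means = [patch_mean(p) for p in corner_patches(img)]
    return max(means) - min(means) > tol

clean = [[50.0] * 10 for _ in range(10)]   # uniform background
doctored = [row[:] for row in clean]
for r in (-2, -1):                         # brighten one corner, as if
    for c in (-2, -1):                     # a background blur were "cleaned up"
        doctored[r][c] = 80.0
```

A real screen would have to model noise, compression artifacts, and legitimate lighting gradients; the point is only that the aesthetic fixes Rossner found leave statistical fingerprints.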
That said, the problems caused by scientific fraud are far more limited in scope than those caused by the unconscious biases that can affect even the most honest researchers. (I described many of these biases in a recent *New Yorker* article on the so-called "decline effect.") While that piece focused on biases that influence scientific replication, there's a new paper in *Ecosphere* by researchers at the University of Washington that investigates the prejudices and assumptions that come into play after a paper is soundly refuted. If we were good Bayesians, of course, the rebuttal would lead us to discount the original publication, to question the veracity of the data. At the very least, we should cite the rebuttal as often as we cite the initial results.
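What would the good-Bayesian update actually look like? A toy calculation, with every probability invented purely for illustration, shows how sharply a published rebuttal should deflate our confidence:

```python
def posterior(prior, p_rebuttal_if_true, p_rebuttal_if_false):
    """Bayes' rule: updated belief that a finding is real,
    given that a rebuttal of it has been published."""
    num = prior * p_rebuttal_if_true
    return num / (num + (1 - prior) * p_rebuttal_if_false)

# Invented numbers: start 80% confident in the original paper, and
# suppose rebuttals are six times likelier against false findings.
belief = posterior(0.8, 0.1, 0.6)                   # drops to 0.4
belief_after_second = posterior(belief, 0.1, 0.6)   # a second rebuttal: 0.1
```

Multiple falsifications should compound: after two or three independent refutations, a Bayesian reader would treat the result as almost certainly wrong.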
To measure the impact of rebuttals, the researchers tracked the citation history of seven high-profile papers on fishery science originally published in *Nature* and *Science*. All of these papers were later subjected to multiple falsifications, to the point that most objective observers would conclude the proposed theories had been soundly refuted. How did these refutations affect the subsequent citation history? The results are bleak, as scientists stubbornly clung to the original claims:
In case you weren't depressed by this data, the University of Washington scientists make it clear that you should be:
Needless to say, such a limited survey of papers comes with plenty of caveats. Does this bias towards the initial publication extend to all scientific fields? Alas, there's some disconcerting evidence that medicine suffers from a similar flaw. A recent *JAMA* review concluded:
I'm also curious whether papers not published in prestigious journals are easier to refute. My hunch is that research appearing in *Nature* and *Science* seems more impressive, which makes subsequent falsifications (most of which probably appeared in smaller journals) less persuasive.
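The bookkeeping behind this kind of citation analysis is simple to sketch. Assuming, hypothetically, that for each paper citing a refuted study we know its publication year and whether it also cites the rebuttal, the bias shows up as the share of post-rebuttal citations that ignore the refutation:

```python
def ignored_fraction(citations, rebuttal_year):
    """citations: list of (year, cites_rebuttal) pairs for papers that
    cite the original study. Returns the share of citations published
    after the rebuttal that never acknowledge it."""
    later = [cites for (year, cites) in citations if year > rebuttal_year]
    if not later:
        return 0.0
    return sum(1 for cites in later if not cites) / len(later)

# Entirely hypothetical record for one refuted paper (rebutted in 2002):
record = [(2000, False), (2003, False), (2004, True),
          (2005, False), (2006, False)]
share = ignored_fraction(record, 2002)  # 3 of 4 later citations ignore it
```

The University of Washington data is, in effect, this ratio computed for seven real papers; the invented record above just makes the arithmetic concrete.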
The larger lesson, I guess, is that there's nothing inherently wrong with retraction and refutation, which is why we shouldn't obsess over a sudden increase. Science is a human process and reality is damn complicated; we are bound to make mistakes. There's also no reason to believe that scientists are somehow less likely to commit fraud than other ambitious professionals. The more relevant question is what happens after the error. Can science correct itself? Does a picture of reality gradually emerge from the scatterplot of mismeasurement? This returns us to the institutions of science, for they are what distinguish the scientific process from every other pursuit of the truth. As Richard Rorty once observed:
While Rorty is right that the institutions of science provide us with an essential cultural model (he seemed a little jealous of the "free and open encounters" of scientists), these institutions still have plenty of flaws that need to be fixed. Perhaps we should, as the authors of this recent *Ecosphere* paper suggest, automatically link refutations to the original citation, at least online. (I imagine this would be fairly easy to do in PubMed or Google Scholar.) Or perhaps we need to be clearer about stating hypotheses in advance of the actual experiment, as recently suggested by Ben Goldacre in the context of volumetric structural MRI studies. (Jonathan Schooler has proposed a similar setup for all academic research.) In this day and age, the progress of science is too essential for us to be apathetic about its imperfect process. We need to experiment with how experiments are done.
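The automatic linkage the *Ecosphere* authors propose could be almost mechanical. Here is a minimal sketch of the idea, with an invented in-memory index standing in for a real bibliographic database (I'm not modeling PubMed's or Google Scholar's actual APIs):

```python
# Hypothetical index: DOI of an original paper -> DOIs of its rebuttals.
REBUTTALS = {
    "10.1000/refuted-study": ["10.1000/rebuttal-a", "10.1000/rebuttal-b"],
}

def lookup(doi):
    """Return a citation record that carries its refutations with it,
    so anyone retrieving the original is shown the rebuttals too."""
    return {"doi": doi, "refuted_by": REBUTTALS.get(doi, [])}
```

Under this scheme a search result could not display a refuted study without its rebuttals attached, which is precisely the nudge a lazy Bayesian needs.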