I recently wrote a piece reacting to Mina Bissell’s article on the risks of the replication drive. Now that Andrew Gelman, the American statistician and political scientist, has blogged about the topic, the discussion continues. According to Gelman, “the push for replication is so strong that now there’s a backlash against it”.
In her original article, “Reproducibility: The risks of the replication drive” (Nature 503, 333–334, 21 November 2013), Mina Bissell from the Lawrence Berkeley National Laboratory raises questions such as: Does the “gold standard” of reproducibility hinder promising research? Do we unfairly damage the reputations of scientists by declaring a finding irreproducible? How should we react when we cannot reproduce the work? I responded to her points (I do not share her views) in an earlier post, arguing that it does not make sense to view the issue only from the perspective of the authors whose work is replicated (and whose reputation is potentially damaged unfairly).
Thinking of replication in a defensive way
With his post “Replication backlash” on 17 December 2013, statistician and political scientist Andrew Gelman, Columbia University, raises further crucial points that fuel the reproducibility discussion. Essentially, he argues that where Bissell went wrong is in thinking of replication “in a defensive way”. Why should publication in a top journal stop others from replicating the work? This would mean that “a few referee reports is enough to put something into a default-believe-it mode, but a failed replication doesn’t count for anything.” Rather, Gelman argues that we should think of “replicability as a way of moving science forward faster.”
More specifically, Gelman appreciates Bissell’s work, but he points out that there are many labs “without 100% successful replication rates”. The reasons could be fraud, people cutting corners, selection bias and measurement error, or simply that “published results just happened to stand out in an initial dataset” and “because certain effects are variable and appear in some settings and not in others.” Gelman concludes:
“In any case, replications do fail, even with time and careful consideration of experimental conditions. In that sense, Bissell indeed has to pay for the sins of others, but I think that’s inevitable: in any system that is less than 100% perfect, some effort ends up being spent on checking things that, retrospectively, turned out to be ok.”
Transparency of data and methods
Similar to what I wrote earlier, Gelman also criticizes Bissell’s demand that replicators approach the original authors thoughtfully. Gelman emphasizes that “a simpler approach would be for the authors of the article to describe clearly (with videos, for example, if that is necessary to demonstrate details of lab procedure) in the public record.” Gelman recommends:
“To me, the solution is (…) to expand the idea of “publication” to go beyond the current standard telegraphic description of methods and results, and beyond the current standard supplementary material (which is not typically a set of information allowing you to replicate the study; rather, it’s extra analyses needed to placate the journal referees), to include a full description of methods and data, including videos and as much raw data as is possible (with some scrambling if human subjects is an issue). No limits—whatever it takes! This isn’t about replication or about pesky reporting requirements, it’s about science. If you publish a result, you should want others to be able to use it.”
The push for replication is so strong that it causes a backlash
The best bit in Gelman’s excellent piece gives those pushing for reproducibility hope:
“I pretty much disagree with Bissell’s article, and really the best thing I can say about it is that I think it’s a good sign that the push for replication is so strong that now there’s a backlash against it.”
Read Gelman’s blog post here, and make sure you don’t miss the many excellent comments under his text.