This is a case from psycholinguistics, but it’s interesting for the political science replication debate as well. The journal Applied Psycholinguistics rejected a re-analysis of a paper after a 20-month review process. The original paper was published in the same journal.
Jan Vanhove is a PhD student at the Department of Multilingualism and Foreign Language Didactics at the University of Fribourg in Switzerland. In his paper “Second-language acquisition research on critical period effects: caveats and statistical fallacies” [pdf], he used data from a paper by Robert DeKeyser et al., “Cross-linguistic evidence for the nature of age effects in second language acquisition” (abstract). As Vanhove writes on his webpage, he did not just re-analyze the data in his replication paper but also pointed to general statistical fallacies in the field:
“The statistical fallacies found in this study turned out to be fairly common in the field of second language acquisition (the topic of the original study) and continue to contribute to the perpetuation of the critical period hypothesis in second language acquisition. In the commentary (now a full-fledged methodological article), I discuss how these faulty analyses produce artefacts which are then misinterpreted as evidence supporting the critical period hypothesis.”
In an open letter to the editors who rejected the paper, Vanhove asks them to clarify why the review process took so long, and he retells the whole (slightly more complicated) story.
In the letter he writes that the editors decided not to accept the paper, even after a revise-and-resubmit, because the reviews were not unanimous. According to Vanhove, the reviewers said:
- Reviewer 1: “The article is potentially transformative”, “I strongly recommend publication”
- Reviewer 2: “I do not feel this manuscript makes a serious contribution”, “nitpicking”
- Reviewer 3: “I find this paper to be excellent”
Originality of the Paper
The problem seemed to be that it was unclear how to categorize the paper. To publish a replication, journals expect added value and new insights beyond the replication itself.
As Vanhove writes on his webpage, this was not really his aim. He did not actually want to write a replication paper. What he did instead was review the literature in the field, uncover common statistical fallacies, and re-analyze DeKeyser et al.’s data as an example. Vanhove says that his paper is not a typical replication study “because there wasn’t any relevant finding to replicate to begin with” in the original paper. So his paper was more a methodological commentary with an example (the journal does publish commentaries, see here), but it (accidentally?) went through the peer-review process for regular papers.
This means that anyone writing a paper that includes a reanalysis has to be very clear from the beginning about what exactly they are offering, and should let the journal know upon submission (and hope that the editors understand).
The paper is now published independently
Similar to Thomas Herndon et al., who recently published their replication of Reinhart and Rogoff on their university’s webpage (which set off the largest recent replication scandal in economics), Vanhove decided to publish his paper as a working paper with data and R code. He wrote to the editors:
In sum, I’m not so much disappointed by your decision to reject (five months into the second review stage, you sort of feel it coming) as by the fact that it took 20 months before you decided not to publish a manuscript that is critical of a paper that previously appeared in your journal because the reviews weren’t unanimously positive. Had you communicated your strategy 20 months ago, I would’ve gone straight to another outlet since the manuscript was bound to rub some people the wrong way.