Many original authors worry about their reputation when their work is replicated and the replication fails. But when can we actually label a replication as “failed”? And how should we deal with unhappy original authors who feel ‘bullied’?
A replication of a published article can fail at different stages.
If the results cannot be reproduced from the exact same data and methods, then the results cannot be duplicated and there is little reason to trust the work. It might be that the author simply made a mistake in an Excel spreadsheet, or mislabelled a variable. Ideally, that would not happen.
At the second stage of a replication project, when you collect additional independent data, use new or improved methods, etc., you have to describe exactly in which way the replication failed. Different measurements of concepts that are hard to operationalize, e.g. human rights, can naturally yield different results.
Therefore, coming to different results when you use new data does not necessarily indicate that the original author’s work was faulty.
Before speaking of a failed replication, be very clear in your manuscript about what the problem was:
- Could you not identify which variable is which in the original data set?
- Was a variable missing from the original data set?
- Was a transformation of variables in the original data set unclear?
- Were there errors in the original data set (as often happens in Excel sheets)?
- Did you get different coefficients and standard errors?
- Do figures look different?
- Does a small change in outlier treatment, years or countries included suddenly change the results?
- Do you suspect p-value fishing?
- Did you measure the variables differently and come to different results?
- Did you update the data (e.g. for the recent years) and the results changed?
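For the most basic of these checks, i.e. whether the published coefficients can be reproduced at all, it helps to compare estimates systematically rather than by eye. Below is a minimal sketch of such a comparison; the function, variable names, and coefficient values are all hypothetical, purely for illustration:

```python
# Sketch: flag variables whose re-estimated coefficient diverges from the
# published one beyond a tolerance. All numbers here are made up.

def compare_estimates(original, replicated, tol=0.01):
    """Return variables whose replicated coefficient differs from the
    published one by more than `tol`, or is missing entirely."""
    diverging = {}
    for var, orig_coef in original.items():
        rep_coef = replicated.get(var)
        if rep_coef is None:
            diverging[var] = "missing in replication"
        elif abs(orig_coef - rep_coef) > tol:
            diverging[var] = (orig_coef, rep_coef)
    return diverging

# Hypothetical published vs. re-estimated coefficients
published = {"gdp_pc": 0.42, "polity": -0.13, "trade": 0.05}
reestimate = {"gdp_pc": 0.42, "polity": -0.31, "trade": 0.05}

print(compare_estimates(published, reestimate))
# → {'polity': (-0.13, -0.31)}
```

A table of such discrepancies, reported alongside the replication, documents exactly where and by how much results diverge, which is far more persuasive than a blanket claim that the replication “failed.”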
It is important to make sure that the replicator fully understands the methods and variables of the original author before the analysis is ‘improved.’
In fact, when the replication of an article is reported as ‘failed’, original authors often claim that the replication was flawed itself.
How the original author might react
In a recent example, an original author whose study did not replicate said:
There now is a recognized culture of “replication bullying”: Say anything critical about replication efforts and your work will be publicly defamed in emails, blogs and on social media, and people will demand your research materials and data, as if they are on a mission to show that you must be hiding something.
Another original author assessed a replication of his work stating that it was “less realistic”, “inconsistent with the substantive literature”, and “of limited utility.”
Other original authors stated that a replication study of their work presented a “fundamentally flawed analysis”, and, in a third case, an original author said that the replication study contained “statistical, computational, and reporting errors that invalidate its conclusions.”
For replications conducted in the classroom (but also in general), this means that replicators need to be very careful and provide clearly documented evidence before calling a replication ‘failed’.
Do you have to contact the original author?
As I wrote elsewhere, a replicator should not be required to contact the original author before publishing a replication study (even one that claims a failure to replicate). The original paper should include all information and data sets, so that no one has to ask the authors to clarify what they did.
However, it can make sense to contact the original author, and if they are open to a dialogue, it can be good for everyone involved. In addition, replicators should keep the discussion professional and write about the topic (not the original author), so that the original author does not feel personally attacked.
By being even more diligent and transparent about a conducted replication, replicators can preempt claims that they simply lacked the right skills to follow the paper’s procedures (Ishiyama 2014 makes this point), or that they are on some kind of witch hunt.
A replicator should always provide all data, code, and clear codebook files.