Guest Post: Publishing a replication? Definitely worth repeating, by Chris Hartgerink

Chris Hartgerink is a research master's student at Tilburg University, the Netherlands. I invited him to write a guest post about his experience of publishing a replication project. It turns out that, among many other aspects of doing a replication study, one of the main take-away points is that the replication itself must be reproducible as well. He explains here why it is important to always have a second assessor check all the analysis code before submitting to a journal.

Here, Chris writes about his experience with publishing a replication study:

I was recently invited to become a co-author on a research project, which led to my first peer-reviewed publication (Fayant, Muller, Hartgerink, & Lantian, 2014). I worked with a progressive research team that was willing to replicate a study, share their data, and open up their project to outsiders.

I was excited to join the project and to help where I could. The results indicated that the original null results were replicated, and a meta-analysis of these effects (which is where I came in; I was asked to join as the meta-analysis expert) gave a more powerful, pooled estimate that pointed to the same conclusion.
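For readers unfamiliar with how such pooling works, below is a minimal sketch of a fixed-effect meta-analysis in R using the metafor package. The data frame, effect sizes, and sampling variances are hypothetical and are not the values from our paper.

# A minimal sketch (hypothetical numbers, not our data): pooling an
# original study and its replications with the 'metafor' package.
# install.packages("metafor")
library(metafor)

dat <- data.frame(
  study = c("Original", "Replication 1", "Replication 2"),
  yi    = c(0.05, -0.02, 0.03),   # standardized effect sizes
  vi    = c(0.040, 0.020, 0.025)  # their sampling variances
)

# Fixed-effect model: one pooled estimate with a narrower confidence
# interval than any single study, i.e. a more powerful test.
res <- rma(yi = yi, vi = vi, data = dat, method = "FE")
summary(res)
forest(res)  # forest plot of the individual and pooled estimates

A random-effects model (the default in rma) would be the choice if the studies are not assumed to estimate exactly the same underlying effect.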

The manuscript was written up and submitted to Social Psychology for consideration in its replications section; after [minor] revisions, the paper was accepted.

However, that was when I realized that, being part of a team, I had only partly written the analysis code myself and had not reviewed the final, full version of the code.

“After the paper was accepted, I remembered that I had not replicated the study – and I was not sure it was reproducible.”

I was very aware of the possibility that our replication study might not be reproducible itself. I had just taken a university course on experimental research methods, which covered research conduct and questionable research practices. It became very clear to me that replication and reproduction are cornerstones of science, and that I actively wanted to uphold these cornerstones in my own research.

When I tried to reproduce all the code…

Thinking “better late than never,” I started rewriting the analyses myself in the open-source statistical environment R. I wanted to check thoroughly whether I could reproduce all the numbers independently. Note that I wanted to check the code out of good practice, not because I suspected errors. I ran into some discrepancies, most of which were rounding errors. Others were data-handling steps I did not understand, or nuances I had missed in the functions written by my co-author. These were minor things, however. The primary reason for the occasional lack of clarity was that the code was not readily readable for researchers other than the primary researcher (my co-author).
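To give an idea of what such an independent check can look like, here is a minimal sketch in R. The file name, variable names, and reported value are hypothetical; the idea is simply to recompute a statistic from the raw data and compare it with the published number, tolerating only rounding.

# A minimal sketch of a reproducibility check (hypothetical file,
# variable names, and reported value; not our actual analysis script).
dat <- read.csv("replication_data.csv")

# Recompute a reported statistic from the raw data, e.g. the mean
# difference between the experimental and control conditions.
recomputed <- mean(dat$dv[dat$condition == "experimental"]) -
  mean(dat$dv[dat$condition == "control"])

reported <- 0.03  # value as printed in the manuscript (hypothetical)

# Flag anything that differs by more than rounding of the reported value.
if (abs(recomputed - reported) > 0.005) {
  warning("Recomputed value does not match the reported value.")
}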

This kind of unclarity often arises because certain chains of thought are clear to the researcher conducting the analyses, but not to others. Our team resolved these issues readily, and the paper is now fully (1:1) reproducible from the source code and data I provide on my website (here is a .zip file).

The editor of Social Psychology was very accommodating and did not change his publication decision, as the conclusions of the paper stayed the same.

Always have a second assessor checking the code

This showed me that it is important to always have a second assessor check all the analysis code, as humans do make errors. Do not feel threatened or ashamed by errors; cross-checks within research teams (and beyond) are part of the process. As a colleague of mine says: errors are only a bad thing if you do not catch them in time. We caught them in time.

About Chris Hartgerink

Chris Hartgerink is a research master's student at Tilburg University, the Netherlands. He is currently investigating false negatives in psychological science, and is starting his PhD next academic year. He tweets as @chartgerink and is interested in Open Science, scientific (mis)conduct, parametric vs. non-parametric statistics, and the subjective elements of science (e.g., evaluation of test results, method selection, publication bias).
