What’s going to happen in the reproducibility world in 2018? What are the new trends to push transparency, especially in the social sciences? And most importantly, what’s the next big challenge to be tackled? Here are my top predictions, collected with the help of reproducibility folks on Twitter.
Many recent developments will continue in 2018. For example, the Transparency and Openness Promotion (TOP) Guidelines now have almost 5000 signatories (journals, funders, publishers) who commit to developing clear requirements for transparency. More and more journals will follow, which means that more researchers will (need to) share their data when they publish their work. But what else could happen in the reproducibility world in the social sciences in 2018?
Ethics and practicability – transparent qualitative research
One of the major challenges in 2018 will be to foster transparent qualitative research.
The problem is that there is still a strong misconception among qualitative researchers that their data cannot be shared, and that transparency is impractical if not impossible. However, there are many ways to work reproducibly when you conduct interviews, collect field notes or observe participants. You can even share sensitive data if they are disconnected from personal information.
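To make the “disconnected from personal information” idea concrete, here is a minimal sketch of pseudonymizing an interview excerpt before deposit. The `pseudonymize` helper, the salt, and the names are all illustrative assumptions, not an established tool; real projects should follow their repository’s anonymisation guidance.

```python
import hashlib
import re

def pseudonymize(text, names, salt="project-salt"):
    """Replace each real name in `text` with a stable pseudonym.

    Salting the hash means the shared file alone cannot be reversed
    into names; the returned `mapping` is the re-identification key
    and must be stored separately, under access control.
    """
    mapping = {}
    for name in names:
        digest = hashlib.sha256((salt + name).encode()).hexdigest()[:8]
        mapping[name] = "P_" + digest
        # Replace every occurrence of the name with its pseudonym
        text = re.sub(re.escape(name), mapping[name], text)
    return text, mapping

# Hypothetical interview excerpt prepared for a data deposit
excerpt = "Maria said she moved to Leipzig after Tom lost his job."
shared, key = pseudonymize(excerpt, ["Maria", "Tom"])
print(shared)  # names replaced by P_... codes
```

The shareable file (`shared`) goes to the repository; the `key` mapping stays with the research team, so the data and the personal information live in separate places.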
Therefore, in 2018, we need to develop, promote and implement more ideas on how to make it easy to share qualitative information. Good starting points are the guidelines by the UK Data Service and the Qualitative Data Repository. Looking forward, a new paper explores how pre-registration and pre-analysis plans can improve qualitative inference (Kern and Gleditsch, 2017). The authors stress that a “transparent archiving of research questions, expected data generating processes, and intended designs could help in providing insights into the procedure of qualitative inference.” Here, you don’t focus so much on openness of the data itself, but establish a transparent process of designing a study and selecting evidence. In doing so, you can reduce researchers’ degrees of freedom as described by Joseph Simmons, Leif D. Nelson and Uri Simonsohn (2011).
It will be crucial to hear more from qualitative researchers on what the practical issues are around sharing data itself and/or transparency in the research process. Ideally there would be some funding for those who are willing to create best practice examples based on their own work.
Finally, let’s talk about incentives. When you prepare qualitative data files, or you pre-register a case study, you are investing lots of valuable time. The particular advantages of open qualitative research need to be clarified and promoted.
Replicate, replicate, replicate
We need more researchers doing replications. Rolf Zwaan et al. (2017) argue that replication studies have to become mainstream. The authors give advice on various types of replication studies and how to do them right. They also point out how we can assess the merit of those replication studies (thanks to Prof Neil Martin on Twitter for suggesting this article).
I’m curious about the results of ongoing replication projects that may be published in 2018. For example, the Social Sciences Replication Project (similar to Reproducibility Projects for Cancer Biology or Psychology) is still ongoing. Or keep an eye out for George Avelino & Scott Desposato’s project in which they assess the reproducibility of articles in three leading Brazilian social/political science journals. (If you know of ongoing replication projects in the social sciences please comment below, tweet or email me.)
Of course, more journals and funders need to acknowledge the merit of replication studies to give an incentive to do it in the first place.
Ticking time bomb – will 2018 be the year of social science scandals?
The gay marriage data fabrication scandal that made it into the media a while ago was the first notable reproducibility scandal in political science. By then, the new Data Access and Research Transparency (DA-RT) guidelines had already triggered a huge discussion in the field. But it was after the scandal around fabricated data and a retraction in Science that more researchers in political science took transparency seriously.
Other disciplines have had their own scandals and crisis moments (cancer biology, psychology, economics). Scandals often become a way forward, pushing new boundaries and persuading researchers, funders and journals that we can’t ignore sloppy science and low standards. In many social science fields, I expect that the rise in replication initiatives will also lead to more scandals.
Admitting errors – the rise of self-corrections
Many researchers are uncomfortable with comments on and corrections of their work (see my replication chains series). And it’s true – replicators might make mistakes themselves, and many times you can approach a problem in different ways and come to different results. Also, since it’s so much easier to run a replication study when the original data for an article is uploaded, there’s a bias towards ‘good’ researchers being scrutinised.
But what if it’s not someone else re-examining your work? What if you revisit one of your own earlier studies and cannot replicate the results? Should you contact the journal and risk a retraction?
I hope that in 2018, researchers will realise that self-corrections are to be applauded and almost deserve a transparency badge in themselves. Human errors, in particular errors by younger researchers, can happen, and many of them should be forgiven if the correction is handled professionally. Dorothy Bishop argues in her article “Fallibility in science: Responding to errors in the work of oneself and others” (pdf) that “mistakes are part of normal science.” When you publish a self-correction and admit to errors, you can even enhance your reputation for scientific integrity. Retraction Watch has already recognised that ‘doing the right thing’ deserves positive attention.
The good, the bad, and the ugly – framing the reproducibility crisis in a more nuanced way
Finally, I hope for a more nuanced and thoughtful discussion about reproducibility in 2018. Headlines in the media about a ‘crisis’ are a wake-up call, but they are also used by some to dismiss science altogether. Not every error in a study creates a ‘crisis’, and not every study that does not fully replicate signals scientific misconduct. Replicators have to communicate clearly which results could (not) be replicated, and put a ‘failure to replicate’ into a wider context. For those of us talking to the media: we should make sure that the discussion reflects not only the ugly and the bad, but also all the good work being done.