These are the best pieces I came across in recent months on replication, reproducibility, and data sharing. While not strictly about political science, they are inspiring and worth discussing. Collection #2 (5 March 2013 – 27 May 2013).
Replication Done Right
Replication Done Right by Rolf Zwaan on Friday, May 10, 2013: How to do a good replication (e.g., checking the structural integrity of the empirical foundations), with an example from Psychological Science, and how to write it up professionally without “pointing fingers”.
Failures to replicate in cancer research
Half of researchers have reported trouble reproducing published findings: MD Anderson survey by Retraction Watch Blog on May 16, 2013: A new study in PLOS ONE, “A Survey on Data Reproducibility in Cancer Research Provides Insights into Our Limited Ability to Translate Findings from the Laboratory to the Clinic,” found that 50% of cancer researchers have been unable to reproduce data from at least one previously published study.
Failure to replicate intelligence-priming in social psychology
Disputed results a fresh blow for social psychology by Nature (Alison Abbott) on 30 April 2013: Think about a professor – or about hooligans – just before taking an intelligence test. Will you perform better or worse? An influential theory that certain behaviour can be modified by unconscious cues was called into question by nine experiments, published in PLOS ONE, that failed to replicate this example. See also the New York Times on this.
Planning reproducible projects
Designing projects by Nice R Code Blog on 5 April 2013: Two post-docs describe how to set up a template for your data projects. Their project folders contain subdirectories such as original data, code, figures, and text. This post was so good it was reblogged by R-bloggers and Revolutions, and I also reblogged and extended their post here.
Blaming others for bad research
Fixing Science, Not Just Psychology by Discover Magazine on May 27, 2013: Bad research practices in psychology and neuroscience are a “formal” and “systemic” problem. Scientists should not be rewarded more for publishing statistically positive results. And while it is “rather unfair” that social psychologists get shamed far more than cognitive psychologists, “we’re no worse than anyone else” is never a good defense. All fields need to change.
Journals invite replication studies
APS Journal Seeks Labs to Participate in First Registered Replication Report Project by psychologicalscience.org on May 14, 2013: The Association for Psychological Science (APS) has launched an initiative to promote replication. The first protocol has been finalized, and editors are now accepting proposals from researchers who would like to contribute an independent replication. They call it the “Registered Replication Report” project; you can find more at the Guardian and on Daniel Simons’ blog. Similarly, the Journal of Research in Personality is calling for replication studies.
Failing to replicate does not mean fraud
It’s not a failure when you fail to replicate, by Pete Etchells on 3 May 2013: This article argues against shaming authors whose work could not be replicated. A failure to replicate a study does not mean that the original study was “wrong, poor, or fraudulently conducted.” All we learn is that the results of the replication did not match the original study – and this can happen for a number of reasons. A blog post by Simply Statistics asks a related question: when does a failure to replicate detect fraud?
Center for Open Science
Psychologists Launch a Bare-All Research Initiative by John Bohannon (sciencemag.org) on 5 March 2013: Psychologists are launching a project called the Center for Open Science. It is collaborating with an established journal, Perspectives on Psychological Science, to solicit work from authors who want to have their studies replicated. Authors will be asked to first publish an experimental design and then, after a public vetting, collect data; the findings appear in a separate publication. Authors will get credit for all steps in this process: experimental designs, peer review, delivering results, and replicating them. See also nationalgeographic.com on this initiative.
I’m not quoting the articles at length here because there are too many – some of which are mentioned on my own blog.
More ‘best of’ collections
… are here.