Guest post: Stop trusting other researchers

Guest post by L.J Zigerell: Current practice in the social sciences places trust in researchers regarding their data collection, analysis, and reporting of results. That trust is sometimes unwarranted. Instead, we should increase trust in social science by encouraging tools of reproducibility: replication studies, pre-registration, third-party data collection, and open data.


Teaching reproducibility for stats beginners – give me seven ideas!

I’m teaching reproducibility as part of a lecture for statistics beginners in the social sciences. I reserved seven slides for that and I have 15 minutes. What would you include?

I’m thinking of the following topics, and I would really appreciate input on how to rank them, or suggestions for anything I missed. Please comment below or on Twitter (@polscireplicate).

  • Definition of reproducibility and research transparency
  • Recent scandals in the social sciences
  • How to save your project files in a transparent way (structuring your files)
  • Software tools 1: RStudio and R scripts
  • Software tools 2: knitr and R Markdown
  • What can happen if you don’t work reproducibly

If you know of lecture slides or other materials, please share the links so I can give them to my students. I will also publish a link list on this blog. Thank you!


Repost: Do Political Scientists Care About Effect Sizes – Replication and Type M Errors

Repost of an article by Christopher Gandrud: Reproducibility has come a long way in political science. Many major journals now require that replication materials be made available, either on their websites or through a service such as the Dataverse Network. This is certainly progress. But what are political scientists actually supposed to do with this new information?



Guest Post: The Replication Paradox, by Michèle B. Nuijten

Will integrating original studies and published replications always improve the reliability of your results? No! Replication studies suffer from the same publication bias as original studies. In her guest post, Michèle B. Nuijten, who focuses on statistical errors and data manipulation in psychology, presents two solutions to this problem.


Share your replication: Political Science Replication Initiative

Earlier this year, we sent out a survey on replication in political science to the Political Methodology (PolMeth) mailing list. Our survey results, which we wrote up in a post on the Monkey Cage blog, indicated quite a bit of interest in a site to share replication studies.


Guest Post: Leading international studies journal takes replication seriously, by Joseph Young

Transparency and data access – these public goods are crucial principles for science. But why don’t researchers implement them? Political scientist Joseph K. Young discusses incentives for reproducibility, and how he is tracking down old replication data for the leading international studies journal ISQ.


What is a failed replication?

Many original authors are concerned about their reputation when their work is replicated and the replication fails. But when can we actually label a replication as “failed”? And how should we deal with unhappy original authors who feel ‘bullied’?

Against publication bias: Politics journal invites pre-registered papers

After fields like psychology have explored pre-registration of studies, political science is now following the trend toward more transparency in research. A high-ranking journal, Comparative Political Studies, plans a special issue in which all papers must be pre-registered and are submitted without the results section. I asked Michael Findley, one of the editors behind the initiative, about publication bias and transparency in the field.


Guest Post: Publishing a replication? Definitely worth repeating, by Chris Hartgerink

Chris Hartgerink is a research master’s student at Tilburg University, the Netherlands. I invited him to write a guest post about his experience of publishing a replication project. It turns out that, among many other lessons from doing a replication study, one of the main take-away points was that the replicator’s own work must be reproducible as well. He explains here why it is important to always have a second assessor check all the analysis code before submitting to a journal.


Guest Post: Two simple things to make your research more reproducible, by Thomas Wallis

In this guest post, experimental psychologist Thomas Wallis (University of Tübingen) proposes two simple ways to make your work more reproducible.

