The Guardian published a long read today on attempts to detect data errors or fabrication at scale. Using Statcheck, an app co-developed by Michele Nuijten, papers can now be automatically screened for possible errors. Unfortunately, Michele is hardly mentioned in the article and never becomes a vital part of the larger story. Gender bias?
Being able to detect data errors automatically raises all sorts of questions. For example: should we publish a complete list of potentially fraudulent papers, as Chris Hartgerink did? Wouldn’t that damage the reputation of papers whose errors were actually very small and not due to fraud?
The Guardian article suggests that younger, more progressive scholars are all for it, while more established ones “feared an intrusive new world of public blaming and shaming”.
All this is important in the replication discussion, but I noticed something else.
Gender bias in the article
The Guardian article is a good read, but I have one issue with it: it portrays the whole story of scientific transparency, and the new development of automated error detection in particular, as a story of ‘detective’ men.
Michele Nuijten, who co-developed the software, does not even get a quote or become a substantial part of the narrative. In fact, she just received the Leamer-Rosenthal Prize for Open Social Science from the Berkeley Initiative for Transparency in the Social Sciences!
I think the Guardian article, otherwise a great read, reflects a problem in the already gendered field of science, and the author and the editors should have noticed this.