A new article by researchers at the University of Amsterdam shows that publication bias towards statistically significant results may cause p-value misreporting. The team examined hundreds of published articles and found that authors had reported p-values < .05 when they were in fact larger. They conclude that publication bias may incentivize researchers to misreport results.
I have recently had a paper rejected, with one of the reviewers stating that I presented “primarily a big set of null findings across a variety of models.” This is, apparently, not uncommon. A growing body of studies provides evidence that rejecting articles with non-significant findings (p-values above .05) is common in the social sciences (Fanelli 2010, Mervis 2014).
A new article by Ivar Vermeulen and his co-authors (2015) now provides evidence that such publication bias may cause researchers to misreport their p-values, so that their findings seem statistically significant below the cut-off point of .05, when in fact they are not.

Clear bias toward reporting p-values too low
In communication science, 8.8% of reported p-values were misreported, and in social psychology 8.7%. Some of the misreported p-values were critical errors that led non-significant results to be presented as significant, a clear advantage if you want to get published. Interestingly, the authors also found that p-value misreporting is more common in higher-impact journals and in studies with smaller samples.
As the authors put it: “With respect to errors in p-value reporting, our results show that p-value misreporting in communication science literature is rather frequent, similarly frequent to the field of social psychology (…). It is disturbing that, rather than being innocent mistakes, these errors appear to be driven by researchers’ motivations to demonstrate significant relationships. That is, our results reveal that erroneously reducing p-values (especially in favor of significance) occurs significantly more often than erroneously enlarging p-values.”
Proposed solutions: Pre-registration and replication
The authors propose several solutions that journals and the scientific community as a whole could adopt to tackle the problem. For example:
- Journals should ask editors and reviewers to run an automated statistics check on every incoming manuscript (a minimal sketch of what such a check could look like follows this list)
- Increase transparency, e.g., by posting research data and analysis files in public repositories and promoting replication research
- Pre-registration of studies, where researchers submit their hypotheses and analysis plans before starting data collection
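To give an idea of what such an automated statistics check could look like, here is a minimal sketch in Python, in the spirit of tools such as the R package statcheck, but not the authors’ own procedure. It assumes SciPy is available and that a manuscript reports the test statistic, degrees of freedom, and p-value for a t-test; it recomputes the two-sided p-value and flags discrepancies, especially those that push a result below the .05 threshold. The function name check_reported_t_test and the tolerance value are illustrative choices, not part of the study.

```python
from scipy import stats

def check_reported_t_test(t_value, df, reported_p, alpha=0.05, tolerance=0.0005):
    """Recompute the two-sided p-value for a reported t-test and compare it
    with the reported p-value.

    'inconsistent' marks any discrepancy beyond the tolerance;
    'gross_error' marks cases where the reported and recomputed values
    fall on opposite sides of the significance threshold.
    """
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)  # two-sided p-value
    inconsistent = abs(recomputed_p - reported_p) > tolerance
    gross_error = inconsistent and (reported_p < alpha) != (recomputed_p < alpha)
    return {
        "reported_p": reported_p,
        "recomputed_p": round(recomputed_p, 4),
        "inconsistent": inconsistent,
        "gross_error": gross_error,
    }

# Example: t(28) = 1.70 reported as p = .04. The recomputed two-sided p is
# about .10, so the reported value is flagged as a gross error in favor of
# significance.
print(check_reported_t_test(t_value=1.70, df=28, reported_p=0.04))
```

A full check would, of course, also need to parse the reported statistics out of the manuscript text and handle other test types (F, chi-square, correlations), but the core comparison stays the same: recompute the p-value from the reported statistic and flag mismatches.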
Read the full study
Ivar Vermeulen, Camiel J. Beukeboom, Anika Batenburg, Arthur Avramiea, Dimo Stoyanov, Bob van de Velde, and Dirk Oegema, Blinded by the Light: How a Focus on Statistical “Significance” May Cause p-Value Misreporting and an Excess of p-Values Just Below .05 in Communication Science, Communication Methods and Measures, Vol. 9, Iss. 4, 2015.