Can we trust published articles in political science? A recent paper suggests we should be sceptical. When the published results of survey experiments are compared with the pre-registered plans for the same studies, a great deal of information turns out to be lost: 80 percent of the studies failed to report all experimental conditions and planned outcomes.
Many transparency advocates call for pre-registration of planned research online. This practice is only slowly being taken up in political science (e.g. by the journal CPS), but there are some instances in which we can already examine the difference between an initial research proposal and the published outcome.
In the study “Underreporting in Political Science Survey Experiments: Comparing Questionnaires to Published Results”, authors Annie Franco, Neil Malhotra and Gabor Simonovits from Stanford University examined how accurately published findings in political science survey experiments reflected the original study designs. Did researchers fail to report and adjust for multiple testing? Did they omit undesired results, or outcome variables and conditions deemed unimportant?
The Stanford researchers downloaded questionnaires that are publicly available as part of the Time-sharing Experiments in the Social Sciences (TESS) program. The program, which is funded by the National Science Foundation, invites investigators to submit proposals for experiments, and then fields successful proposals for free on a representative sample of adults in the United States.
Franco et al. collected pre-analysis plans and questionnaires for 249 studies that were conducted between 2002 and 2012; out of these, 53 studies made it into peer-reviewed political science journals such as the American Journal of Political Science or Public Opinion Quarterly.
When they compared the planned design features against what was reported in published articles, they found:
- 30% of papers report fewer experimental conditions in the published paper than appear in the questionnaire;
- 60% of papers report fewer outcome variables than are listed in the questionnaire;
- 80% of papers fail to report all experimental conditions and outcomes.
This is a problem, as the authors state:
“These practices lead to underestimation of type I errors. If this underreporting is selective, then published effect sizes are likely to be overestimated.”
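The intuition behind the authors' warning can be made concrete with a small calculation (mine, not theirs): if a study measures several independent outcomes with no true effects, and only the "significant" ones are reported, the chance of reporting at least one false positive grows quickly with the number of outcomes tested.

```python
# Illustrative sketch (not from the paper): the family-wise error rate
# for k independent tests at significance level alpha is
#     1 - (1 - alpha)^k
# i.e. the probability of at least one false positive when no true
# effect exists. At alpha = 0.05 this climbs well above the nominal 5%.

def familywise_error_rate(k: int, alpha: float = 0.05) -> float:
    """Probability of at least one false positive across k independent tests."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10):
    print(f"{k:2d} outcomes tested -> P(>=1 false positive) = {familywise_error_rate(k):.2f}")
```

Running this shows the rate rising from 5% for a single outcome to roughly 40% for ten, which is why unreported outcome variables make the effective Type I error rate of a paper much larger than the stated one.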