Published articles understate the probability of type I errors

Can we trust published articles in political science? A recent paper suggests we should be sceptical. When the published results of survey experiments are compared with the pre-registered plans for the same studies, a lot of information turns out to be missing: 80 percent of the studies failed to report all experimental conditions and planned outcomes.


Many transparency advocates call for the pre-registration of planned research online. The practice is only slowly being taken up in political science (e.g. by the journal Comparative Political Studies), but there are already some settings in which we can examine the difference between an initial research proposal and the published outcome.

In the study “Underreporting in Political Science Survey Experiments: Comparing Questionnaires to Published Results”, Annie Franco, Neil Malhotra and Gabor Simonovits from Stanford University examined how faithfully published findings in political science survey experiments reflect the studies as designed. Did researchers fail to report and adjust for multiple testing? Did they leave out undesired results, or outcome variables and conditions they deemed unimportant?
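
For readers unfamiliar with the term, here is a minimal sketch of what adjusting for multiple testing looks like in practice. This is a hypothetical illustration, not code from the paper; the p-values are made up, and it uses the multipletests function from statsmodels.

```python
# Hypothetical example (not from the paper): with m outcomes tested at
# level alpha, a Bonferroni correction tests each at alpha / m, so the
# chance of *any* false positive across the family stays below alpha.
from statsmodels.stats.multitest import multipletests

p_values = [0.004, 0.03, 0.04, 0.20]  # made-up p-values for four outcomes
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="bonferroni")
print(p_adj)    # [0.016 0.12 0.16 0.8] -- each p-value multiplied by 4
print(reject)   # [ True False False False] -- only one result survives
```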

The Stanford researchers downloaded questionnaires that are publicly available as part of the Time-sharing Experiments in the Social Sciences (TESS) program. The program, which is funded by the National Science Foundation, invites investigators to submit proposals for experiments, and then fields successful proposals for free on a representative sample of adults in the United States.

Franco et al. collected pre-analysis plans and questionnaires for 249 studies that were conducted between 2002 and 2012; out of these, 53 studies made it into peer-reviewed political science journals such as the American Journal of Political Science or Public Opinion Quarterly.

When they compared the planned design features against what was reported in published articles, they found:

  1. 30% of papers report fewer experimental conditions than appear in the questionnaire;
  2. 60% of papers report fewer outcome variables than are listed in the questionnaire;
  3. 80% of papers fail to report all experimental conditions and outcomes.

This is a problem, as the authors state:

“These practices lead to underestimation of type I errors. If this underreporting is selective, then published effect sizes are likely to be overestimated.”
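
To see why selective reporting inflates the type I error rate, consider a minimal simulation. This is my own sketch, not the authors' code: each simulated study measures five outcomes with no true treatment effect, and the analyst reports a finding whenever any one outcome clears the conventional |z| > 1.96 threshold.

```python
# Sketch (not from the paper): under the null, testing k outcomes and
# reporting any "significant" one inflates the false-positive rate from
# the nominal 5% to roughly 1 - 0.95**k (about 23% for k = 5).
import numpy as np

rng = np.random.default_rng(0)
n, k, sims = 100, 5, 10_000            # sample size, outcomes, simulated studies
false_positives = 0
for _ in range(sims):
    treat = rng.normal(size=(n, k))    # k outcomes, no true effect anywhere
    control = rng.normal(size=(n, k))
    diff = treat.mean(axis=0) - control.mean(axis=0)
    se = np.sqrt(treat.var(axis=0, ddof=1) / n + control.var(axis=0, ddof=1) / n)
    if np.any(np.abs(diff / se) > 1.96):   # report whichever outcome "worked"
        false_positives += 1
print(false_positives / sims)          # ~0.23, far above the nominal 0.05
```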
The study shows the value of publishing pre-analysis plans, and the Stanford researchers hope that more journals will require them from authors. At the moment, as Franco et al. stress, we still face a “collective action problem” in the field of political science.

As long as standard research practices require neither the full disclosure of data and code nor the publication of pre-analysis plans, publication bias, underreporting and p-value hacking are likely to continue.

2 thoughts on “Published articles understate the probability of type I errors”

  1. ingorohlfing says:

    That’s interesting. It suggests we need strong mechanisms to ensure that the published study conforms with the pre-registered one. When the pre-registered study is publicly available, everybody can check, but maybe that is not enough. One option is to require authors to declare that their publication fully accords with the pre-registered plan and, if not, to clearly highlight the differences.

