The largest International Relations conference, ISA’s 55th Annual Convention (March 26-29, 2014), features two panels on replication and reproducibility this year. The second panel covers “Replication in International Relations: Successes & Failures in Practice.”
This second of two panels discusses questions such as: Why provide data? Why engage in replication? What are typical errors detected through replication? What can the field gain from replication in terms of improving methodology and good research practice? [Panel 1 here]
The papers will contribute to a better understanding of conceptual, statistical and practical challenges to replication.
Nils Petter Gleditsch (Peace Research Institute Oslo, PRIO): Posting Your Data: Will You Still Get Famous?
Gleditsch, Metelits & Strand (2003) provided evidence that sharing data can as much as double the expected number of citations for an article in Journal of Peace Research. This finding was later criticized by Abbot (2007), who points out that the positive impact of posting replication data holds mainly for a small number of top articles. This paper revisits the original study with an expanded dataset. First, we extend the empirical analysis by adding articles from ten additional volumes of the journal. Second, we take a closer look at the original dataset, where the articles have now had another decade to gather citations and the citation patterns are presumably more stable. We also make comparisons with other journals that provide replication data.
Fernando Martel García (Harvard University): Scientific Progress in the Absence of New Data: A Procedural Replication of Ross (2006)
Estimating the effect of macro variables like democracy on aggregate outcomes like child mortality remains a formidable challenge. When new data are limited, theories imprecise, and experimentation impossible, findings are driven mostly by modeling assumptions and inadvertent errors. Here I propose procedural replication as a method of generating objective evidence capable of demonstrating new insights in the absence of new data. I develop a simple Bayesian framework for replication studies, distinguish five different types of replication, and show how procedural replication can improve answers to existing research questions, tether inferences to data, and generate checklists for cumulative research. A procedural replication of Ross’s (2006) controversial finding that democracy has no effect on child mortality shows this null finding to be an artifact of questionable procedures and software. Addressing these shortcomings demonstrates that democracy is associated with lower child mortality, with downstream implications for political conditionality in the allocation of foreign aid.
Todd Tucker (University of Cambridge): Companies Suing Countries: Bad for Development?
Arbitration by investors against countries under international investment treaties has surged in recent years, from a few dozen cases in 2000 to over 500 today. Most of the countries sued are developing nations, provoking the question: is investment arbitration bad for development? I replicate and extend two studies to show that it matters how we define “developing country.” The standard OECD and World Bank country classifications show little bias, measured as a greater propensity for developing countries to lose than developed countries (Franck 2009). But more nuanced measures show that countries with weaker institutions are more likely to be sued (Freeman 2013). I extend this work by showing that countries undergoing rapid economic change are also more likely to be sued. This contributes to the international affairs literature on the types and timing of the costs of investment treaties, and also demonstrates the importance of thorough replication work for uncovering weaknesses in existing studies.
Ingo Rohlfing (Bremen International Graduate School of Social Sciences): Replication in Case Studies and Process Tracing – Formulating and Implementing Standards
Qualitative research has standards regarding the handling of sources, such as providing a list of interview partners or of the primary and secondary sources consulted in the course of the analysis. While these are necessary steps that provide the basis for replication, they are not sufficient, and their insufficiency unnecessarily inhibits the replication of qualitative research. The goal of this contribution is to formulate standards for qualitative research that establish a maximum degree of transparency about the collection and processing of evidence for the generation of causal inferences. The current qualitative practice of listing primary sources, interview partners, etc. is deficient because actual observations are the basis for causal inference. Consequently, the standard for replication must be to provide a list of what are now called causal process observations. Every single observation, such as a statement in an interview or a specific sentence in a primary source, is to be detailed in combination with the inference that the qualitative researcher derived from it. Using an example from the field of international political economy, I demonstrate the current problems with replication in qualitative research and how the proposed standards can be achieved via the use of data analysis software.
The replication panels
Friday, March 28
8:15 AM – 10:00 AM
FA10: Replication in International Relations: How Journal Data Policies and Replication in Teaching Can Improve Reproducibility Standards
Willow Centre, Sheraton Centre Toronto
Immediately followed by:
10:30 AM – 12:15 PM
FB10: Replication in International Relations: Successes and Failures in Practice
Read the full conference program
ISA’s 55th Annual Convention
Spaces and Places – Geopolitics in an Era of Globalization
March 26th – 29th, 2014, Toronto, Canada