Transparent reporting, replications and open data are vital for scientific progress and for developing useful knowledge for practice. However, public administration is not fully transparent (for instance, null effects are seldom published), replications are almost never conducted, let alone published, and few open datasets are available. We do not have a fully open research culture. In this article, I first argue that this is problematic. Second, I show how we can make progress. At the moment, we are facing a collective action problem: the research community would benefit if we promoted an open research culture, but individual scholars lack incentives. One fruitful way to move forward is for journals like Public Administration Review to step in and actively promote values like transparency, openness and replication. This can be done by adopting – in a thoughtful and nuanced way – the recently developed Transparency and Openness Promotion (TOP) guidelines for journals.

Note: Previously posted on the Public Administration Review website and written in my previous role as an Associate Editor of Public Administration Review.

A need for transparency, replications and open data

Transparent reporting, open data and replications are crucial elements for scientific progress. In a transparent science, for instance, both null results and statistically significant results are published, improving our evidence base. However, the current culture rewards publishing statistically significant results. In a recent article in Science, Franco et al. (2014) show that there is a strong publication bias in the social sciences. One reason is that scholars do not write up their null results and submit them for peer review. Many authors believe that a null result can be interesting, but that it has limited publication potential. As one scholar noted (see Franco et al., 2014:1504):

 “I think this is an interesting null finding, but given the discipline’s strong preference for P<0.05, I haven’t moved forward with it.”

We would probably find similar results if we conducted such a survey in the public administration community. This is worrisome: when null results exist but are not shared openly, public administration scholars and practitioners receive biased information (Rosenthal, 1979).

Related to this, replications of qualitative or quantitative research are seldom published in the public administration community. This lack of replications is problematic, as replications reveal the external validity of knowledge and its boundary conditions (Freese, 2007). For instance, a survey among American public managers might find that red tape is negatively related to job satisfaction. A direct replication of this finding among – say – Chinese public managers might yield a different result. A failure to replicate in a field setting is not problematic by definition (cf. Open Science Collaboration, 2015). In fact, it can be quite interesting to discuss the context dependence of such relationships. However, very few direct replications have to date been published in public administration journals. Scholars may feel that novel ideas are more valued by editors and reviewers than replications.

Lastly, it would be beneficial for our discipline if the datasets on which empirical articles are based were openly available. In a recent article in PAR, Ken Meier (2015:20) states that “all journals should require that datasets used for publication be archived and publicly available.” He argues that this gives other scholars the opportunity to build on existing work when extending their own research. It also avoids losing databases simply because they were not archived. I fully understand that some scholars cannot share their databases, for instance because of intellectual property rights or ethical concerns. At the moment, however, almost no databases are shared. In an open research culture, sharing data is the norm, and not sharing is the exception.

Moving forward: Transparency, Openness and Preregistration guidelines

It is clear that the public administration community would benefit from a research culture where null results are published, where ‘successful’ and ‘unsuccessful’ replications are conducted and published, and where the majority of databases on which studies are based are available. Many scholars would probably agree (De Vries et al., 2006). However, individual scholars lack incentives. Put crudely, they feel that null results and replications are hard to publish, and wonder ‘what’s in it for me’ when considering sharing their data. In sum, we face a collective action problem.

One potentially fruitful way to move forward is for public administration journals to step in and actively promote an open research culture. This would change the incentives for scholars. To stimulate this, Nosek and colleagues (2015) published the Transparency and Openness Promotion (TOP) guidelines. Public administration journals like PAR could adopt (parts of) these TOP guidelines in a nuanced and thoughtful way, adapting them to the context of publishing in public administration.

There are eight standards: citation standards; data transparency; analytic methods transparency; research materials transparency; design and analysis transparency; preregistration of studies; preregistration of analysis plans; and replication. Detailed information about the standards can be accessed via https://cos.io/top/. Each standard has four levels, which gives public administration journals the opportunity to choose which level is most beneficial at which point in time. The levels are shown below, with data transparency as an example (Nosek et al., 2015:1424). Data transparency is one of the eight TOP standards and is in line with Meier’s (2015:20) recommendation for open databases:

  • Level 0: This is the basic level. The journal says nothing about data transparency. PAR, like all other public administration journals, is currently at this level.
  • Level 1: This level is designed to “have little to no barrier to adoption while also offering an incentive for openness”. PAR articles would state whether data are available; data sharing itself is not obligatory. If data are available, the article states where to access them.
  • Level 2: This level “has stronger expectations for authors but usually avoids adding resource costs to editors or publishers that adopt the standard.” The data underlying PAR articles must be posted to a trusted repository. However, exceptions are possible; these must be identified at article submission. Exceptions can be necessary, for instance, to prevent identification of subjects or to allow the initial creators/funders of a database to recover their large investments.
  • Level 3: This level could present implementation challenges for large journals like PAR. In addition to the requirement that data be posted to a trusted repository, reported analyses must be independently reproduced before publication. As with all levels, exceptions are possible if identified at article submission.

Major disciplinary journals have already taken up (parts of) the TOP guidelines, including Science, the American Journal of Political Science and Psychological Science. For a full list of journals, see https://cos.io/top/#signatories. No public administration journals have yet adopted the TOP guidelines. I argue that we should not adopt everything all at once, moving to level 3 for all eight standards. We know from scholarly work on the diffusion and adoption of innovations that we should make innovations compatible with their context and try out which elements do and do not work (Rogers, 2003). Therefore, I suggest that journals like PAR consider which TOP standards to adopt at which level, and study to what extent this is beneficial. We should take into account that public administration is a diverse field, with different positions regarding philosophy of science and methods. In the end, we should aim towards a more transparent and open research culture, where null results are published, studies are replicated and the majority of databases are openly available. This can be beneficial for scientific progress and for developing useful knowledge (Perry, 2012) for public administration practitioners.


De Vries, Raymond, Anderson, Melissa S., & Martinson, Brian C. (2006). Normal misbehavior: Scientists talk about the ethics of research. Journal of Empirical Research on Human Research Ethics, 1(1), 43-50.

Franco, Annie, Malhotra, Neil, & Simonovits, Gabor (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502-1505.

Freese, Jeremy (2007). Replication standards for quantitative social science: Why not sociology? Sociological Methods & Research, 36(2), 153-172.

Meier, Kenneth J. (2015). Proverbs and the evolution of public administration. Public Administration Review, 75(1), 15-24.

Nosek, Brian A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., … & Contestabile, M. (2015). Promoting an open research culture: Author guidelines for journals could help to promote transparency, openness, and reproducibility. Science, 348(6242), 1422-1425.

Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716-1-8.

Perry, James L. (2012). How can we improve our science to generate more usable knowledge for public professionals? Public Administration Review, 72(4), 479-482.

Rogers, Everett M. (2003). Diffusion of innovations. New York: Simon and Schuster.

Rosenthal, Robert (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638.

Note: Previously published at Public Administration Review, Speak Your Mind (www.publicadministrationreview.org), Lars Tummers, Utrecht University.

