David Davis Inadvertently Shows Why Transparency in All Forms of Research is Important

7 December 2017

Michel Barnier, Chief Negotiator and Head of the Taskforce of the EC for the Preparation and Conduct of the Negotiations with the United Kingdom under Article 50 of the TEU, receives David Davis, British Secretary of State for Exiting the European Union.

In the last 24 hours, academics, executives, and other cerebral people here in the UK have been astounded by the revelation that the UK government did not actually produce the Brexit sectoral impact studies it had previously claimed to have produced. The earlier position of David Davis, the hapless minister in charge of exiting the EU, was that impact assessments for 57 or 58 different sectors of the economy did exist, but that he couldn’t show them to either the public or to his fellow MPs. Davis’s previous position had fed intensive speculation that the studies would show that Brexit would damage most or all of the 57 sectors surveyed. Testifying before a parliamentary committee yesterday, Davis announced that no actual studies had been conducted. He remained sanguine that Brexit would be very good for the overall UK economy, and for all 57 sectors of it, but he refused to elaborate on what sort of research methods allowed him to come to this conclusion. Davis continues to maintain that Brexit will be a net benefit, although he seems to have modified his view that a hard Brexit (the so-called Canada Option) would confer more benefits than a soft Brexit (the so-called Norway Model). Of course, Davis never presented anything resembling a coherent social-scientific study of the costs and benefits of either of these models for structuring the future EU-UK relationship.

In fact, he declared that commissioning experts to write detailed forecasts and scenario plans was useless, since experts can’t really provide helpful advice to policymakers:
I am not a fan of economic models as they have all been proven wrong. When you have a paradigm change as in 2008, all the models are wrong. As we are dealing with here [with a] free trade agreement or a WTO outcome, it’s a paradigm change.

Davis’s remarks, which are further evidence of rising skepticism about academic expertise and “System 2 thinking” more generally, have generated a storm of debate, particularly among those of us who believe in evidence-based policy. They were a reminder of Michael Gove’s now-infamous statement that the UK had had “enough of experts,” and of the people who prioritize feeling, “gut instinct,” and faith over science and reason in dealing with issues ranging from GMOs to global warming.

The non-existence and non-publication of the 57 sectoral studies is certainly an important issue, since such reports can help to guide policy decisions (e.g., the choice between the Norway and Canada models) and provide valuable information to investors, firms, and households that allows them to adjust their own strategies prior to Brexit. (If the reports said that a hard Brexit would likely destroy jobs in car manufacturing but create them in fish processing, that intel could be valuable to estate agents in Sunderland or to young people currently deciding which skill sets to acquire.) Governments can help markets to work better by supplying people with useful information. However, I’m not writing this post to point out the various ways in which commissioning and publishing the 57 studies would improve either policy decisions or the functioning of markets. Instead, I want to make a more fundamental point about why increased transparency in all forms of research, academic and governmental, is desirable.

By increased research transparency, I mean that people who present findings need to show their work: to show in greater detail than has hitherto been the case how they arrived at a given set of conclusions, whether those conclusions are “Brexit will be good for the UK economy” or “avoid carbs” or “CO2 emissions will likely cause sea levels to rise”. Norms in many academic disciplines and in policymaking have shifted in recent years in favour of greater transparency.

For those academics who do research that informs public policy and/or private-sector decision-making (that includes me in a modest way, as today’s hearings at the Supreme Court of Canada show), increased research transparency is doubly important. In my own field of business history, I have been advocating Open Data and the adoption of a form of Active Citation. Andrew Nelson, a qualitative researcher at the Lundquist Center for Entrepreneurship at the University of Oregon, has been advocating more or less the same thing in his home field, which is organization studies (see here).


For several years, the Berkeley Initiative for Transparency in the Social Sciences has been working to promote the adoption of more rigorous research-transparency institutions in the social sciences. Their annual conference, which concluded yesterday, included a paper that deals with precisely the issues raised by David Davis’s shambolic performance in parliament yesterday, namely the ways in which transparency and reproducibility can increase the credibility of policy analysis. The paper, by Fernando Hoces de la Guardia, is about the US context and the battles over how to interpret the results of Seattle’s famous experiment with a $15 per hour minimum wage, but there are lessons of broader applicability that should be observed by all sides in the various debates related to Brexit. More importantly, it supports my contention that all researchers, whether academic or governmental, need to be more transparent if we are to regain the trust of stakeholders.


How Transparency and Reproducibility Can Increase Credibility in Policy Analysis: A Case Study of the Minimum Wage Policy Estimate
Fernando Hoces de la Guardia

Abstract: The analysis of public policies, even when performed by the best non-partisan agencies, often lacks credibility (Manski, 2013). This allows policy makers to cherry-pick between reports, or within a specific report, to select estimates that better match their beliefs. For example, in 2014 the Congressional Budget Office (CBO) produced a report on the effects of raising the minimum wage that was cited both by opponents and supporters of the policy, with each side accepting as credible only partial elements of the report. Lack of transparency and reproducibility (TR) in a policy report implies that its credibility relies on the reputation of the authors, and their organizations, instead of on a critical appraisal of the analysis.

This dissertation translates to policy analysis solutions developed to address the lack of credibility in a different setting: the reproducibility crisis in science. I adapt the Transparency and Openness Promotion (TOP) guidelines (Nosek et al, 2015) to the policy analysis setting. The highest standards from the adapted guidelines involve the use of two key tools: dynamic documents that combine all elements of an analysis in one place, and open source version control (git). I then implement these high standards in a case study of the CBO report mentioned above, and present the complete analysis in the form of an open-source dynamic document. In addition to increasing the credibility of the case study analysis, this methodology brings attention to several components of the policy analysis that have been traditionally overlooked in academic research, for example the distribution of the losses used to pay for the increase in wages. Increasing our knowledge in these overlooked areas may prove most valuable to an evidence-based policy debate.
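To give a concrete sense of what a “dynamic document” looks like in practice, here is a minimal sketch in Python. To be clear, this is my own illustration rather than code from Hoces de la Guardia’s dissertation, and the input values are placeholders rather than the CBO’s actual estimates. The underlying idea is simply that every figure quoted in the rendered text is computed from stated inputs at build time, so a skeptical reader can re-run the script, trace each number back to its source, and, with the whole thing under git, see exactly how the analysis changed over time.

# A minimal sketch of a "dynamic document": every number in the rendered
# paragraph is computed from the inputs below, never typed in by hand.
# The input values are illustrative placeholders, NOT the CBO's estimates.

from string import Template

# Hypothetical inputs a minimum-wage analysis might begin from.
workers_affected = 16_500_000                 # workers projected to get a raise
jobs_lost_central = 500_000                   # central employment-effect estimate
jobs_lost_low, jobs_lost_high = 0, 1_000_000  # plausible range around it

report = Template(
    "Raising the minimum wage would lift pay for roughly $affected "
    "million workers, while reducing employment by about $central jobs "
    "(plausible range: $low to $high)."
)

# Render the report; change an input above and the prose updates itself.
print(report.substitute(
    affected=round(workers_affected / 1e6, 1),
    central=f"{jobs_lost_central:,}",
    low=f"{jobs_lost_low:,}",
    high=f"{jobs_lost_high:,}",
))

A real dynamic document would pull the inputs from raw data files and render a full report rather than a single paragraph, but the principle is the same: no number appears in the text that cannot be traced mechanically back to its source.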