Moving Towards Open Data in Business History

13 04 2018

I have long been an advocate of Open Data and of the adoption of an Open Data norm in the field of business history. I recently published a paper in the journal Business History that outlines why our field needs to adopt Open Data and a system called Active Citation. In practice, Open Data would mean that whenever a business historian cites a primary source (e.g., a letter in an archive or an article in a historic newspaper), the footnote must include a hyperlink to a scanned image of the document. This system would have a number of advantages. First, it would accelerate the digitization of primary sources: once a primary source has been put online for one purpose, it can be re-used by another researcher. Moreover, the creation of an Open Data rule in business history would be yet another victory for the research transparency movement. In the last half decade, a variety of academic disciplines have embraced research transparency, and Open Data is a big part of that movement. The requirement that raw data be published alongside the article based on that data is designed to counteract the impression that researchers sometimes use data selectively or in an otherwise unprincipled way.
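To make the idea concrete, here is a minimal sketch (in Python, with an entirely hypothetical archival reference and URL, since the paper itself does not prescribe any particular format) of how an Active Citation footnote might pair a conventional archival reference with a persistent link to a scanned image of the source:

```python
# A minimal, hypothetical sketch of an "active citation": the conventional
# archival reference is paired with a persistent link to a scanned image.
# The archive name, box number, and URL below are invented placeholders.

def active_citation(reference: str, scan_url: str) -> str:
    """Return a footnote that cites the primary source and links to its scan."""
    return f"{reference}. Scanned image: {scan_url}"

footnote = active_citation(
    reference="J. Smith to T. Brown, 12 March 1911, Example Bank Archive, Box 4, File 17",
    scan_url="https://example.org/scans/smith-to-brown-1911-03-12.jpg",
)
print(footnote)
```

The point of the sketch is simply that the link becomes part of the citation itself, so any reader can inspect the underlying document rather than taking the author's reading of it on trust.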


Although the impetus for research transparency and Open Data has come largely from academics concerned about data mis-representation, the movement has been able to make so much progress in recent years because it has had a backer with deep pockets, the philanthropist John Arnold. Arnold was recently profiled in Wired magazine. I would encourage anyone interested in Open Data and Research Transparency to check out this article.

In view of the importance of Open Data to the future of the field of business history, it is exciting to see that an increasing number of business-historical data sources are being made freely available online.

I see from The Exchange, the blog of the Business History Conference, that The Newberry Library in Chicago has announced a major revision to its policy regarding the re-use of collection images: “images derived from collection items are now available to anyone for any lawful purpose, whether commercial or non-commercial, without licensing or permission fees to the library.”

This reform would certainly make it easier for researchers who base their papers on materials in the Newberry's collection to practise Open Data. I would like to congratulate the Newberry Library on this wise decision and to encourage other repositories of business-historical materials to follow its example whenever they are legally able to do so.





David Davis Inadvertently Shows Why Transparency in All Forms of Research is Important

7 12 2017

Michel Barnier, Chief Negotiator and Head of the Taskforce of the EC for the Preparation and Conduct of the Negotiations with the United Kingdom under Article 50 of the TEU, receives David Davis, British Secretary of State for Exiting the European Union.

In the last 24 hours, academics, executives, and other cerebral people here in the UK have been astounded by the revelation that the UK government did not actually produce the Brexit sectoral impact studies it had previously claimed to have produced. The earlier position of David Davis, the hapless minister in charge of exiting the EU, was that impact assessments for 57 or 58 different sectors of the economy did exist, but that he couldn't show them to either the public or to his fellow MPs. Davis's previous position had fed intensive speculation that the studies would show that Brexit would damage most or all of the 57 sectors surveyed. Testifying before a parliamentary committee yesterday, Davis announced that no actual studies had been conducted. Davis remained sanguine that Brexit would be very good for the overall UK economy, and for all 57 sectors of it, but he refused to elaborate on what sort of research methods allowed him to come to this conclusion. Davis continues to maintain that Brexit will be a net benefit, although he seems to have modified his view that a hard Brexit (the so-called Canada Option) would confer more benefits than a soft Brexit (the so-called Norway Model). Of course, Davis never presented anything resembling a coherent social-scientific study of the costs and benefits of either of these models for structuring the future EU-UK relationship.

In fact, he argued that commissioning experts to write detailed forecasts and scenario plans was pointless, since experts can't really provide helpful advice to policymakers. He declared:
I am not a fan of economic models as they have all been proven wrong. When you have a paradigm change as in 2008, all the models are wrong. As we are dealing with here [with a] free trade agreement or a WTO outcome, it’s a paradigm change.

Davis’s remarks, which are further evidence of rising skepticism about academic expertise and “System 2 thinking” more generally, have generated a storm of debate, particularly among those of us who believe in evidence-based policy. They were a reminder of Michael Gove’s now infamous statement that the UK had had “enough of experts” and of the people who prioritize feeling, “gut instinct,” and faith over science and reason in dealing with issues ranging from GMOs to global warming.

The non-existence and non-publication of the 57 sectoral studies is certainly an important issue, since such reports can help to guide policy decisions (e.g., the choice between the Norway and Canada models) and provide valuable information to investors, firms, and households that would allow them to adjust their own strategies prior to Brexit. [If the reports said that a hard Brexit would likely destroy jobs in car manufacturing but would likely create them in fish-processing, that intel could be valuable to estate agents in Sunderland or to young people currently deciding which skill sets to acquire]. Governments can help markets to work better by supplying people with useful information. However, I’m not writing this post to point out the various ways in which commissioning and publishing the 57 studies would improve either policy decisions or the functioning of markets. Instead, I want to make a more fundamental point about why increased transparency in all forms of research, academic and governmental, is desirable.

By increased research transparency, I mean that people who present findings need to show their work: to show in greater detail than has hitherto been the case how they arrived at a given set of conclusions, whether those conclusions are “Brexit will be good for the UK economy” or “avoid carbs” or “CO2 emissions will likely cause sea levels to rise”. Norms in many academic disciplines and in policymaking have shifted in recent years in favour of greater transparency.

For those academics who do research that informs public policy and/or private-sector decision-making (that includes me in a modest way, as today’s hearings at the Supreme Court of Canada show), increased research transparency is doubly important. In my own field of business history, I have been advocating for Open Data and the adoption of a form of Active Citation. Andrew Nelson, a qualitative researcher at the Lundquist Center for Entrepreneurship at the University of Oregon, has been advocating more or less the same thing in his home field, organization studies (see here).


For several years, the Berkeley Initiative for Transparency in the Social Sciences has been working to promote the adoption of more rigorous research-transparency institutions in the social sciences. Their annual conference, which concluded yesterday, included a paper that deals with precisely the issues raised by David Davis’s shambolic performance in parliament, namely the ways in which transparency and reproducibility can increase the credibility of policy analysis. The paper, by Fernando Hoces de la Guardia, is about the US context and the battles over how to interpret the results of Seattle’s famous experiment with a $15 per hour minimum wage, but there are lessons of broader applicability that should be observed by both, or rather all, sides in the various debates related to Brexit. More importantly, it supports my contention that all researchers, whether academic or governmental, need to be more transparent if we are to regain the trust of stakeholders.


How Transparency and Reproducibility Can Increase Credibility in Policy Analysis: A Case Study of the Minimum Wage Policy Estimate
Fernando Hoces de la Guardia

Abstract: The analysis of public policies, even when performed by the best non-partisan agencies, often lacks credibility (Manski, 2013). This allows policy makers to cherrypick between reports, or within a specific report, to select estimates that better match their beliefs. For example, in 2014 the Congressional Budget Office (CBO) produced a report on the effects of raising the minimum wage that was cited both by opponents and supporters of the policy, with each side accepting as credible only partial elements of the report. Lack of transparency and reproducibility (TR) in a policy report implies that its credibility relies on the reputation of the authors, and their organizations, instead of on a critical appraisal of the analysis.

This dissertation translates to policy analysis solutions developed to address the lack of credibility in a different setting: the reproducibility crisis in science. I adapt the Transparency and Openness Promotion (TOP) guidelines (Nosek et al, 2015) to the policy analysis setting. The highest standards from the adapted guidelines involve the use of two key tools: dynamic documents that combine all elements of an analysis in one place, and open source version control (git). I then implement these high standards in a case study of the CBO report mentioned above, and present the complete analysis in the form of an open-source dynamic document. In addition to increasing the credibility of the case study analysis, this methodology brings attention to several components of the policy analysis that have been traditionally overlooked in academic research, for example the distribution of the losses used to pay for the increase in wages. Increasing our knowledge in these overlooked areas may prove most valuable to an evidence-based policy debate.
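The “dynamic document” idea in the abstract is easy to illustrate. The toy sketch below (in Python, with invented wage figures, not drawn from Hoces de la Guardia’s actual analysis) shows the underlying principle: the reported numbers are generated from the data and code in one step, so they cannot silently drift from the data, and the whole bundle can then be versioned with git.

```python
# A toy illustration of a "dynamic document": the prose is generated from the
# data and the analysis code, so the reported figures cannot drift from the data.
# All numbers here are invented for illustration only.

hourly_wages = [9.47, 11.00, 12.50, 13.25, 15.00]  # hypothetical sample
proposed_minimum = 15.00

affected = [w for w in hourly_wages if w < proposed_minimum]
share_affected = len(affected) / len(hourly_wages)

report = (
    f"Of the {len(hourly_wages)} workers in the sample, "
    f"{len(affected)} ({share_affected:.0%}) currently earn less than "
    f"the proposed ${proposed_minimum:.2f} minimum wage."
)

# The generated report (and the script that produced it) would be committed to git,
# so every published figure can be traced back to the exact code and data behind it.
with open("report.md", "w") as f:
    f.write(report + "\n")
```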






The FT Journal List in the Age of Brexit

2 07 2016

The results of the Brexit referendum on Friday overshadowed the publication, on the same day, of the Financial Times’s updated list of the most important academic and practitioner journals in management. The number of listed journals was increased from 45 to 50: four journals were de-listed (e.g., Academy of Management Perspectives) and nine new ones (e.g., Human Relations) were added. The exclusion or inclusion of a journal is vitally important to the career prospects of individual academics, since authorship of a paper in an FT-listed journal confers prestige. Similarly, the inclusion of its journal is crucial for a scholarly organization or community, as it confers legitimacy. The list is also used by the Financial Times in compiling its own business school research rankings: a school’s research rank is calculated according to the number of faculty publications that appear in the listed journals.
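The research component of the ranking therefore boils down to a counting exercise. The sketch below (in Python, with an invented journal set and invented publication records, since the FT has not published its exact formula or weights) shows the general principle rather than the FT’s actual method:

```python
# A purely illustrative sketch of a count-based research score. The FT has not
# published its formula, so the journal set and publication data are invented.

FT_LISTED = {"Academy of Management Journal", "Human Relations", "Journal of Finance"}

faculty_publications = [          # hypothetical example data for one school
    ("Dr A", "Academy of Management Journal"),
    ("Dr B", "Journal of Finance"),
    ("Dr C", "Unlisted Regional Review"),
]

# Count only the publications that appear in listed journals.
research_score = sum(1 for _, journal in faculty_publications if journal in FT_LISTED)
print(f"Publications in listed journals: {research_score}")  # -> 2
```

Precisely because the counting step is so mechanical, everything hinges on which journals make it onto the list in the first place, which is why the opacity of the listing process matters.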


Given the importance of this list, one would have expected some clarity about the methodology used to generate it. Without a published methodology, there is the risk that people may regard the list as the product of the subjective whims of a few individuals sitting in an office in England. The development of the new FT list was preceded by a consultation period in which academics were invited to email their thoughts about which journals should be included to the address mba@ft.com. This email account was managed by one Laurent Ortmans. We know from LinkedIn that this individual has worked as a UK civil servant and is a graduate of Kingston University and the University of Rennes. Aside from that, his background, interests, and associations are murky. During the consultation period, which ended on 17 June, a number of scholarly organizations mobilized to lobby on behalf of their journals. For instance, Debra Shapiro, the president of the Academy of Management, sent the following email to its members on 6 June:


As you may know, the Financial Times uses a list of 45 journals to assess research quality and determine business school rankings (http://www.ft.com/cms/s/2/3405a512-5cbb-11e1-8f1f-00144feabdc0.html#axzz48pTKFgOO.)  We recently learned that the Academy of Management Review (AMR) may be removed from this FT 45 list of journals.   [AS: Prof. Shapiro did not specify how she learned of the possibility that the AMR and AMJ might be removed ].

We find this troubling, as AMR has consistently been ranked among the top five most influential and frequently cited journals in our field.  In fact, AMR is ranked #1 in the category of business and #2 in the category of Management (Thompson Reuters, 2014).  The journal’s impact factor is 7.45 with a 5 year impact factor of 10.736. 

AMR consistently publishes the highest quality theoretical work done in the field.  With close to *5 million downloads* to its content in 2015, AMR is an essential resource for management scholars and students who seek to understand the “why’s and how’s” behind timely and fundamental organizational problems faced by managers and organizations.

Your school may be asked to vote on whether to keep AMR on the Financial Times list of journals.  If so, please contact your representative as soon as possible to make sure that AMR stays on the list.


We know that the consultation period ended on 17 June and that the new list was published on 24 June. What isn’t clear is the process that took place during the intervening six days (only four of which were working days in the UK). It’s a total black box. Information was fed into the email address mentioned above and was processed by the staff of the FT, who may also have used citation counts, but not much more is really known. The FT has not published the methodology it used to rank journals and to make decisions about listing and delisting. (I would note here that the methodology used to decide on the 24 journals that make up the rival Dallas Journals List is also unpublished, but one would have expected better from a UK-based organization such as the FT, especially as the UK’s Chartered Association of Business Schools explains in great detail the methodology used to determine its own ranking of journals: the CABS list contains an admirably clear and transparent description of the methodology used and the individuals consulted.)


The lack of transparency about the process behind the FT journal rankings is ironic on many levels. It is ironic because the FT rightly critiques developing countries for their lack of transparency. It is also ironic because virtually all management journals require papers to include a methodology section in which authors explain precisely how they came up with their results. I’m certain that if an academic submitted a paper that expressed opinions in the form of a ranking without a detailed justification of the methodology and the nature of the data used, it would be desk rejected. You can’t just make a claim and say “trust me”; you need to show your work. Indeed, there is currently a pan-disciplinary movement in the social sciences to increase rather than decrease transparency, for instance by requiring academic authors to publish their raw data as well as a description of how they used it. Research transparency is also a big issue in the natural sciences.

The lack of transparency about the process used by the FT in making its journal rankings is disappointing to me because I really respect the FT as a source of information about business precisely because it is transparent and always declares potential conflicts of interest in a note at the base of an article. During and after the global financial crisis, the FT’s coverage of bond rating agencies and their non-transparent procedures was excellent.

What we don’t know right now is whether Mr. Ortmans worked alone or with others in processing the information that arrived in his email account during the consultation period that ended on 17 June. The methodology that the FT staff used and the precise weighting of citation counts, number of lobbying emails, etc., are also unspecified. In contrast, the number of signatories to petitions on the 10 Downing Street website is a matter of public record, since 100,000 signatures triggers a requirement for a debate in parliament.


Here is another key issue: since practitioner journals are included in the rankings, it would be very useful to know which particular practitioners were consulted. Was the sample of practitioners consulted representative of the global readership of the FT? Or did they just convene a focus group of people in London and ask them to read representative articles? Were the practitioners exclusively employed in the private sector, or were public-sector managers consulted? Were journalists who use academic knowledge involved in the process? One of the wonderful things about the FT is that many of its columnists, including the great Martin Wolf and Gillian Tett, follow academic research and use it in their analysis. Were any London-based FT journalists invited to express their views about which academic journals should be included? We simply don’t know. Were Big Data techniques used to evaluate the utility of the research presented in the journals? For instance, does the extent to which articles in a journal are shared by managers on LinkedIn and other social media determine whether the journal is included? If so, what weighting was given to such evidence of utility to practitioners? Does academic research that was shared 1,000 times on LinkedIn get more or fewer points than academic research that was cited 1,000 times by other academics? How were potential conflicts of interest avoided? We just don’t know the answers to any of these questions. In contrast, journal rankings based on overall citation count or H-index, while admittedly somewhat arbitrary, are transparent.
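To see what I mean by transparent, consider the H-index: anyone can recompute it from published citation counts. The short sketch below (in Python, with invented citation counts) shows that nothing in such a ranking is hidden; one can argue about whether the metric is a good one, but not about how it was calculated:

```python
# The H-index of a journal (or author): the largest h such that at least h of
# its articles have been cited at least h times. Citation counts here are invented.

def h_index(citation_counts: list[int]) -> int:
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank      # at least `rank` articles have >= `rank` citations
        else:
            break
    return h

print(h_index([120, 45, 30, 8, 3, 1]))  # -> 4: four articles with at least 4 citations
```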

I’m not saying that the FT rankings are incorrect or that any of the additions to or deletions from the list were unjustified. Personally, I’m pleased that the excellent journal Human Relations was added, but that’s just me being subjective. At this point, nobody except Mr. Ortmans can express an informed opinion on the subject! The hilarious lack of transparency about methodology means that one won’t be able to accept the list as legitimate until the details of the process are published. Until we see a detailed explanation of the methodology and weighting, we should probably stop referring to the list as the FT50 and instead call it the Ortmans50, after the obscure individual who appears to have made it over the course of a few days in June.

I am convinced that unless there is greater transparency about all matters related to research, management academics, and experts more generally, will lose their social licence to operate. Or maybe they will continue to get paid to publish in academic journals, but managers and the general public will cease to pay any attention to what they have to say, in the same way that most French people no longer pay attention to the sexual mores taught by the Catholic Church. The priests of France haven’t starved or been turfed out of their accommodation, but they have lost all influence. Increased transparency in all matters related to research is necessary if management academics are to escape a similar fate.


As others have noted (see here and here), in voting for Brexit the British public rejected the advice of the experts from the universities, the IMF, government, and the private sector, who were almost uniformly in favour of Remaining in the EU. In large part, the general public disregarded the expert consensus because the 2008 financial crisis taught it that economists, and by extension other experts, are full of crap. (One must admit that the flood of conflicting advice the general public gets from experts in the field of nutrition has also contributed to the erosion of the credibility of experts.) Films such as Inside Job and, more recently, The Big Short reinforced the view that experts are self-interested frauds, which became conventional wisdom down at the pub. (Trust me about that last bit.) Of course, experts aren’t actually full of crap, but without transparency measures academics will be unable to rebuild the trust of the public or, in this case, of business people. The relevance of business schools will continue to erode.

This blog post should not be interpreted as an attack on Mr. Ortmans, the FT, or any of the journals that were listed or delisted. I do think that if experts are to regain the credibility they have so evidently lost, they need to be more transparent about their research and about all systems related to the presentation of their research. If we don’t, the general public will continue to regard us as scoundrels and scammers and will disregard our advice.