The FT Journal List in the Age of Brexit

2 July 2016

Results of the Brexit referendum on Friday overshadowed the publication, on the same day, of the Financial Times’s updated list of the most important academic and practitioner journals in management. The number of listed journals increased from 45 to 50: four journals were de-listed (e.g., Academy of Management Perspectives) and nine new ones (e.g., Human Relations) were added. Exclusion from or inclusion on the list is vitally important to the career prospects of individual academics, since authorship of a paper in an FT-listed journal confers prestige. Similarly, the inclusion of its journal on the list is crucial for a scholarly organization or community, as it confers legitimacy. The list is also used by the Financial Times in compiling its own business school research rankings: a school’s research rank is calculated according to the number of faculty publications that appear in the listed journals.
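
To make concrete what that calculation involves, here is a minimal sketch in Python of the kind of counting the FT’s description implies: tally each school’s faculty publications that appeared in listed journals. The school names and publication records are entirely made up; this is my illustration, not the FT’s code.

    # A minimal sketch, with made-up data, of the counting the FT's
    # description implies; not the FT's actual method or code.
    from collections import Counter

    FT_LISTED = {"Academy of Management Journal", "Human Relations"}

    # Hypothetical publication records: (school, journal)
    publications = [
        ("School A", "Academy of Management Journal"),
        ("School A", "Journal of Obscure Studies"),  # not listed, so ignored
        ("School B", "Human Relations"),
        ("School B", "Academy of Management Journal"),
    ]

    # Count each school's publications in listed journals only.
    counts = Counter(school for school, journal in publications
                     if journal in FT_LISTED)
    print(counts.most_common())  # [('School B', 2), ('School A', 1)]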


Given the importance of this list, one would have expected some clarity about the methodology used to generate it. Without a published methodology, there is a risk that people will regard the list as the product of the subjective whims of a few individuals sitting in an office in England. The development of the new FT list was preceded by a consultation period in which academics were invited to email their thoughts about which journals should be included to mba@ft.com. This email account was managed by one Laurent Ortmans. We know from LinkedIn that he has worked as a UK civil servant and is a graduate of Kingston University and the University of Rennes. Aside from that, his background, interests, and associations are murky. During the consultation period, which ended on 17 June, a number of scholarly organizations mobilized to lobby on behalf of their journals. For instance, Debra Shapiro, the president of the Academy of Management, sent the following email to its members on 6 June:


As you may know, the Financial Times uses a list of 45 journals to assess research quality and determine business school rankings (http://www.ft.com/cms/s/2/3405a512-5cbb-11e1-8f1f-00144feabdc0.html#axzz48pTKFgOO). We recently learned that the Academy of Management Review (AMR) may be removed from this FT 45 list of journals. [AS: Prof. Shapiro did not specify how she learned of the possibility that AMR and AMJ might be removed.]

We find this troubling, as AMR has consistently been ranked among the top five most influential and frequently cited journals in our field. In fact, AMR is ranked #1 in the category of Business and #2 in the category of Management (Thomson Reuters, 2014). The journal’s impact factor is 7.45, with a 5-year impact factor of 10.736.

AMR consistently publishes the highest quality theoretical work done in the field. With close to *5 million downloads* of its content in 2015, AMR is an essential resource for management scholars and students who seek to understand the “why’s and how’s” behind timely and fundamental organizational problems faced by managers and organizations.

Your school may be asked to vote on whether to keep AMR on the Financial Times list of journals.  If so, please contact your representative as soon as possible to make sure that AMR stays on the list.


We know that the consultation period ended on 17 June and that the new list was published on 24 June. What isn’t clear is the process that took place during the intervening six days (only four of which were working days in the UK). It’s a total black box. Information was fed into the email address mentioned above and processed by FT staff, who may also have used citation counts, but little else is really known. The FT has not published the methodology it used to rank journals and make decisions about listing and delisting. (The methodology used to select the 24 journals on the rival Dallas list is also unpublished, but one would have expected better from a UK-based organization such as the FT, especially as the UK’s Chartered Association of Business Schools explains in great detail how its own ranking of journals is determined. The CABS list comes with an admirably clear and transparent description of the methodology used and the individuals consulted.)


The lack of transparency about the process behind the FT journal rankings is ironic on many levels. It is ironic because the FT rightly critiques developing countries for their lack of transparency. It is also ironic because virtually all management journals require papers to include a methodology section in which authors explain precisely how they arrived at their results. I’m certain that if an academic submitted a paper that expressed opinions in the form of a ranking without a detailed justification of the methodology and the nature of the data used, it would be desk rejected. You can’t just make a claim and say “trust me”; you need to show your work. Indeed, there is currently a pan-disciplinary movement in the social sciences to increase rather than decrease transparency, for instance by requiring academic authors to publish their raw data along with a description of how they used it. Research transparency is also a big issue in the natural sciences.

The lack of transparency about the process the FT used in making its journal rankings disappoints me because I respect the FT as a source of information about business precisely because it is transparent and always declares potential conflicts of interest in a note at the foot of an article. During and after the global financial crisis, the FT’s coverage of bond rating agencies and their non-transparent procedures was excellent.

What we don’t know right now is whether Mr. Ortmans worked alone or with others in processing the information that arrived in his email account during the consultation period that ended on 17 June. The methodology the FT staff used, and the precise weighting of citation counts, number of lobbying emails, etc., are also unspecified. In contrast, the number of signatories to petitions on the 10 Downing Street website is a matter of public record, since 100,000 signatures make a petition eligible for debate in Parliament.


Here is another key issue: since practitioner journals are included in the rankings, it would be very useful to know which practitioners were consulted. Was the sample representative of the global readership of the FT? Or did the FT simply convene a focus group in London and ask it to read representative articles? Were the practitioners exclusively employed in the private sector, or were public-sector managers consulted as well? Were the journalists who use academic knowledge involved in the process? One of the wonderful things about the FT is that many of its columnists, including the great Martin Wolf and Gillian Tett, follow academic research and use it in their analysis. Were any London-based FT journalists invited to express their views about which academic journals should be included? We simply don’t know. Were Big Data techniques used to evaluate the utility of the research presented in the journals? For instance, does the extent to which a journal’s articles are shared by managers on LinkedIn and other social media determine whether it is included? If so, what weighting was given to such evidence of utility to practitioners? Does academic research that was shared 1,000 times on LinkedIn get more or fewer points than academic research that was cited 1,000 times by other academics? How were potential conflicts of interest avoided? We just don’t know the answers to any of these questions. In contrast, journal rankings based on overall citation count or H-index, while admittedly somewhat arbitrary, are transparent.
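
To illustrate what I mean by a transparent metric, here is a minimal sketch of the standard H-index calculation: a journal has an H-index of h if h of its articles have each been cited at least h times. The citation counts below are invented, but the point is that anyone with the same citation data can reproduce the number.

    # The standard H-index: the largest h such that h articles have
    # at least h citations each. Citation counts here are invented.
    def h_index(citations):
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4: four articles cited at least 4 times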

I’m not saying that the FT rankings are incorrect or that any of the additions to or deletions from the list were unjustified. Personally, I’m pleased that the excellent journal Human Relations was added, but that’s just me being subjective. At this point, nobody except Mr. Ortmans can express an informed opinion on the subject! The hilarious lack of transparency about methodology means that one won’t be able to accept his list as legitimate until the details of the process are published. Until we see a detailed explanation of the methodology and weighting, we should probably stop referring to the list as the FT50 and instead call it the Ortmans50, after the obscure individual who appears to have made this list over the course of a few days in June.

I am convinced that unless there is greater transparency about all matters related to research, management academics, and experts more generally, will lose their social licence to operate. Or perhaps they will continue to be paid and to publish in academic journals, but managers and the general public will cease to pay any attention to what they have to say, in the same way that most French people no longer pay attention to the sexual mores taught by the Catholic Church. The priests of France haven’t starved or been turfed out of their accommodation, but they have lost all influence. Increased transparency in all matters related to research is necessary if management academics are to escape a similar fate.


As others have noted (see here and here), in voting for Brexit, the British public rejected the advice of the experts from universities, the IMF, government, and the private sector, who were almost uniformly in favour of remaining in the EU. In large part, the public disregarded the expert consensus because the 2008 financial crisis taught them that economists, and by extension other experts, are full of crap. (One must admit that the flood of conflicting advice the general public gets from experts in the field of nutrition has also contributed to the erosion of expert credibility.) Films such as Inside Job and, more recently, The Big Short reinforced the view that experts are self-interested frauds, which became conventional wisdom down at the pub. (Trust me about that last bit.) Of course, experts aren’t actually full of crap, but without transparency measures academics will be unable to rebuild the trust of the public or, in this case, business people. The relevance of business schools will continue to erode.

This blog post should not be interpreted as an attack on Mr. Ortmans, the FT, or any of the journals that were listed or delisted. I do think that if experts are to regain the credibility they have so evidently lost, they need to be more transparent about their research and about all the systems related to its presentation. If we don’t, the general public will continue to regard us as scoundrels and scammers and will disregard our advice.