Ranking the Rankings: Which Ranking of Management Journals is the Most Credible?

3 07 2017

Journal rankings are extremely important in the management school world, so we business historians ought to give some thought to the processes by which the different ranking systems are produced. It’s also crucial to see whether the journal rankings produced by different scholarly organizations (the UK’s Chartered Association of Business Schools, the Australian Business Deans Council, CNRS, etc.) are consistent with each other and with those created by for-profit outfits such as the Financial Times. As I show below, there are some major discrepancies between the positions of journals on these lists. There are also differences in the degree of methodological rigour employed in the development of these lists.

Sadly, no business history journal was included in the all-important FT50 journal list. As I pointed out when this list was first published, the Financial Times has never revealed the methodology it used to rank these journals. The FT did inadvertently reveal the name of the individual who produced the list, one Laurent Ortmans. We know from LinkedIn that this individual is a graduate of Kingston University and the University of Rennes. We also know from other sources that some of the major management journals lobbied Mr. Ortmans intensively in the period when he was compiling the FT50 list, so perhaps the business history community will need to lobby him the next time he revises it. Given its sheer importance to the working lives of academics in the British Isles and elsewhere, it is unfortunate that the FT50-Ortmans list is produced with so little methodological rigour. As far as I can tell, it is whipped up by a single individual who lacks a PhD. Pretty much the only thing I can say in defence of the FT50-Ortmans list is that there are no Japan-based journals on it (yet). Readers of the FT will have noted that ever since Pearson sold the FT to a Japanese company, there has been a surge in the number of articles about Japanese business in its pages. When I first heard of Nikkei’s purchase of the Financial Times, I was concerned that one or two really mediocre English-language journals published by elite Japanese universities might be added to the FT journal list. So far, that has not happened, which I suppose is to the credit of the FT50-Ortmans list. To be clear, good research is being produced in Japanese universities, but it is typically published either in English-language international journals or in Japanese-language journals, not in English-language management journals edited at Japanese universities.

For business historians who work in UK management schools, perhaps the most important journal ranking is the one produced by the Chartered Association of Business Schools. Unlike the FT50-Ortmans list, part of the methodology behind the ABS guide is published. We know that the 2015 ABS list was produced under the supervision of a Scientific Committee that included leading scholars from various business-school disciplines. In 2015, the committee’s expert on business and economic history was Geoffrey G. Jones of Harvard Business School. It is not yet known who was hired as the historical expert for the forthcoming 2017 ABS guide. The ABS has wisely chosen not to release the name of the 2017 expert so as to preclude lobbying of the sort that took place prior to the release of the all-important FT50 list. Keeping the names of the subject experts secret was a smart move on the part of the ABS and one that increases the credibility of its ranking system.

That being said, I’m not entirely convinced that the ABS journal ranking process is sufficiently transparent and robust for us to use it as actionable information. The version of the ABS guide released in 2015 placed journals into five categories: 4*, 4, 3, 2, and 1. A variety of journals in the fields of business history, economic history, and management history appeared on page 17 of the guide. (Note that in the 2010 version of the guide, there were separate lists for business history and economic history, but in 2015 these lists were merged, perhaps in response to the growing importance of economic history in top economics departments.) In 2015, 26 historical journals were ranked. None were ranked 4* (the best possible ranking in the ABS 2015 system), but two were ranked 4, five were ranked 3, twelve were ranked 2, and seven were ranked 1.

Here are the relative positions of some of these journals in the 2015 ABS guide.

Business History Review 4

Economic History Review  4

Business History 3

Enterprise and Society 3

European Review of Economic History 3

Explorations in Economic History 3

Journal of Economic History 3

Entreprises et Histoire 2

European Journal of the History of Economic Thought 2

Financial History Review 2

Journal of the History of Economic Thought 2

Management and Organizational History 2

Journal of Management History 1

 

It is not clear what methodology will be used to create the 2017 guide. According to the 2015 ABS guide, the classification process was “stringent and methodical in all cases” and “five sources of evidence” were used:

  1. The assessments of leading researchers in each of the main fields and sub-fields covered; [AS: unfortunately, this criterion leaves room for subjectivity, especially since the written assessments were not published]
  2. The mean citation impact scores for the most recent five-year period (where available); [AS: this part of the methodology is based on actual hard numbers, which reduces the scope for subjectivity and is therefore welcome]
  3. Evaluation by the Editors and Scientific Committee members of the quality standards, track records, contents and processes of each journal included in the Guide; [AS: unless a detailed description of the working methods used by the Editors and a transcript of the deliberations of the Scientific Committee are published, this criterion leaves room for even more subjectivity]
  4. The number of times the journal was cited as a top journal in five lists taken to be representative of the ‘world’ rating of business and management journals; [AS: some hard numbers are used here, which is positive because it gets us away from subjectivity]
  5. The length of time a journal has been established. [AS: again, a nice, clear criterion that can be measured and independently confirmed]

 

Unfortunately, the ABS hasn’t published the formula it uses to weight these five factors. One hopes that the relative weighting of the factors will be specified when the ABS releases the 2017 version of its list. The ABS is to be commended for showing at least part of its work, unlike the Financial Times. Why anybody respects the FT50-Ortmans list is beyond me.
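To make concrete what a published weighting might look like, here is a minimal sketch in Python. The weights, the 0–1 criterion scores, and the function name are entirely my own invented placeholders for illustration; they are not the ABS’s actual (unpublished) formula.

```python
# Hypothetical composite score built from the five ABS criteria listed above.
# The weights below are invented for illustration only; the ABS has not
# published its real formula.

CRITERIA_WEIGHTS = {
    "expert_assessment": 0.30,     # 1. leading researchers' assessments
    "citation_impact": 0.25,       # 2. mean citation impact, last five years
    "editorial_evaluation": 0.20,  # 3. Editors' / Scientific Committee review
    "world_list_mentions": 0.15,   # 4. appearances in five 'world' lists
    "longevity": 0.10,             # 5. length of time established
}

def composite_score(scores):
    """Weighted average of criterion scores, each normalised to the 0-1 range."""
    return sum(weight * scores[name] for name, weight in CRITERIA_WEIGHTS.items())

# A made-up journal scored on each criterion:
example = {
    "expert_assessment": 0.8,
    "citation_impact": 0.6,
    "editorial_evaluation": 0.7,
    "world_list_mentions": 0.4,
    "longevity": 0.9,
}
print(round(composite_score(example), 2))  # 0.68
```

Even disclosing this much (the weights, plus the cut-offs that map a composite score onto the 4*, 4, 3, 2, and 1 bands) would let outsiders audit the guide.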

 

Let’s turn from the UK to the rankings of management journals used in other countries. In many French business schools, the CNRS list is used. Journals are ranked from 1 to 4, with 1 being the best: if you publish in a journal ranked 1, you are rewarded more than if you publish in a 3 or 4 journal. Version 1 of this ranking was published in 2004 and version 4 was released last year. Overall, the French rankings are not massively dissimilar to the British ABS rankings. The Accounting History Review is ranked 3 in the French system and 2 in the UK system, so it occupies a broadly similar mid-tier position in both countries. Accounting History is also ranked 3 in France. So far, so good: the ratings look commensurate, which suggests the absence of home-nation bias and conflicts of interest. However, there are some discrepancies in the category “Business History / Histoire des Affaires” that are worthy of note. The relative positions of the journals Business History and Business History Review reverse when one crosses the English Channel: in the CNRS system, Business History (2) is a higher-ranked journal than the Business History Review (3). Moreover, Management and Organizational History is ranked 2 in the UK system but occupies a lower position in the French system. Similarly, Entreprises et Histoire, a French journal, is ranked lower in France than it is in the UK.

I really like how the creators of the CNRS list of management journals frankly concede in the preface of the document that all rankings of journals are necessarily somewhat subjective. I really appreciate this degree of intellectual humility.

Elle ne peut évidement pas prétendre à la perfection, tout simplement parce que les appréciations trop raccourcies que peut fournir une telle liste sur des revues scientifiques (présence ou non sur la liste et rang de classement) sont évidemment des appréciations très réductrices et qui ne sont pas exemptes de subjectivité.

[Translation: It obviously cannot claim to be perfect, quite simply because the highly abbreviated judgements that such a list can offer about scholarly journals (presence or absence on the list, and rank within it) are inevitably very reductive and are not free of subjectivity.]

In Australian business schools, they use the ABDC Journal Quality List. In looking at the relative positions of the historical journals on that list, one notices some interesting discrepancies between their positions here and on the lists used in the UK and France. In Australia, journals are ranked A*, A, B, or C. Business History and Business History Review are both ranked A (second tier). So in France, Business History is considered to be better than BHR; in the UK, BHR is better than BH; and in Australia they are viewed as equal. Enterprise and Society is also considered an A journal in Australia. The really curious thing about the Australian list is that the Journal of Management History, which is ranked a lowly 1 in the UK, is highly ranked in Australia (it is an A journal in the ABDC list). The editor of this journal, Bradley Bowden, teaches at Griffith Business School, Griffith University, Queensland. Bradley is a good scholar who is working hard to develop this journal, but the sheer discrepancy between the journal’s ranking in Australia and its ranking in other countries is worthy of discussion.

The role of ratings agencies in the 2008 financial crisis generated a great deal of interest in the wider social phenomenon of conflicts of interest in ratings. This issue, which was dramatized in the film The Big Short, has been investigated by academics who work in business schools around the world. Going forward, it will be interesting to see whether the FT, ABDC, ABS, and CNRS ranking systems improve their credibility through greater methodological rigour and a higher degree of process transparency. I would strongly suggest that they work with the Center for Open Science as they move in this direction. I am confident that improving the overall rigour of the process used to rank management-school journals would be a net benefit to the field of business history.

A journal’s Impact Factor is a relatively crude way of determining its quality, but at least it has the advantage of being measurable and independently verifiable. For the record, here are the impact factors of some journals in our field.

Business History 2016 Impact Factor: 0.830. Ranking: 94/120 (Business); 5/35 (History of Social Sciences) in Thomson Reuters Journal Citation Reports. [Note: the 2015 IF appeared in an earlier version of this post].

Enterprise & Society 2016 Impact Factor: 0.593. Ranking: 14/35 (History of Social Sciences); 110/121 (Business) in Thomson Reuters Journal Citation Reports.

Business History Review 2016 Impact Factor: 0.425. Ranking: 19/35 (History of Social Sciences); 112/121 (Business) in Thomson Reuters Journal Citation Reports.
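For readers who want to check figures like these, the standard two-year impact factor is easy to reproduce: citations received in a given year to items a journal published in the previous two years, divided by the number of citable items it published in those two years. Here is a minimal sketch; the counts in the example are made up and do not belong to any of the journals above.

```python
def two_year_impact_factor(citations, citable_items):
    """JCR-style two-year impact factor.

    citations: citations received in year Y to items the journal
        published in years Y-1 and Y-2.
    citable_items: articles and reviews the journal published in
        years Y-1 and Y-2.
    """
    return citations / citable_items

# Illustrative counts only (not any real journal's figures):
print(round(two_year_impact_factor(citations=60, citable_items=90), 3))  # 0.667
```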

Andrew Popp, the editor of Enterprise and Society, should be congratulated for the rapidly rising profile of this young(ish) journal. Even though he has not set out to chase Impact Factor, the high quality and innovative nature of the scholarship published in the journal has increased its IF as a side effect.

Full disclosure: Andrew Popp is a colleague here at the University of Liverpool Management School.

 





The FT Journal List in the Age of Brexit

2 07 2016

The results of the Brexit referendum on Friday overshadowed the publication, on the same day, of the Financial Times’s updated list of the most important academic and practitioner journals in management. The number of listed journals was increased from 45 to 50: four journals were de-listed (e.g., Academy of Management Perspectives) and nine new ones (e.g., Human Relations) were added. The exclusion or inclusion of journals on the list is vitally important to the career prospects of individual academics, since authorship of a paper in an FT-listed journal confers prestige. Similarly, the inclusion of its journal in the list is crucial for a scholarly organization or community, as it confers legitimacy. The list of journals is also used by the Financial Times in compiling its own business school research rankings: a business school’s research rank is calculated according to the number of faculty publications in the listed journals.
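The FT has never published the exact formula behind that research score, but the basic mechanics of a count-based measure are easy to illustrate. The sketch below simply counts a school’s publications in listed journals and normalises by faculty headcount; the journal names, the per-capita normalisation, and the function are my own placeholder assumptions, not the FT’s actual method.

```python
# Illustrative only: a naive count-based research score of the kind the FT
# ranking is said to rest on. The FT's real calculation is unpublished, so
# the per-capita normalisation here is purely an assumption.

FT_LISTED = {
    "Academy of Management Journal",
    "Human Relations",
    "Journal of Finance",
}  # placeholder subset, not the full 50-journal list

def research_score(faculty_publications, faculty_size):
    """Number of publications in listed journals per faculty member."""
    hits = sum(1 for journal in faculty_publications if journal in FT_LISTED)
    return hits / faculty_size

publications = ["Human Relations", "Business History", "Journal of Finance"]
print(research_score(publications, faculty_size=50))  # 0.04
```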

 

Given the importance of this list, one would have expected some clarity about the methodology used to generate it. Without a published methodology, there is a risk that people will regard the list as the product of the subjective whims of a few individuals sitting in an office in England. The development of the new FT list was preceded by a consultation period in which academics were invited to email their thoughts about which journals should be included to the following address: mba@ft.com. This email account was managed by one Laurent Ortmans. We know from LinkedIn that this individual has worked as a UK civil servant and is a graduate of Kingston University and the University of Rennes. Aside from that, his background, interests, and associations are murky. During the consultation period, which ended on 17 June, a number of scholarly organizations mobilized to lobby on behalf of their journals. For instance, Debra Shapiro, the president of the Academy of Management, sent the following email to its members on 6 June:

 

As you may know, the Financial Times uses a list of 45 journals to assess research quality and determine business school rankings (http://www.ft.com/cms/s/2/3405a512-5cbb-11e1-8f1f-00144feabdc0.html#axzz48pTKFgOO). We recently learned that the Academy of Management Review (AMR) may be removed from this FT 45 list of journals. [AS: Prof. Shapiro did not specify how she learned of the possibility that the AMR and AMJ might be removed.]

We find this troubling, as AMR has consistently been ranked among the top five most influential and frequently cited journals in our field.  In fact, AMR is ranked #1 in the category of business and #2 in the category of Management (Thompson Reuters, 2014).  The journal’s impact factor is 7.45 with a 5 year impact factor of 10.736. 

AMR consistently publishes the highest quality theoretical work done in the field.  With close to *5 million downloads* to its content in 2015, AMR is an essential resource for management scholars and students who seek to understand the “why’s and how’s” behind timely and fundamental organizational problems faced by managers and organizations.

Your school may be asked to vote on whether to keep AMR on the Financial Times list of journals.  If so, please contact your representative as soon as possible to make sure that AMR stays on the list.

 

 

We know that the consultation period ended on 17 June and that the new list was published on 24 June. What isn’t clear is the process that took place during the intervening six days (only four of which were working days in the UK). It’s a total black box. Information was fed into the email address mentioned above and was processed by the staff of the FT, who may also have used citation counts, but not much is really known. The FT has not published the methodology it used to rank journals and make decisions about journal listing and delisting. (I would note here that the methodology used to decide on the 24 journals that make up the rival Dallas Journals List is also unpublished, but one would have expected better from a UK-based organization such as the FT, especially as the UK’s Chartered Association of Business Schools explains in great detail the methodology used to determine its ranking of journals. The CABS guide contains an admirably clear and transparent description of the methodology used and the individuals consulted.)

 

The lack of transparency about the process behind the FT journal rankings is ironic on many levels. It is ironic because the FT rightly critiques developing countries for their lack of transparency. It is also ironic because virtually all management journals require papers to include a methodology section in which the authors explain precisely how they came up with their results. I’m certain that if an academic submitted a paper that expressed opinions in the form of a ranking without a detailed justification of the methodology and the nature of the data used, it would be desk rejected. You can’t just make a claim and say “trust me”; you need to show your work. Indeed, there is currently a pan-disciplinary movement in the social sciences to increase rather than decrease transparency, for instance by requiring academic authors to publish their raw data as well as a description of how they used it. Research transparency is also a big issue in the natural sciences.

The lack of transparency about the process used by the FT in making its journal rankings is disappointing to me because I really respect the FT as a source of information about business, precisely because it is transparent and always declares potential conflicts of interest in a note at the base of an article. During and after the global financial crisis, the FT’s coverage of bond rating agencies and their non-transparent procedures was excellent.

What we don’t know right now is whether Mr. Ortmans worked alone or with others in processing the information that arrived in his email account during the consultation period that ended on 17 June. The methodology that the FT staff used and the precise weighting of citation counts, number of lobbying emails, and so on are also unspecified. In contrast, the number of signatories to petitions on the 10 Downing Street website is a matter of public record, since 100,000 signatures means a petition must be considered for debate in Parliament.

 

Here is another key issue: since practitioner journals are included in the rankings, it would be very useful to know which particular practitioners were consulted. Was the sample of practitioners consulted representative of the global readership of the FT? Or did they just assemble a focus group of people in London and ask them to read representative articles? Were the practitioners exclusively employed in the private sector, or were public-sector managers consulted? Were the journalists who use academic knowledge involved in the process? One of the wonderful things about the FT is that many of its columnists, including the great Martin Wolf and Gillian Tett, follow academic research and use it in their analysis. Were any London-based FT journalists invited to express their views about which academic journals should be included? We simply don’t know. Were Big Data techniques used to evaluate the utility of the research presented in the journals? For instance, does the extent to which articles in a journal are shared by managers on LinkedIn and other social media determine whether the journal is included? If so, what weighting was given to such evidence of utility to practitioners? Does academic research that was shared 1,000 times on LinkedIn get more or fewer points than academic research that was cited 1,000 times by other academics? How were potential conflicts of interest avoided? We just don’t know the answers to any of these questions. In contrast, journal rankings based on overall citation counts or the H-index, while admittedly somewhat arbitrary, are transparent.

I’m not saying that the FT rankings are incorrect or that any of the additions to or deletions from the list were unjustified. Personally, I’m pleased that the excellent journal Human Relations was added, but that’s just me being subjective. At this point, nobody except Mr. Ortmans can express an informed opinion on the subject! The hilarious lack of transparency about methodology means that one won’t be able to accept his list as legitimate until the details of the process are published. Until we see a detailed explanation of the methodology and weighting, we should probably stop referring to the list as the FT50 and instead call it the Ortmans50, after the obscure individual who appears to have made this list over the course of a few days in June.

I am convinced that unless there is greater transparency about all matters related to research, management academics and experts more generally will lose their social licence to operate. Or maybe they will continue to get paid to publish in academic journals, but managers and the general public will cease to pay any attention to what they have to say, in the same way that most French people no longer pay attention to the sexual mores taught by the Catholic Church. The priests of France haven’t starved or been turfed out of their accommodation, but they have lost all influence. Increased transparency in all matters related to research is necessary if management academics are to escape a similar fate.

 

As others have noted (see here and here), in voting for Brexit, the British public rejected the advice of the experts from the universities, the IMF, government, and the private sector, who were almost uniformly in favour of Remaining in the EU. In large part, the general public disregarded the expert consensus because the 2008 financial crisis taught them that economists, and by extension other experts, are full of crap. (One must admit that the flood of conflicting advice the general public gets from experts in the field of nutrition has also contributed to the erosion of the credibility of experts.) Films such as Inside Job and, more recently, The Big Short reinforced the view that experts are self-interested frauds, which became conventional wisdom down at the pub. (Trust me about that last bit.) Of course, experts aren’t actually full of crap, but without transparency measures academics will be unable to rebuild the trust of the public or, in this case, business people. The relevance of business schools will continue to erode.

This blog post should not be interpreted as an attack on Mr. Ortmans, the FT, or any of the journals that were listed or delisted. I do think that if experts are to regain the credibility they have so evidently lost, they need to be more transparent about their research and about all of the systems related to its presentation. If we don’t become more transparent, the general public will continue to regard us as scoundrels and scammers and will disregard our advice.

 

 

 

 

 





The Implications of Angus Deaton’s Nobel Prize for the Relative Position of Economics Journals in the ABS Ranking System

16 10 2015

A few days ago, Angus Deaton of Princeton was awarded the Nobel Prize for his research on poverty, health, and development. Deaton is the Dwight D. Eisenhower Professor of Economics and International Affairs at the Woodrow Wilson School of Public and International Affairs and the Economics Department at Princeton University. The prize is clearly richly deserved, as Deaton’s books and articles contain important insights into fundamental issues. His research is overviewed here, here, here, and here. Deaton’s Google Scholar profile (53,937 citations) is here.

Deaton’s many publications include a recent piece in the Review of Austrian Economics, “On tyrannical experts and expert tyrants”. Here’s the thing: the RAE is currently ranked a mere 1 in the Association of Business Schools’ Journal Quality Guide, a ranking system used for assessing job applications and job performance in management schools in the UK and some other European countries. Given that a Nobel Laureate has recently published in this journal, it is likely that the ranking of the RAE will increase in the next version of the ABS journal guide.





American Football and the ABS Journal Quality Guide

27 02 2015

[Image: 2006 Pro Bowl tackle]

The article in Times Higher Education about the release of the ABS journal rankings has generated some interesting online discussion.

The Association of Business Schools’ Academic Journal Guide 2015 assesses the quality of 1,401 business and management publications worldwide, based on citation scores and the judgements of leading researchers.

It is designed to help academics to make decisions about where they should seek to have their work published and to help deans to evaluate performance.

But some scholars complain that the guide has become too powerful in decisions on recruitment, promotion and salary review, and that as a consequence they are assessed only on where they publish, not what they publish.

The first person to comment on this article is Bill Cooke, Professor of Strategy at the University of York Management School. He argues that the new guide will contribute to the Americanization of the research culture of UK business schools.

This list “aimed to provide scholars with clear goalposts against which to aim for in seeking to progress their careers”.

But you better be playing American Football, because this is simply a list of US journals. In my subdisciplines, in the journals ranked, the work is conservative, narrow, and parochial to the United States. People had better not believe the playing field is level, either. The normal institutional biases, let alone positive discrimination, tilt play in favour of the US scholar. The paper you’ve written and want someone to read informally. Your reader down the corridor or you bump into at a local seminar in THE-land is not going to be the editor of any of these journals, is he. Yes, he (nice, and appropriate pic btw).

The Association of Business Schools is now trying to face in two directions at once. It is sending the message, firstly, that if you are ‘seeking to progress [your] career’ you should publish in these US journals.

At the same time management scholars are told, rightly, and not least by the ABS that they need to have a strong concern for ‘impact’ in the REF sense – for material changes in the real world. Well, intuitively, research that gets published in these 4* journals will by definition have to be the opposite of impactful.

(I make these comments in a personal capacity, and not in my capacity as Vice Chair Research and Publications of the British Academy of Management).

I think that Professor Cooke is right about this. The creators of the ABS guide are to be commended for overcoming nationalistic passions and resisting the temptation to automatically give more points to British journals simply because they are British. However, it may be that they have gone too far in the opposite direction and have rewarded US journals that are nationally insular but which nevertheless have high citation counts simply because of the sheer size of the United States.

A better, but more complicated, method for measuring whether a journal has a truly global impact would be to look at both absolute citations and “citation-miles”, based on the mean distance between the author’s place of employment and the employers of the authors who cite a given paper. Under this scenario, a paper by a New York City academic that is frequently cited by other academics who all work in New York would contribute less to its journal’s ranking than a paper with a similar number of citations from authors on different continents.
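Here is a minimal sketch of how such a citation-miles figure could be computed, assuming one already had latitude/longitude coordinates for the cited author’s institution and for each citing author’s institution (assembling those affiliation data would, of course, be the hard part). The function names and the toy coordinates are my own.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def mean_citation_distance(author_location, citing_locations):
    """Mean distance (km) between a paper's author and the authors who cite it."""
    if not citing_locations:
        return 0.0
    total = sum(haversine_km(*author_location, *loc) for loc in citing_locations)
    return total / len(citing_locations)

# Toy example: a New York-based author cited from New York, London and Tokyo.
new_york = (40.71, -74.01)
london = (51.51, -0.13)
tokyo = (35.68, 139.69)
print(round(mean_citation_distance(new_york, [new_york, london, tokyo])))
```

A journal-level citation-miles score would then simply be the average of this quantity across the journal’s articles, reported alongside the raw citation count.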