The new Google Scholar journal metrics are out and have generated a great deal of discussion on Academic Twitter. I’m writing this blog post to suggest that there may be serious methodological problems in Google’s rankings of academic journals.
Here are the top-ranked journals in the field of Business, Economics, and Management (BEM). The numbers are the h5-index and the h5-median (a short sketch of how these two numbers are computed appears after the table).

Rank | Publication | h5-index | h5-median |
---|---|---|---|
1. | American Economic Review | 147 | 229 |
2. | Journal of Financial Economics | 112 | 164 |
3. | The Journal of Finance | 108 | 181 |
4. | The Quarterly Journal of Economics | 104 | 189 |
5. | Journal of Business Ethics | 98 | 131 |
6. | Journal of Business Research | 96 | 131 |
7. | The Review of Financial Studies | 94 | 140 |
8. | Tourism Management | 94 | 139 |
9. | Management Science | 91 | 124 |
10. | Strategic Management Journal | 90 | 123 |
11. | International Journal of Production Economics | 89 | 126 |
12. | Journal of Management | 88 | 146 |
13. | Academy of Management Journal | 86 | 129 |
14. | World Development | 84 | 116 |
15. | International Journal of Project Management | 79 | 105 |
16. | Journal of Economic Perspectives | 77 | 140 |
17. | Econometrica | 75 | 125 |
18. | Energy Economics | 75 | 90 |
19. | Technological Forecasting and Social Change | 74 | 96 |
20. | Journal of Political Economy | 73 | 131 |
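For anyone unfamiliar with the two metrics: the h5-index is the largest number h such that h articles published in the last five complete years have at least h citations each, and the h5-median is the median citation count of those h articles. The sketch below shows that calculation in Python; the citation counts in the example are invented for illustration, and exactly how Google handles an even-sized h-core is my assumption, not something documented in the rankings themselves.

```python
def h5_index_and_median(citations):
    """Compute the h5-index and h5-median from a list of per-article
    citation counts for a journal's articles from the last five years."""
    counts = sorted(citations, reverse=True)

    # h5-index: the largest h such that h articles have at least h citations each.
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break

    # h5-median: the median citation count of the h articles in the h-core.
    core = counts[:h]
    if not core:
        return 0, 0
    mid = len(core) // 2
    if len(core) % 2 == 1:
        median = core[mid]
    else:
        # Assumption: average the two middle values for an even-sized core.
        median = (core[mid - 1] + core[mid]) / 2
    return h, median


# Toy example with six articles (made-up citation counts).
print(h5_index_and_median([48, 30, 27, 22, 10, 3]))  # -> (5, 27)
```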
There are no major surprises in Google’s rankings of the top journals in the general field of BEM, which makes Google’s ranking system seem fairly credible to me. Within the BEM field, Google also ranks journals by subcategory. Here are the rankings for Economic History:
Rank | Publication | h5-index | h5-median |
---|---|---|---|
1. | The Journal of Economic History | 28 | 48 |
2. | The Economic History Review | 22 | 27 |
3. | Business History | 20 | 29 |
4. | Explorations in Economic History | 20 | 27 |
5. | Review of Keynesian Economics | 18 | 23 |
6. | European Review of Economic History | 16 | 20 |
7. | History of Political Economy | 16 | 18 |
8. | The European Journal of the History of Economic Thought | 13 | 19 |
9. | Accounting History | 12 | 14 |
10. | Journal of the History of Economic Thought | 11 | 17 |

As you can see, the Review of Keynesian Economics was absurdly categorized as an economic history journal, which strongly suggests to me that the ranking and categorization decisions were made either by a non-academic who didn’t bother speaking to academics in the field or by some automated system. Either way, the credibility of the entire ranking system is reduced, at least in my eyes. If Google’s journal categorization is flawed for the subdiscipline I know best, I have to suspect that other dubious decisions are lurking elsewhere in the system.

My point is that we should use extreme caution when thinking about these new rankings. We should also ask some tough questions about the procedure used to create them.
It looks like the rankings were drawn up by a monolingual English speaker too…
Good point, Rory. The rankings are based simply on citation counts, so that would naturally exclude journals in pretty much any language other than English and Chinese. And since Google Scholar isn’t available in China, it can’t really count the Chinese journals either.