Tyler Cowen has long occupied an unusual niche in the academic division of labour. Formally, he is a credentialed and tenured economist, with a chaired position at George Mason University and a conventional scholarly base. In practice, however, he made a different allocative choice once the tenure tournament was won. Rather than spend the rest of his career maximising the output of peer-reviewed journal articles (total readership about nine humans), he shifted a great deal of his effort into blogging, podcasting, public commentary, teaching, and institution building through projects such as Marginal Revolution and Emergent Ventures. Instead of talking to dozens or hundreds of academic economists, he decided to talk about economics to orders of magnitude more people. That move has always made some academics uneasy. It looks, depending on your priors, either like an abdication of scholarly duty or like a rational reallocation of talent toward higher-leverage forms of intellectual production.
That latent argument has now surfaced quite explicitly on X, where Cowen is suddenly the object of a broader debate among economists and social scientists about what an academic career is for. The discussion has been sharpened by a cluster of Cowen interventions over the past week, especially his comments on AI, journals, and the possibility that scholars should increasingly think about writing for machine readers as well as human ones. The lines of argument are fairly predictable. One camp treats Cowen as the prototype of the public-facing scholar of the future: less invested in artisanal journal production, more interested in moving ideas rapidly through networks, institutions, and now model weights. The other camp sees something more troubling: a celebrated economist whose prestige allows him to escape the disciplines of peer review while still shaping discourse on a very large scale.
The Academic Paper: Dead Format Walking?
My suspicion is that Cowen is attracting so much attention right now because he has become a proxy for a deeper anxiety: many academics increasingly suspect that the peer-reviewed article is a dead format walking. Alexander Kustov puts the point with unusual bluntness in his recent Substack essay on AI and academia, arguing that the thirty-page article is becoming vestigial wrapping paper in a world where AI can do literature reviews, summarisation, and even parts of manuscript production faster than most scholars can. Kustov’s sequel goes further, suggesting that academics may need to accept that their primary audience is increasingly LLMs. That is precisely the terrain on which Cowen has been operating for years, even before the current AI wave made the implication impossible to ignore. He has behaved as though influence, reach, speed, and institutional spillovers matter more than another marginal journal placement. The reason people are arguing about Cowen is not just that he chose an unconventional life. It is that his life now looks, to some observers, less like a deviation and more like an early market signal.
Seen this way, the Cowen debate is really a debate about the future production function of academia. If the journal article no longer monopolises prestige, dissemination, or even validation, then the old hierarchy of scholarly activities starts to wobble. Blogging begins to look less like self-indulgence. Podcasting starts to resemble a discovery mechanism. Institution building becomes a way of shaping the direction of intellectual capital rather than merely commenting on it after the fact. Some people will hate that world, not always for bad reasons. The peer-reviewed article, for all its defects, at least aspired to slow people down. But if the article is indeed becoming a dead format walking, then Cowen’s career may not be an eccentric personal brand. It may be a glimpse of the post-paper academic equilibrium.
Since November 2024, Canada has seen a very visible revival of anti-American nationalism. Pride in being Canadian rose sharply as tariff threats and bellicose rhetoric from the United States pushed many voters and consumers into a “Buy Canadian” mood. I have never liked this sort of nationalism, and I do not like its latest incarnation either. The reason is straightforward. Nationalism is rarely just sentiment. It is usually the political language through which distributional coalitions demand protection, preference, subsidy, exclusion, and symbolic deference. Patriotism, in the older and more honourable sense, is something else. A soldier accepting modest pay for dangerous service (WWI soldiers earned less per day than they could have got in civilian life), or an athlete giving time and effort for national representation (many Olympic athletes are unpaid), is contributing to a common enterprise. People who voluntarily donate to special charities set up to pay off the national debt also fall within that definition of patriotism. That sort of patriotism is a close cousin of the civic-mindedness of a person who leaves lots of money to local charities in their will. Nationalism, by contrast, all too often becomes the banner under which cartels and other organized producer groups demand that everyone else pay. Economic nationalism is bad when it is undertaken by the representatives of capital. It’s bad when it is undertaken by the representatives of organized labour, as when a faction within the Canadian branch of a pan-North American “international union” calls for the formation of a distinctly Canadian union. It’s also bad when it is undertaken by artists and intellectuals, as when there is a campaign to replace US novels like To Kill a Mockingbird with Canadian ones in the high school reading curriculum of a Canadian province.
The best framework for understanding this comes from Mancur Olson, the American economist and social theorist best known for his work on collective action and the political economy of organized interests. Olson first became famous for The Logic of Collective Action in 1965, but the key work here is his 1982 book The Rise and Decline of Nations. Its central insight was that stable societies with long histories of freedom of association accumulate what he called distributional coalitions: organized groups small enough to overcome collective action problems and cohesive enough to lobby for advantages at the expense of the larger economy. These are not merely “interest groups” in the harmless civics-textbook sense. Their stock-in-trade is tariffs, licensing cartels, restrictive work rules, producer boards, regulatory choke points, occupational monopolies, market-sharing devices, and every other arrangement by which a narrow coalition can secure a larger slice without enlarging the pie. The longer a country has existed and has had freedom of association, the more of these growth-destroying distributional coalitions it has. (The UK is a classic case, one discussed at length in Olson’s book.) Olson’s claim was not that every association is malign, but that societies thick with such coalitions become rigid, slow-moving, and resistant to adaptation. That was part of his explanation for British stagnation: too many entrenched organizations, too many veto points, too much protection of incumbents, too little flexibility. And he used harsher cases too. The Indian caste system and South African apartheid interested him not only because they were morally ugly, though they were, but because they institutionalized barriers, privileges, exclusions, and labour-market rigidities on a colossal scale.
In Olson’s framework, apartheid was not just a racist order; it was also an extreme machinery of distributional protection: the concept of the “white race” was used in South Africa as the basis of a sort of meta-coalition of interest groups who enriched themselves in ways that actually reduced total GDP.
Albert Breton had already produced a similar insight before Olson published that 1982 book. Breton was a Canadian economist associated with public choice and the economic analysis of political institutions. His key essay here is “The Economics of Nationalism,” published in the Journal of Political Economy in 1964.
Here is a key passage from the first page of his paper:
It is the object of this paper, first, to show that societies in which political nationalism exists invest resources in nationality or ethnicity; second, that these investments are made because they are profitable; and third, that they are not profitable for everyone in a society but only for specific and identifiable groups. Taken together, the second and third points mean that investments in nationality are not so much income-creating as income-redistributing.
Breton treated nationality as something with instrumental political uses: a way of organizing claims, creating barriers to entry, tilting the competitive playing field, justifying corporate welfare, allocating benefits, and defining the boundary between those entitled to favoured treatment and those who are not. That is why Breton is so useful. He allows one to see nationalism not simply as a spontaneous expression of collective feeling, but as a political technology for constructing an in-group within which transfers, protections, controls, and privileges can be justified. Put in somewhat stark terms, Breton says that a nation is simply a coalition of individuals and groups of individuals who have decided to call themselves a nation so they can command resources. Olson supplied the broader theory of how organized minorities immobilize economies and reduce GDP; Breton, writing a bit earlier, supplied a way of seeing nationality itself as one of the most powerful organizing devices such minorities can deploy in ways that enrich individuals (particularly the most strident nationalists) while making the world as a whole poorer. Put the two together and nationalism looks less like a mysterious force of history and more like a meta-coalition: a coalition of coalitions, held together by a shared claim that the protected insiders are “the nation” and that the political system ought to protect the interests of producers who are part of the nation. On that reading, Canadian nationalism and U.S. nationalism can function as passport-based legitimating ideologies for domestic producer interests, while Quebec or French-Canadian nationalism can function as an ethnolinguistic legitimating ideology for a differently bounded but structurally similar set of producer interests.
That is why the recent surge of economic nationalism in Canada should worry anyone who cares about prosperity. (Quebec economic nationalism, which seeks to shield producers in Quebec from competition from both U.S. firms and firms in English-speaking Canada, also appears to have surged.) Once nationalism is politically activated, even if it is in response to someone else’s nationalism, the queue of suitors forms immediately: firms wanting procurement preference, individual workers who want foreign workers excluded from their little corner of the labour market (e.g., the Canadian academics who argue that there should be a crackdown on the hiring of American citizens by universities), incumbent firms (the chartered banks with lousy customer service) wanting foreign rivals kept out, cultural and tech interests wanting “sovereignty” policies, and ministers wanting to dress industrial favouritism up as nation-building. One can already hear the old corporate welfare music returning: not healthy competition, but shelter; not openness, but strategic insulation; not boosting hourly productivity and TFP, but a patriotic vocabulary for old-fashioned rent-seeking. The danger is not merely a few wasteful programs. It is that a broad Canadian nationalist distributional coalition will use the current mood to entrench a whole latticework of costly procurement rules, local-preference schemes, technology nationalism, and retaliatory symbolism that lowers productivity, reduces GDP, and makes the preservation of CUSMA more difficult. The demands the US is making on Canada in the CUSMA negotiations are pretty reasonable and include getting rid of dairy supply management, something that should be done anyway.
In its defence, Canadian nationalism does at least have one redeeming feature: it can undercut Quebec nationalism and the narrower, and thus more prosperity-destroying, producer coalitions that shelter beneath that flag. But one should not confuse the displacement of a provincial rent-seeking coalition by a national one with a victory for economic openness and rationality. It may simply be a cartel merger on a larger territorial scale. Needless to say, U.S. economic nationalism, or even the movement for a tariff-ringed Fortress North America (a sort of economic nationalism for the nation of North America), is also prosperity-destroying, albeit less so simply because the units are larger and more natural. For the record, I’ll note here that I was opposed to Brexit: I’m aware that the EU has some economic nationalist tendencies that have resulted in external tariffs, but the net prosperity-destroying effect of European nationalism is less than that of nation-state nationalisms, such as the drive to take the UK out of the EU and then impose protectionist restrictions on, say, French cheese exports to the UK.
Thinking Historically About Mark Carney’s Davos Comments on Global Public Goods
Much of the global commentariat has been talking about Canadian Prime Minister Mark Carney’s recent speech in Davos. There is a great deal of chatter about this speech (see historian Adam Tooze’s analysis here), which ably blended some economic reasoning with historical references to Vaclav Havel and even to Thucydides, a writer whose ideas continue to influence serious thinkers in Washington and beyond (see what I have written about the influential concept of the Thucydides Trap).
I don’t agree with everything said in the speech, particularly his insinuation that global supply chains have made us more, rather than less, vulnerable to supply interruptions. (This issue was debated during Covid, and I remain convinced that globalized supply chains reduce our vulnerability.) However, I’m struck by the fact that Carney spoke about the provision of global “public goods” in his speech. I’ve spent much of my academic career, which dates back to the time when Bush and Blair were in power, thinking about empire, hegemony, and business, particularly in the three great and generally benevolent countries of the North Atlantic Triangle: Britain, America, and Canada. The title of the book that came out of my PhD thesis was British Businessmen and Canadian Confederation: Constitution Making in an Era of Anglo-Globalization. My schtick has long been to take some basic concepts from economics (e.g., global public goods, some public choice theory) and then apply them in fairly old-fashioned historical research on the development of political institutions, constitutions, and what Douglass North called “the rules of the game.”
Economists use simple examples to make profound points. One of the first things you learn in Econ 101 is that public goods are hard for markets to provide. Think of a street lamp on a city corner. Once lit, it brightens the block for everyone. You cannot easily exclude people from its light, and one person’s use doesn’t diminish another’s. Because no vendor can effectively charge for its light, the private market undersupplies street lighting; free riding depresses revenue below what would be socially efficient.
Public goods contrast with the other familiar categories. Private goods are rival and excludable: my coffee and your coffee are gone once consumed, and the café can lock the door and charge. Club goods are non-rival up to some capacity and excludable: a gym membership gives you access, and the owner can keep non-members out. Common pool resources are non-excludable but rival: one fisher’s catch depletes the fish stock available to all. Each category has a different incentive problem and therefore a different institutional response. Street lighting gets funded by local taxation; fisheries require quotas or property rights; clubs use membership rules.
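This taxonomy is mechanical enough to sketch in code: each category is just a combination of two yes/no properties. A minimal illustration (the function and the example goods in the comments are mine, not anything from the economics literature beyond the standard textbook labels):

```python
# The four textbook categories of goods, determined by two properties:
# rivalry (does my use diminish yours?) and excludability (can a seller
# keep non-payers out?).

def classify(rival: bool, excludable: bool) -> str:
    if rival and excludable:
        return "private good"          # my coffee vs. your coffee
    if not rival and excludable:
        return "club good"             # a gym membership
    if rival and not excludable:
        return "common-pool resource"  # one fisher's catch depletes the stock
    return "public good"               # a street lamp

print(classify(rival=False, excludable=False))  # public good
```

Each cell of this little truth table maps onto a different institutional response, which is the point of the paragraph above: the incentive problem, and hence the fix, follows from the two properties.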
When we scale the lens up from a city to the globe, the same underlying logic persists but the problems get harder. There is no global Leviathan with a tax base and coercive authority. As there isn’t a one-world government, there is no “city hall” to collect the equivalent of property taxes and to provide the global equivalent of street lights. The goods we call global public goods (stable sea lanes, predictable finance, eradication of disease, coherent standards for telecommunications infrastructure, and, potentially in the future, the deflection of dangerous asteroids) are non-excludable across borders and non-rival in use, but no sovereign exists to fund and enforce them. The result is a global collective action problem: individually rational choices aggregate to an outcome that is suboptimal for all.
How, then, do such goods get provided? Part of the answer lies in a tradition of scholarship known as hegemonic stability theory, which traces its intellectual lineage to the historical work of Charles Kindleberger. In his study of the interwar world economy, written in the 1970s, Kindleberger argued that the collapse of the global order in the 1930s was not accidental but the predictable outcome when Britain, the world’s former hegemon and global policeman, could no longer bear the costs of international responsibility and the United States would not yet step in. In Kindleberger’s telling, stability in the international economy requires a leader with both capacity and a stake in the system, willing to underwrite key public goods. That leader may not be benevolent, but it is beneficent: it has a vested interest in a safe and prosperous world.
Later scholars, most prominently Robert Keohane, formalised this insight into what came to be called hegemonic stability theory. The emphasis is not on altruism; rather, it is on incentives and capabilities. A dominant power, by virtue of its size and reach, internalises more of the benefits of a stable, open system than its weaker partners and is often in a position to bear the upfront costs and enforcement actions required to sustain that system. This logic explains why, in different periods, an incumbent hegemon undertook functions that others would gladly free ride on: because the hegemon’s own welfare was tied to the stability of the system as a whole.
Looking at history, this pattern is striking. In the nineteenth century, the Royal Navy suppressed piracy across the major sea lanes of commerce, from the Caribbean to the Straits of Malacca. British cruisers patrolled the Atlantic and Indian Oceans, not because the pirate problem was only a British concern but because Britain’s own economy, and, by extension, global trade, depended on relatively secure shipping routes. Over decades, this naval presence dramatically reduced the incidence of piracy; only when British capacity waned and geopolitical priorities shifted did piracy reappear in earnest in some regions.
A century later, with British naval hegemony eclipsed and the United States ascendant, another hegemonic provider emerged. In the late twentieth and early twenty-first centuries, the U.S. Navy led international coalitions to suppress Somali piracy in the Gulf of Aden and off the Horn of Africa. These engagements were often under United Nations mandates, but the material capacity (persistent presence, intelligence, and coordination) came largely from a single dominant navy. Like the Royal Navy before it, the U.S. contribution did not eliminate piracy everywhere, but it dramatically reduced attacks on commercial shipping when sustained presence was maintained.
Another historically significant case was the British effort to suppress the trans-Atlantic slave trade in the nineteenth century. After abolishing the trade within its own empire, Britain utilised its Royal Navy to interdict slavers on the West African coast and beyond. This was not fully successful in eliminating slavery in every nook and cranny of the quarter of the world that was notionally British, and it was unevenly enforced, but it was a case where dominant capacity and naval reach constrained a private-good-like behaviour with wide externalities. In a similar vein, during the Cold War the United States undertook a global security commitment against Soviet expansion and aligned threats. In that case, the post-1945 hegemon saved the world from a different type of slavery (Communism). Whether one views that commitment as virtue or interest (as an Anglican Christian, I’m against all forms of slavery, whether they involve cotton harvesting or Stalinism), it had the effect of providing a kind of collective public good that smaller states, even working in concert, could not have provided for themselves.
Where hegemons have been willing to act, globalisation and cooperation have expanded. Kindleberger was particularly interested in how cooperation broke down in the 1930s. With Britain exhausted after the First World War and the United States unwilling to shoulder leadership in the face of isolationist politics, the interwar system was left without a stabilising centre. Trade barriers rose, currency conflicts proliferated, and the global economy fractured. Deglobalization was a symptom of the underlying collective action failure: without a powerful state willing to underwrite the public goods of openness and order, the incentives of individual states led them to close markets and hoard liquidity.
Some global public goods, however, have been provided not by a single hegemon but through multilateral cooperation when the technology of provision made it feasible. The eradication of smallpox through the World Health Organization involved coordination and sustained effort from many countries. Remarkably, this feat was achieved at a time when the hegemony of the United States was being challenged by a near-peer adversary, the Soviet Union. Technical standards for post, telegraphy, and later the Internet were developed through international bodies negotiating compatibility rules that reduced transaction costs and uncertainty. Meteorological data sharing and tsunami warning systems operate through common reporting and reciprocal access to information. Even the Montreal Protocol of 1987, which successfully phased out ozone-depleting substances, succeeded through negotiated schedules, monitoring, and reciprocal incentives rather than through the unilateral enforcement of any one state. That great environmental achievement was possible due to a conjunction of factors: it wasn’t all that expensive to replace CFCs in fridges with something better for the ozone layer, and the agreement was implemented at a time when the US was at the peak of its relative power, during the unipolar moment after the end of the Cold War.
But not all global public goods are alike. Some require large upfront investments; some depend on enforcement capacity; some can be delivered by one good actor, while others need all actors to meet a minimum standard. Here it is useful to distinguish between two dimensions of global public goods:
First, whether their provision in practice depends on the existence of a dominant state with exceptional capabilities, and
Second, the nature of their production function:
2a) best-shot goods, where one major provider can largely determine success, and
2b) weakest-link goods, where failure by even a single actor undermines the whole.
An example of a best-shot public good comes from the movie Deep Impact, where a crew in the US-government-owned Space Shuttle provides a global public good: they deflect the asteroid that is heading toward Earth. All of the other countries were basically free riders in that film. (Seriously folks, let’s spend more on asteroid monitoring.) Weakest-link global public good provision would require every country to suppress the outbreak of new contagious diseases, Ebola-style. If just one failed state fails to do that and someone with, say, Super-Ebola gets on a plane, it doesn’t matter if most countries have done a good job of preventing outbreaks within their borders.
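The distinction can be made more formal, in the spirit of Jack Hirshleifer’s work on the “social composition function” of public goods: aggregate provision is roughly the maximum of individual contributions for best-shot goods and the minimum for weakest-link goods, with the textbook summation case in between. A minimal sketch, using made-up effort levels purely for illustration:

```python
# Sketch of aggregate public-good provision under different
# production ("social composition") functions. Effort levels are
# illustrative numbers, not data.

def best_shot(contributions):
    # One capable provider determines the outcome (asteroid deflection).
    return max(contributions)

def weakest_link(contributions):
    # The least capable provider determines the outcome (outbreak control).
    return min(contributions)

def summation(contributions):
    # The textbook baseline: total effort is what matters.
    return sum(contributions)

# Hypothetical efforts of a hegemon, a middle power, and a failed state:
efforts = [0.9, 0.5, 0.1]

print(best_shot(efforts))     # 0.9 -- the hegemon's effort carries everyone
print(weakest_link(efforts))  # 0.1 -- the failed state drags everyone down
```

The policy implication falls straight out of the arithmetic: for best-shot goods, what matters is whether anyone has exceptional capacity; for weakest-link goods, what matters is raising the floor everywhere.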
The following 2×2 matrix shows how I think about these issues.
| | Best-shot | Weakest-link |
| --- | --- | --- |
| **Hegemon-necessary global public goods** | Acute crisis liquidity and backstops: a credible balance sheet (e.g., U.S. dollar swap lines serving as global backstops) can stabilise expectations system-wide. Only institutions based in Washington, DC (the IMF, the Treasury) can do this, because only the US has the financial firepower. | Maritime security in contested or poorly policed corridors: deterrence fails if even one strategic route is effectively unpatrolled; a dominant navy’s presence mitigates this. Think of the Horn of Africa, where just one weak-link country (Somalia) endangered trade between Europe and Asia. Luckily we had a US-led naval alliance to take out those guys. |
| **Hegemon-unnecessary global public goods** | Ozone layer protection: feasible substitutes, mutual monitoring, and incentives made multilateral provision practical, but even in this case US dominance probably helped to get CFCs out of the world’s fridges. | Routine disease surveillance and reporting: the system’s performance hinges on minimum capacity everywhere, not dominance somewhere. This could probably continue without the US. |
This matrix reflects my viewpoint that not all global public goods require a hegemon to provide them. Many can be and have been provided through institutions, reciprocal arrangements, and clubs, especially when the technical problem does not hinge on coercive capacity or large unrecouped upfront costs.
All that being said, the implications for a world in which a dominant power no longer plays the role of beneficent hegemon are stark. (Let’s assume for a moment that Carney is right and that there has been a profound rupture and an end to the beneficent hegemony of the US.) As structural incentives shift, the supply of hegemon-dependent goods will tend to shrink. Institutions are not magic machines; they operate within a world of incentives and capabilities.
Global public goods likely to continue to be provided post-hegemony
Technical and interoperability standards
Postal, aviation, and telecommunications coordination
Meteorological and early-warning systems
Routine public health surveillance and cooperative disease control
Global public goods likely to go away without hegemonic capacity
Comprehensive crisis liquidity backstops
Persistent maritime security in strategically vital but contested waters
Coercive suppression of transnational threats requiring sustained physical presence (think of rooting al-Qaeda out of the mountains of Afghanistan)
Enforcement-intensive global mitigation efforts for environmental problems
For smaller advanced states such as the United Kingdom and Canada, the political-economic landscape has changed. For both of these countries, one policy implication is that climate change policy should pivot from mitigation to adaptation. Climate mitigation is a weakest-link, multilateral public good that depends on nearly universal cooperation. For our sacrifices to cut CO2 emissions to actually produce a reduction in the rate at which mean global temperature rises, virtually every country on the planet has to pursue the same carbon-tax-style policies. Maybe if Rwanda and a couple of other countries cheat and don’t introduce the same carbon tax rate as Canada and the UK, it won’t matter, but if there are more than a few free-rider nations, then it becomes pointless for the governments of countries like Canada and the UK to tax their citizens to reduce their nations’ CO2 emissions. I suppose that once it became clear that the world’s most populous states (China, India, the US, Indonesia) were not prepared to accept significant costs for mitigation, continued unilateral mitigation efforts by middle powers would yield little systemic effect. Maybe if the US had elected Al Gore in 2000 and had then used the unipolar moment to coerce all countries into adopting a uniform global carbon tax, we could have really avoided global warming. In light of Carney’s Davos speech, it seems to me logical for countries like Canada and the UK to reallocate their climate policy budgets to climate change adaptation.
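The arithmetic behind this futility claim is simple. A back-of-envelope sketch, with round numbers that are my own illustrative assumptions rather than official statistics:

```python
# Back-of-envelope: the systemic effect of unilateral mitigation by a
# middle power. All numbers are rough illustrative assumptions.

GLOBAL_EMISSIONS_GT = 37.0   # assumed annual global CO2 emissions, gigatonnes
MIDDLE_POWER_SHARE = 0.015   # a Canada-sized emitter: roughly 1.5% of the total
UNILATERAL_CUT = 0.40        # suppose it cuts its own emissions by 40%

# The global reduction is the product of two small fractions.
global_reduction = MIDDLE_POWER_SHARE * UNILATERAL_CUT
print(f"Global emissions fall by about {global_reduction:.2%}")  # about 0.60%
```

Even a heroic 40 per cent unilateral cut by a country emitting 1.5 per cent of the global total moves the worldwide figure by less than one per cent, which is why the production function of this particular public good matters so much.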
For the UK, a world without US hegemony strengthens the case for re-joining the EU, especially if the EU reforms its policies. It also strengthens the case for efforts to make the country more self-sufficient in food: submarine warfare in two world wars taught British food consumers about the downsides of relying on distant continents for their calories. That means adjusting agricultural policies so we aren’t doing things like paying farmers to let fields revert to nature. You can do that in a world in which there is one big navy to keep global supply chains moving. Some luxuries are no longer wisely affordable.
For 150 years, Canadians have had to think about the pros and cons of their unique status in the global hegemonic system: Canadians have never been actual citizens of the global hegemon, represented in its ruling legislature. Yes, there were individual Canadians in the imperial parliament in 1910, and some of them later rose to Cabinet rank, but they weren’t representing Canadian constituencies. Perhaps the world would have been a better place if Imperial Federation had been implemented, but the idea of a federal parliament for the whole British Empire was never tried. However, Canadians have been close enough to each of the successive global hegemons to influence its policy and lend their strength to the hegemon’s work. Canadian-born officers helped to rule India. Canadian-born speechwriters worked in the Bush administration, and most of their West Wing colleagues weren’t really aware that the guy they were talking to wasn’t U.S.-born or even a U.S. citizen. Canadians now need to reflect on the pros and cons of continuing with their unique status on the North American continent, which deprives them of any participation in such decision-making forums as the electoral college of the United States and Congress.
Other implications doubtless flow from the matrix I’ve created, but I will discuss them in a future blog post. I will conclude by saying that the point of studying history is not to romanticise past hegemons but to see clearly the conditions under which cooperation is easier and those under which it frays, and then to design institutions that fit the world as it is, not as we wish it to be.
When Canadian Prime Minister Mark Carney framed his new infrastructure and resource agenda as a “nation-building” project, he tapped into an old Canadian ambition: escape velocity from the gravitational pull of the U.S. market. The motivation for this sudden interest in trade diversification is Donald Trump’s stated intention to use tariffs as leverage to force Canada to become the fifty-first state. That threat has sharpened Ottawa’s incentive to diversify exports in a way that successive Canadian governments, dating back at least to the Ottawa Economic Conference of 1931, have aspired to but consistently failed to achieve.
Image source: 2016 presentation by Lawrence Schembri of the Bank of Canada to the AIMS think tank in Halifax.
This historic pattern has been remarkably stable. The history of Canada’s efforts to divert trade away from the United States is a boulevard of broken dreams. Whether Canada pursued Commonwealth preferences (the sentimental favourite of the political right in Canada), Pierre Trudeau’s “Third Option” in the 1970s (which envisioned closer trade ties with the social-democratic countries of the EEC), or the Asia pivot of the 1990s (which involved Team Canada trade missions in which planeloads of politicians and businessmen flew on glorified sales missions), the structural fact remained: the U.S. absorbed the overwhelming majority of Canadian exports. Geography, cultural similarity, scale, integrated supply chains, and risk-minimizing behaviour by firms kept the share stubbornly high. The Team Canada trade missions of the late 1990s appear to have had nearly zero impact on Canada’s international trade, as academic researchers have statistically demonstrated. The relative importance of Asian export markets to Canada did increase after 2000, but that appears to have been driven entirely by economic growth in Asia rather than Canadian government policy.
The question right now is whether the latest tranche of so-called fast-tracked projects, which involve commodities such as critical minerals, LNG, graphite, and electrons flowing down wires, can meaningfully reduce Canada’s degree of export dependence on the United States. The real question is not whether these projects are “nation-building” in an abstract national-identity sense, but whether they actually shift the geography of Canadian exports. On that metric, only one of these projects really matters, for reasons I will explain below.
I decided to invest a bit of time in trying to figure out which of the newly announced projects is likely to do the most to change the overall headline figure. So I looked at some of the documents related to four projects with clear export potential—Northcliff Resources’ (TSX:NCF) Sisson mine (tungsten/molybdenum), Crawford (nickel), Ksi Lisims (LNG), and NMG Phase 2 (graphite)—and then tried to estimate how many dollars of exports each of them would likely generate in 2030 and 2035, if all of the plans come to fruition without delays. I know that’s a highly charitable assumption, given we are talking about Canada. For fun, I also tried to estimate how many full-time jobs each project would create, since there is a lot of concern in Canada right now about the country’s high unemployment rate, the outflow of talent to the US, and falling fertility rates.
| Project | Estimate of annual exports (US$) | Exports to US | Exports to RoW | Share of project sales that are exports | Share of exports going to US | Approx. FTE operations (2030 & 2035) |
|---|---|---|---|---|---|---|
| Sisson Mine (NB – tungsten/moly) | $310m | $190m | $120m | 90% | 60% | 300 FTE |
| Crawford Nickel (ON) | $560m | $340m | $220m | 70% | 60% | 1,000 FTE in the greater Timmins area |
| Ksi Lisims LNG (BC) | $6.86bn | $0 | $6.86bn | 100% | 0% | 700 FTE (operations) |
| NMG Phase 2 – Matawinie + Bécancour (QC) | $180m | $110m | $70m | 85% | 60% | 350 FTE (mine + battery plant) |
| Iqaluit hydro (NU) | $0 (no exports) | $0 | $0 | 0% | – | 20 FTE, likely seasonal |
| North Coast Transmission Line (BC) | $0 (no direct commodity exports) | $0 | $0 | 0% | – | ~100–150 FTE (grid ops & maintenance) |
I could show you my rough work, but I estimate the total annual export value in 2035 at US$7.9 billion (C$10.3 billion). Of these exports, I estimate that US$7.3 billion (C$9.5 billion) will go to countries other than the US. As well, it seems that for most of these projects, with the obvious exception of the Iqaluit summertime hydroelectric project, virtually none of the commodities produced will be destined for the domestic Canadian market.
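For readers who do want the rough work, the totals can be reproduced from the per-project figures in the table above. A minimal Python sketch (the numbers are my own estimates, as above):

```python
# Per-project annual export estimates for 2035 (US$ bn), from the table above.
projects = {
    "Sisson": {"total": 0.31, "to_row": 0.12},
    "Crawford": {"total": 0.56, "to_row": 0.22},
    "Ksi Lisims LNG": {"total": 6.86, "to_row": 6.86},
    "NMG Phase 2": {"total": 0.18, "to_row": 0.07},
}

total = sum(p["total"] for p in projects.values())       # ~US$7.9 bn
to_row = sum(p["to_row"] for p in projects.values())     # ~US$7.3 bn
lng_share = projects["Ksi Lisims LNG"]["total"] / total  # ~87%
print(f"Total exports: US${total:.1f} bn; to RoW: US${to_row:.1f} bn; "
      f"LNG share of total: {lng_share:.0%}")
```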
The arithmetic is unambiguous: roughly 87% of incremental export value from Thursday’s announcement is expected to come from just one of the new projects announced by Carney: the LNG project. The LNG from this facility will overwhelmingly go to Asia, where energy is much more expensive than in North America and there is a desperate desire to stop using Russian energy. The export revenue and export diversification potential of the other projects is pocket change. The critical minerals and processed graphite are supposed to feed into the North American EV supply chain. (During the Biden era, Canada hoped that it would be part of an emerging North American value chain. That’s why it put massive tariffs on Chinese EVs. The vision of the future that animated the Canadian government was of Canadians driving around in EV Chevrolets and Fords manufactured in the Great Lakes region of North America.)
How Much Will Any of This Contribute to Diversification?
If it is built on time and at scale, the LNG project could have a small but positive impact on the variable that the Carney government claims it wants to move downwards: the percentage of Canadian exports that go to the US. In 2023, Canada’s total exports of goods and services were about US$600 billion. Ksi Lisims is designed for 12 mtpa of LNG, supplied by roughly 1.7–2.0 bcf/d of gas, with the first shiploads leaving Canada in late 2029. In 2030, it will start earning foreign exchange for Canada. Now if we assume, for the sake of simplicity, that the price of LNG will be the same in 2030 as it is today and that Canada’s overall exports will increase between now and 2030 at the historically expected pace, a fully operational Ksi Lisims LNG facility could account for roughly 0.7–0.8% of the value of Canada’s total exports in the calendar year 2030. That’s not nothing.
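A back-of-the-envelope version of that calculation can be run in a few lines. The 2023 export figure and LNG revenue estimate come from above; the 6% growth rate is my own illustrative assumption for “the historically expected pace”:

```python
# Back-of-the-envelope share of Canada's 2030 exports attributable to Ksi Lisims.
# Assumptions: flat LNG prices and ~6%/yr nominal export growth (illustrative).
exports_2023 = 600.0                                   # US$ bn, goods & services
assumed_growth = 0.06                                  # my illustrative growth rate
exports_2030 = exports_2023 * (1 + assumed_growth) ** 7
lng_revenue = 6.86                                     # US$ bn at full capacity
share = lng_revenue / exports_2030
print(f"Projected 2030 exports: ~US${exports_2030:.0f} bn; "
      f"Ksi Lisims share: {share:.2%}")                # lands in the 0.7-0.8% range
```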
When I look at the few concrete steps the Carney government has taken to diversify Canada’s trade away from the US, I’m convinced they are mostly symbolic. In that sense, they are strikingly similar to the Global Britain rhetoric we heard in the UK for a few years after Brexit. The UK spoke about striking ambitious trade deals with distant countries but was unwilling to do much in that area, in large part because of the domestic political costs of some of the proposed trade deals. I’m also concerned that policymakers are allocating an increasingly scarce resource, attention, to issues that have no export potential. The fact that a really important LNG project sits on the same list as some tiny projects suggests that the list-makers aren’t prioritizing.
I read everything Professor Joseph Heath writes, whether it’s a blog post from In Due Course or one of his books. He’s one of the most consistently clear-headed philosophers working today, especially when it comes to political economy. His recent Substack post, “Are cooperatives more virtuous than investor-owned firms?”, is another example of his signature style: lucid, skeptical, and refreshingly empirical. In it, Heath challenges the romanticism often attached to cooperative firms, arguing that they are not inherently more virtuous (i.e. more socially beneficial) than investor-owned enterprises. (I suppose we should define a virtuous organization here as one that seeks to increase the utility of the human race/sentient beings as a whole, not just the people who control the organization.) I agree with Heath’s central claim, and it aligns with the argument made by Henry Hansmann in The Ownership of Enterprise, which shows that the choice of ownership form is best understood as a response to transaction costs and governance challenges.
Heath’s post also makes a comparative political claim that deserves closer scrutiny. Here’s where the empirical researcher in me gets pedantic. He writes: “In Canada, co-operatives have always played a much more important role in left-wing politics than they have in the UK.” This is a striking assertion, and one that I think isn’t quite right. While Canada certainly has a rich tradition of cooperative enterprise, especially in agriculture and finance, and above all in Heath’s native province of Saskatchewan, the UK’s Labour Party has had a formal electoral alliance with the Co-operative Party for about a century. This relationship is not merely symbolic. As of the 2024 general election, 41 sitting Labour MPs are also officially designated as Labour and Co-operative MPs. These MPs advocate for cooperative principles within the broader Labour agenda, and the alliance reflects a deep institutional connection between cooperative ownership and British left politics.
The historical relationship between the Co-operative movement and Britain’s Labour Party dates back nearly a century, to the time when Canada’s CCF, and its predecessors, closely followed intellectual trends in the UK. The cooperative movement and the UK Labour Party formalized their alliance in the 1920s, jointly endorsing candidates who represent both Labour’s social democratic values and the Co-operative movement’s commitment to shared ownership and democratic control. Today, the partnership remains robust, with dozens of MPs carrying the joint designation, although I can’t think of any recent Labour policies in tax or anything else that favour the cooperative form over investor-owned firms.
By contrast, the link between cooperative ownership and left politics in the United States seems to be far weaker, except perhaps in a few states settled by Scandinavians. While there are many successful cooperatives in the US, particularly in agriculture and rural finance, these organisations often have stakeholder bases that would almost certainly lean Republican. Some of the largest agricultural co-ops in the Midwest, for example, are deeply embedded in conservative communities. The cooperative form in the US has not been consistently championed by the Democratic Party, nor has it been institutionally integrated into, say, the DNC’s machinery, in the way it has in the UK or Scandinavia. In countries like Sweden and Norway, cooperative ownership is tightly woven into the fabric of social democracy, supported by both policy and party infrastructure.
This variation raises an important question: why is the linkage between cooperative ownership and left politics stronger in some countries than others? We probably need scholars to develop a causal model that explains this divergence. Such a model would need to account for historical party structures, electoral systems, the role of civil society, regional economic patterns, and perhaps even cultural attitudes toward ownership and governance. Heath’s skepticism about cooperative moralism is well-founded, but his comparative politics could use a bit more empirical grounding. A few years ago a great book on the history of the UK’s Cooperative group was published by some of my friends. It could provide empirical detail to stimulate the thinking of philosophers.
Business historians are very interdisciplinary, and they are always looking to publish their historical findings in new journals. So here’s an opportunity that sits squarely at the intersection of business history and contemporary policy: the Sustainability (MDPI) special issue on “Energy Transitions and the Banning of Synthetic Products: Historical Developments and Present-Day Controversies.” The journal’s current Journal Impact Factor is 3.3, which puts it in the same general ballpark as some of the business-history-adjacent outlets that are important to business historians in business schools. I see that Industrial and Corporate Change’s 2024 JIF is 1.8; the Journal of Economic History’s is 2.9. The upshot: publishing in this Special Issue could give business historians reach into policy conversations.
Practicalities first. The submission deadline is 30 June 2026; the special issue is under Sustainability’s “Energy Sustainability” section. Manuscripts go through the standard single-blind review, and accepted pieces appear online on a rolling basis. (APC details and instructions for authors are on the call page.)
The guest editor is Pierre Desrochers (Department of Geography, University of Toronto Mississauga). Many of you will know Pierre from BHC: he presented back in 2010 (“Industrial Symbiosis: Old Wine in Recycled Bottles?”), and he has long collaborated with scholars working at the business–environment interface. In short, he knows our literature and our norms.
What makes this SI especially attractive to business and economic historians is its framing of “energy transitions” through the lens of E. A. Wrigley’s organic-to-mineral economy narrative. Wrigley’s central claim was that industrial development required escaping the constraints of an “organic economy” dependent on surface-grown resources by shifting to a “mineral economy” (coal, later hydrocarbons, and synthetic materials). But this is not just an energy story. It’s a firm-level, supply-chain, and market-structure story: greater energy density changes relative prices, which reorganize production functions, logistics, and the boundaries of the firm.
The CFP talks about Wrigley and then asks us to interrogate today’s policy moves that nudge economies back toward organic inputs (biomass substitutes for plastics, mandated renewables) and away from synthetics and fossil fuels. That is precisely the sort of historically grounded counterfactual thinking that business historians are well placed to do.
Several research avenues suggest themselves:
Firm strategy and path dependence. How have incumbent producers of synthetics and polymers adapted to regulatory pushes toward “organic” alternatives? Do we see Schumpeterian entry or defensive consolidation? A comparative sectoral history—synthetic rubber, plastics, fertilizers—could illuminate.
Transaction costs and infrastructure compatibility. Wrigley’s story is ultimately about system-level complementarities (fuels, machines, transport, finance). Policies that discourage synthetics may impose hidden coordination costs across supply networks. Business historians are good at thinking about the law of unintended consequences and can also quantify such frictions with archival pricing series and procurement records.
Addition versus substitution. The SI explicitly notes that “transitions” often look like energy addition, not displacement. That invites studies of rebound, stacking, and multi-fuel equilibria inside firms and regions. Those are all topics business historians are good at.
Corporate political economy. The present-day controversies over plastics bans, grid stability, and renewables integration echo earlier episodes (municipal light & power debates, post-war petrochemical build-out). Tracing lobbying, standard-setting, and coalition formation across episodes can test whether today’s arguments are genuinely novel or just re-packaged.
It’s an exciting time to be talking about energy transitions, particularly in light of the recent conversations sparked by the “note” by Bill Gates. If your comparative advantage is archival depth, you can still target contemporary relevance: the editor explicitly welcomes historical or contemporary analyses, qualitative and quantitative work, and literature reviews.
Despite common ancestry in English rugby football, Canadian football has long stood apart from its American cousin. The Canadian Football League (CFL) plays on a field that is 110 yards long, with two 20-yard end zones, and 65 yards wide, compared to the NFL’s 100-yard field with 10-yard end zones and a width of 53⅓ yards. Bigger country, bigger football field, you might say. These differences at the pro level exist alongside three downs instead of four, and they cascade down into university and high school play as well.
Image: The second Harvard–McGill football game, played under rugby rules on 15 May 1874. The Harvard players are on the left (in white), and the McGill players on the right; they flank the game officials. Source: Parke H. Davis, Football – The American Intercollegiate Game (1911), no longer in copyright.
Under Commissioner Stewart Johnston, a Queen’s University graduate who took office in April 2025, the CFL has proposed and begun to phase in a package of changes that will move aspects of the game toward NFL norms. In 2027, the league will move the goal posts to the end line, shorten the field from 110 to 100 yards, and reduce the end zones from 20 to 15 yards. These details were announced by the league and summarized in national coverage.
Fan reaction has been mixed. One organized response is a petition by CFL fans calling for a two-week blackout to delay the implementation of the changes, on the grounds that they erode the distinctive character of Canadian football. The petition and the surrounding discussion make plain that identity and tradition matter in sport, and that any shift toward alignment with the American game will be scrutinized closely by the league’s core audience. Today, the Globe and Mail, Canada’s national paper of record, denounced the proposed rule changes in nationalistic, quasi-religious terms as breaking a “covenant”. That piece in the Globe prompted me to write this blog post.
I strongly support the proposed convergence of rules. The CFL commissioner is doing the right thing. Harmonization reduces transaction costs for broadcasters, equipment suppliers, analytics firms (think of Moneyball, but in the age of AI), and sponsors who operate across the Canada–United States market. Players benefit from fewer adjustment frictions when moving between leagues or training environments. On a broader level, harmonization deepens the interconnection between the Canadian and United States football economies in such areas as media rights, merchandising, talent mobility, and joint ventures in youth development, and it thereby enhances bilateral returns from cross-border synergies. These are exactly the sorts of effects one expects when regulatory regimes become more compatible.
The economists’ gravity model of trade helps explain why this logic travels beyond sport. The model holds that trade between two economies rises with their economic size and falls with distance, where distance includes not only geography but also regulatory difference. Canada trades far more with the United States than with distant partners because the United States is both very large and very close in every relevant sense, including legal and regulatory familiarity. A contemporary illustration of why gravity models matter in thinking about regulation is the United Kingdom’s experience after Brexit. The Brexiteers wanted UK rules to diverge from European ones because divergence felt good, at least for them. However, they should have looked at a map before deciding whether regulatory divergence from Europe was the right policy; the fact is, the UK is in Europe. Even small increases in regulatory distance from the European Union created non-tariff barriers, compliance costs, and uncertainty that weighed on trade, despite the minimal geographic distance. The lesson is straightforward. When regulatory distance grows, exchange tends to fall, even where countries sit side by side. The UK should, almost always, harmonize regulations with the EU, unless there is a really compelling reason for divergence. Similarly, Canada should, almost always, harmonize regulations with the US, unless there is a really compelling reason for divergence.
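The underlying logic can be shown with a toy version of the canonical gravity equation, T_ij = G * (Y_i * Y_j) / D_ij, where effective distance D_ij is stretched to include regulatory divergence. All numbers here are illustrative, not calibrated estimates:

```python
# Toy gravity model of trade: predicted flow rises with the partners' GDPs
# and falls with effective distance (geographic plus regulatory).
def gravity_trade(g, gdp_i, gdp_j, distance):
    """Predicted bilateral trade flow, T_ij = G * (Y_i * Y_j) / D_ij."""
    return g * (gdp_i * gdp_j) / distance

# Illustrative Canada-US pair: doubling effective distance (say, through
# regulatory divergence) halves predicted trade, holding GDPs constant.
base = gravity_trade(1.0, 2.2, 29.0, 1.0)
diverged = gravity_trade(1.0, 2.2, 29.0, 2.0)
print(base, diverged)
```

The point is purely qualitative: in the gravity framework, regulatory divergence acts exactly like moving a trading partner farther away.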
History offers a practical reminder of what happens when standards diverge from infrastructure. During the CFL’s United States expansion in the mid-1990s, several American venues struggled to accommodate a full Canadian field. In Memphis, for example, attempts to fit the larger geometry into the Liberty Bowl produced irregular, truncated end zones that were reported to be as shallow as seven to nine yards at certain points, an awkward solution that undercut the on-field spectacle. I distinctly recall that being discussed at the time, although until today I had forgotten about the problems involved in squeezing a CFL playing field into a US stadium. Lack of rule harmonization was not the main reason CFL expansion into the US failed, but it was a factor.
None of this is an argument for indiscriminate alignment in every last area of life. There remain moral, cultural, and constitutional domains where distinct Canadian standards are appropriate, just as there is a case for the UK retaining Imperial units of measurement in a few spheres of life that aren’t that important to visiting Europeans. Nobody wants Canada to start using the electric chair because the US does. I kinda like that my local fruit and veg trader here in England can now sell in pounds of weight as well as pounds of money, although I would trade that for the right to live in the south of France near the Med. Nor would I deny that there is some value in trade diversification efforts: Canada has periodically explored trade diversification strategies, from the Third Option in the 1970s to more recent efforts. However, the structural forces described by the gravity model will keep the United States as Canada’s principal economic partner for the foreseeable future. In that context, targeted harmonization of rules is less a surrender of sovereignty than a way to sustain it.
If you don’t want Canada to become the fifty-first state, you need to make Canada as rich as possible. The best route to preserving meaningful independence is a strong economy, and building such strength involves deeper trade with the United States and regulatory compatibility that enables it. To have a strong country, army, navy and so forth, you need a rich economy to act as its tax base. To get to that rich economy, you sometimes need to adopt rules and institutions from the hegemonic power. Paradoxically, building a richer and more sovereign Canada involves adopting US rules.
For years, I have been thinking about the likely impact of AI on labour markets. In fact, I taught a career strategy class for first-year university students that introduced them to the scholarly debates about the automation of different types of cognitive tasks and then got them thinking about how they should adapt their career strategies to AI. I explained that AI was going to eliminate a few jobs entirely and would replace human input in parts of jobs. (Many jobs involve bundles of different tasks). The students usually ended up writing that they needed to use their time at university to develop skills that are complementary to AI. My point is that I’ve spent a fair bit of time thinking about how AI will impact different labour markets.
One of the most consequential but still under-examined implications of artificial intelligence is its likely impact on the boundaries of firms. Until recently, I wasn’t thinking about how AI is going to change the case for vertical integration. In this blog post, I’m going to try to use business history to think about these issues.
As Ronald Coase observed in his 1937 paper “The Nature of the Firm”, firms arise to internalize those activities that are too costly to handle through contracts and arm’s-length exchange. Coase was trying to answer a deceptively simple question: if markets are so great, why do firms exist at all in a market economy? If markets are so wonderful as efficient allocators of resources, what justifies the creation of hierarchical structures that internalize production? Each large firm is an island of command-economy hierarchy in a sea of market forces. Coase’s answer lay in the concept of transaction costs, which are the costs associated with using the market to coordinate activity. These include the costs of finding suppliers, negotiating and enforcing contracts, and, crucially, the costs of dealing with opportunistic behaviour. When these market-based costs exceed the costs of managing an activity internally, through direction, authority, and monitoring, the firm will choose to make rather than buy. This logic underpins the concept of vertical integration, where a firm extends its boundary to include upstream suppliers or downstream distributors to avoid the frictions of market transactions. The make-or-buy decision thus turns on a comparative assessment of transaction costs inside the firm versus those incurred in the open market.
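Coase’s comparison can be caricatured in a few lines of code. This is a toy decision rule, not a claim about how any real firm prices these frictions, and all figures are hypothetical:

```python
# Toy Coasean make-or-buy rule: use the market ("buy") unless the sum of
# market transaction costs pushes the all-in market price above internal cost.
def make_or_buy(market_price, search, contracting, monitoring, internal_cost):
    market_total = market_price + search + contracting + monitoring
    return "make" if market_total > internal_cost else "buy"

# Low frictions favour the market; heavy contracting and monitoring
# frictions (e.g. hold-up risk) favour vertical integration.
print(make_or_buy(100, 5, 5, 5, internal_cost=120))     # 115 < 120, so "buy"
print(make_or_buy(100, 10, 15, 20, internal_cost=120))  # 145 > 120, so "make"
```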
A classic illustration of this logic comes from the case of General Motors and Fisher Body, which has been extensively discussed in the literature on transaction cost economics and by my fellow business historians (see here, here, and here). Initially, GM sourced automobile bodies from Fisher Body via a long-term contract. However, as demand for comfy “closed-body” cars surged in the 1920s, GM became increasingly dependent on Fisher, which got the upper hand and exploited its power. Fisher had little incentive to invest in production facilities that were close to GM’s assembly lines, and there were allegations that it exploited GM’s dependence by pricing opportunistically. According to the prevailing interpretation of this episode, Fisher Body’s managers adhered to the letter, not the spirit, of their contract with GM. According to the transaction cost interpretation, this created a classic case of asset specificity: Fisher had made investments tailored to GM’s needs, and GM was exposed to hold-up risk. In response, GM chose to vertically integrate by acquiring Fisher Body, thereby eliminating the need for ongoing contract renegotiation and securing control over a critical input. While some of my fellow business historians have questioned the details of this narrative, the case remains a widely used teachable example of how transaction costs, particularly those arising from relationship-specific investments, can drive firms toward integration by undermining the rationale for using the market.
My strong impression is that AI, especially when coupled with automation, predictive analytics, and large-scale data infrastructures, is reshaping each of the three main categories of transaction costs: search and information costs, bargaining and contracting costs, and monitoring and enforcement costs. Each of these shifts the relative attractiveness of market-based coordination versus managerial hierarchy, and each of them is already playing out in real firms right now.
My thinking about all of these issues has been influenced by Thierry Warin, a Montreal-based economist whose recent California Management Review piece, From Coase to AI Agents (2025), got me thinking about this issue. Warin suggests that the rise of AI agents changes not just the microeconomics of transaction costs but the architecture of organizational coordination itself. He builds on Coase but extends his insights into a world populated by autonomous agents, predictive models, and generative tools. Warin’s analysis suggests to me that while AI can certainly lower the costs of using managerial hierarchies (it makes it easier for managers to monitor their subordinates), it is almost certainly going to lower the transaction costs involved in using the market to a greater extent. As such, it is likely to shift many industries from coordination via managerial hierarchies to coordination by the market. So we will see fewer cases of companies acquiring their suppliers, as GM did with Fisher Body long ago. In fact, AI may cause the vertical disintegration of firms, accelerating a trend we saw in the late 20th century, when the great vertically integrated firms constructed in the first part of the twentieth century were replaced by coordination by the market.
Here’s what I have taken from the new work on AI and the theory of the firm. AI significantly reduces search and information costs, even more than was the case with Google searches. Intelligent agents, algorithmic procurement systems, and even natural language interfaces now make it cheaper to identify suppliers, assess their offerings, and match capabilities. Pre-AI and, especially, pre-World Wide Web, the difficulty of finding a reliable vendor with the right specialization created a very strong commercial rationale for integrating the function in-house. Now, many of those frictions are collapsing, along with the case for vertical integration.
AI can also assist in bargaining and contracting by drafting watertight, more nearly “complete” contracts, evaluating risks, and even simulating negotiation outcomes. Smart, self-enforcing contracting frameworks, potentially supported by blockchain infrastructure, embed enforcement directly into digital exchanges, reducing ex-post haggling and the costs of opportunism (think of Fisher Body). Lastly, the cost of monitoring third-party performance, traditionally a major argument for internalization, is falling. Thanks to the Internet of Things, you can monitor what a distant supplier is doing in real time and at lower cost than was the case thirty years ago. Thanks to AI, you can interpret all of that data without hiring lots of human compliance and monitoring people. Real-time analytics, machine vision (you don’t need a human to count how many units come off the assembly line), and anomaly detection (an AI agent can inspect the quality of the auto bodies that today’s Fisher Body is sending to today’s Alfred P. Sloan) will let firms oversee the quality and compliance of their suppliers without being physically present or organizationally intertwined.
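To make the monitoring point concrete, here is a deliberately simple sketch of the kind of anomaly detection described above, using a basic z-score rule on hypothetical supplier defect data (real systems use far more sophisticated models):

```python
import statistics

def flag_anomalies(measurements, threshold=2.0):
    """Return readings more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(measurements)
    sd = statistics.stdev(measurements)
    return [x for x in measurements if abs(x - mean) > threshold * sd]

# Hypothetical daily defect rates (%) streamed from a remote supplier's line.
daily_defect_rates = [1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 6.5, 1.1]
print(flag_anomalies(daily_defect_rates))  # flags the 6.5% day for human review
```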
This shift in the transaction cost landscape might suggest a simple conclusion: AI favours vertical disintegration everywhere and always. In reality, the actual effects of AI are context-dependent, and the classic Coasian trade-offs don’t disappear. In some industries, new technological capabilities reduce some transaction costs but increase others. For example, AI systems are often cognitively opaque. (Do I really understand the AI system that allowed me to find a great deal on my hotel for the Academy of Management?) A company may outsource an analytics task to an external AI provider, but understanding how the system arrived at a decision, and ensuring that it didn’t do something reputationally or ethically risky (here’s where the business ethics professor comes in), may be more difficult than managing a transparent internal process.
This gives rise to a more nuanced map of trade-offs. In some domains, especially those involving modular, standardized, and relatively routine tasks (making auto bodies for Alfred P. Sloan), AI will promote market-based governance and strengthen the case for vertical disintegration. In others, particularly where the decisions are ethically or legally fraught, AI may actually reinforce the case for vertical integration. The table below summarizes these contextual trade-offs.
In effect, AI reduces the traditional cost penalties of outsourcing, but it also introduces new strategic uncertainties. As a result, we should expect to see increasing divergence across industries and functions in how firms draw their boundaries. This is already visible in early-stage evidence. For example, some companies are disaggregating their analytics and marketing functions, relying on external AI vendors with scalable expertise. I hope those firms know what is going on within the AI products they are buying. Meanwhile, Tesla has reintegrated key parts of its supply chain, including battery production and chip design. I bet they are doing so to remain compliant with the law.
Some industries, like precision medicine, defence, or autonomous vehicles, require complex coordination between proprietary hardware, sensitive data, and domain-specific machine learning. In those cases, control over process and data becomes a source of value, which means that vertical integration offers benefits in the age of AI that weren’t there before.
| Sector | Likely Impact of AI on Firm Boundaries | Rationale |
|---|---|---|
| Digital marketing, logistics/trucking | Disintegration | High modularity, low IP sensitivity, strong market tools |
To put this all in perspective, it helps to bring research from my home field of business history into play and discuss how firm boundaries have evolved in response to past technological changes that shifted the cost of moving information. These new technologies, from the telegraph to the fax machine, changed transaction costs in different periods, and thus the case for using vertical integration in the face of a difficult Make or Buy decision.
Let’s draw on the work of Alfred Chandler, a historian of business organization who, in the 1970s and 1980s, laid the groundwork for our understanding of vertical integration in the industrial age. To people outside of the business history fraternity, Chandler is best known for The Visible Hand (1977), where he argued that the rise of large, vertically integrated firms in the decades around 1900 was driven by their ability to coordinate better than the invisible hand of the market. His argument, which was strongly influenced by Ronald Coase’s theory of the firm, was that as railroads, the telegraph, and the telephone lowered internal communication costs, it became more efficient to organize production hierarchically and within big firms. The visible hand of management replaced the invisible hand of the market because of technology. If AI is likely to promote vertical disintegration in many industries today, the telegraph had the opposite effect in the age of John D. Rockefeller and Alfred P. Sloan.
Chandler’s story is one of integration enabled by coordination. He documented how firms like DuPont, General Motors, and Standard Oil created internal hierarchies to manage flows of materials, information, and decision-making in ways that captured economies of scale and scope. The availability of the telegraph and the typewriter made real-time internal coordination feasible. Centralized administrative systems, bolstered by accounting innovations, allowed firms to replicate their managerial processes across divisions. The relative cost of internal governance dropped below the cost of external contracting, and vertical integration became the rational response.
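The transaction-cost logic at work here can be sketched as a toy comparison. This is my own illustrative sketch, not anything from Chandler or Langlois, and all the numbers are made-up assumptions:

```python
# Toy Coasean make-or-buy model: a firm internalizes an activity when the
# cost of internal governance is below the market alternative, i.e. the
# market price plus the transaction costs of using the market.
# All figures below are illustrative assumptions, not empirical estimates.

def make_or_buy(internal_cost: float, market_price: float,
                transaction_cost: float) -> str:
    """Return 'make' if in-house production is cheaper than buying on the
    market once transaction costs are included, else 'buy'."""
    return "make" if internal_cost < market_price + transaction_cost else "buy"

# Telegraph-era story (Chandler): using the market is costly, so the
# activity gets pulled inside the firm.
print(make_or_buy(internal_cost=100, market_price=90, transaction_cost=25))

# AI-era story (Langlois): the same activity, but falling transaction
# costs flip the decision toward the market.
print(make_or_buy(internal_cost=100, market_price=90, transaction_cost=5))
```

The point of the sketch is simply that nothing about the firm's own technology needs to change: a fall in the cost of using the market is enough to flip the same decision from "make" to "buy".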
Chandler developed his ideas in the 1960s and 1970s. Almost immediately after he published his landmark book, new communications technologies began to reduce the costs of using markets, which then helped to produce a wave of vertical disintegration. Richard Langlois, a brilliant economist and historian of technology, advanced what he called the “vanishing hand” thesis, deliberately inverting Chandler’s title. In a series of papers published around 2000 and then a book that I recently and glowingly reviewed in the journal Business History, Langlois argued that Chandlerian integration was historically contingent. It was a response to immature market institutions and underdeveloped communication infrastructure. Once information technology advanced, market coordination became viable again, and firms began to shed internal functions in favour of modular, contract-based production. Langlois saw the return of the invisible hand.
Chandler saw the rise of vertically integrated firms and managerial hierarchies as a functional response to the challenges of industrial coordination in the early twentieth century. Big firms emerged, in his telling, because they could do what markets couldn’t: coordinate complex production and distribution systems more efficiently. Langlois, writing decades later, picks up the pen where Chandler put it down and then continues the story. His “vanishing hand” theory argues that by the late twentieth century, those same large, bureaucratic firms, which were oligopolistic, integration-heavy, and middle-manager-laden, had outlived their usefulness in most sectors. Once the economy moved through the transitional phase Chandler had chronicled, the visible hand of management became less necessary. Technological progress, especially in computing and communications, lowered the cost of outsourcing and made decentralized coordination viable again. As a result, market selection started penalizing firms that clung to the old integrated model.
Where Chandler saw integration as a triumph of managerial capacity over market chaos, Langlois saw it as a workaround: a temporary fix until markets and modularity caught up. Seen from a Langloisian perspective (I just made the adjective up), improvements in the market’s ability to handle complexity, which are driven by IT, digital standards, and now AI, restore the advantages of specialization and exchange.
Take-away lessons
My reading of history suggests to me that AI is going to have a big impact on firm boundaries (I’m very confident of that), and that it will, on net, encourage vertical disintegration in most industries (I have moderately high confidence in this history-informed prediction).
Right, so what are the implications of these two history-informed claims about the future for investors? By investors, I mean people who aren’t passive investors in index funds but who are in the foolish/risky game of picking stocks. If AI is indeed going to be analogous in its effects on levels of vertical integration to Morse’s electric telegraph, Malcom McLean’s container ship, and the World Wide Web, then active investors should be thinking less about which firms can own the full value chain and more about which ones are best positioned to orchestrate, specialize, or intermediate. The big opportunity lies not in backing vertically integrated giants, but in identifying firms that sit at key nodal points in increasingly disaggregated value chains. This includes infrastructure providers with privileged access to training data or cheap computing power (e.g., foundation model specialists), platform firms that coordinate ecosystems rather than build everything themselves, and hyper-focused specialists that can carve out high-margin niches in the long tail of modularized functions. Investors should also be attentive to companies with architectural leverage: those that define the protocols, interfaces, and workflows that others plug into.
I never give investment advice on my blog. However, if I were to make a suggestion for short sellers based on my reading of history, it would be to target firms whose business models remain over-invested in vertical integration at a time when AI-enabled disintegration becomes the lower-cost, higher-flexibility equilibrium. These are companies that double down on doing everything in-house (manufacturing, analytics, logistics, customer service) even as AI makes it increasingly efficient (and strategically necessary) to specialize, outsource, or orchestrate ecosystems instead. Chandlerian-style firms that depend on tightly coupled hierarchies and rigid internal workflows may find themselves bloated, slow to adapt, and burdened by fixed costs in an economy that increasingly rewards nimbleness and interoperability. If they fail to unbundle or pivot, their margins erode and their strategic relevance declines. I’m thinking of Intel here.
The most promising short candidates would be firms that (a) operate in sectors where vertical disintegration is becoming increasingly feasible due to AI, such as logistics, IT services, and legal or back-office operations, and (b) persist with high internal headcount, capital-intensive infrastructure, or proprietary tech stacks that do not interface well with emerging AI ecosystems. I’m not saying that vertical integration would be maladaptive in all sectors. In domains where AI introduces new opacity, liability, or tightly coupled learning loops, such as national defence (think of NATO’s new 5% target), biotech, or autonomous systems, vertical integration may remain rational.
Let’s turn now from the implications for investors to the societal implications. So what is this going to mean for the non-shareholder stakeholders of firms? For workers, local communities, governments, and the natural environment?
Gerald Davis’s The Vanishing American Corporation (2016) tells the story of how the large, vertically integrated corporation, which was the dominant organizational form in American economic life at the time Chandler wrote his book, has steadily eroded since about 1980. He attributes this shift to changes in technology, finance, and ideology that made it increasingly feasible and desirable for firms to disaggregate. His account is congruent with that of Langlois, except that Davis is more interested in the implications of this change for workers, for Joe Sixpack in places like Michigan. (Davis works at a university in Michigan!) In his account, as supply chains became global, digital technologies reduced coordination costs, and capital markets demanded flexibility and short-term performance, firms began to outsource everything from manufacturing to HR to R&D. The core transaction-cost logic here mirrors Richard Langlois’s “vanishing hand” thesis: improvements in market-supporting institutions and technologies enabled tasks that once had to be done internally to be done more efficiently through the market. Davis shows how the vertically integrated firm, which once offered stable lifetime employment to male breadwinners, predictable career ladders, and a broad social compact, gave way to leaner, more modular organizations and thus, ultimately, to platform-based firms with minimal internal labour forces. The social implications, in Davis’s view, are stark: the decline of the traditional corporation has undermined job security, frayed the link between firms and communities, and contributed to rising inequality and precarity. If I were Davis, I would predict that AI will deliver even more of the same.
The latest spat between Elon Musk and Donald Trump has moved from petty to poisonous. After months of barely concealed tension, Musk finally went on the offensive. For an overview of the events of the last 48 hours, see here, here, and here.
The two men, once mutual admirers, now represent opposing poles in the Republican constellation. This latest clash offers more than just tabloid drama for us to gossip about; it underscores a deeper institutional problem: why do entrepreneurs as talented as Musk get pulled into the dark gravity well of rent-seeking politics?
Musk’s career is a study in duality. On one hand, he is the archetype of the Schumpeterian entrepreneur: making life slightly better for millions of people through innovations in things like payment systems. On the other, he has repeatedly deployed his political acumen to extract subsidies from governments. Tesla’s early growth was underwritten by generous tax credits, SpaceX relies on NASA contracts, and his energy ventures have gorged on regressive green subsidies. Musk is certainly not unique in this regard—he merely illustrates with exceptional clarity the tragic misallocation of genius that occurs when institutions permit, or even incentivize, rent-seeking entrepreneurship. Imagine the gains to human welfare if all of Musk’s talents had been directed solely at solving engineering problems rather than navigating political patronage.
President Donald Trump meets with Conor McGregor and family in the Oval Office, March 17, 2025. (Official White House Photo by Molly Riley)
William Baumol was one of the most versatile and influential economists of the 20th century, whose career spanned over six decades and touched virtually every subfield of economics from labour markets to innovation theory to the economics of art. Baumol spent much of his academic life at Princeton and later at NYU. While he published extensively on the theory of the firm and macroeconomic policy, his most enduring contribution to entrepreneurship studies came in the form of a deceptively simple insight: that entrepreneurship is not inherently productive. In his seminal 1990 paper, Entrepreneurship: Productive, Unproductive, and Destructive, Baumol argued that while the supply of entrepreneurial talent may be more or less fixed across societies (only a certain proportion of people are born with the genes that make them good entrepreneurs), how this scarce resource is allocated is highly sensitive to the institutional environment. In societies with strong property rights, the rule of law, and competitive markets, entrepreneurs are more likely to engage in innovation and socially beneficial enterprise. In contrast, in institutional contexts that reward rent extraction (through bribery or lobbying at the royal court), the same entrepreneurial energy may be directed toward unproductive or even destructive ends. Baumol’s typology reframes the policy debate: the problem is not a shortage of entrepreneurship per se, but the set of incentives that determine where entrepreneurial effort is deployed. His work reminds us that the entrepreneur is not always the hero of capitalism; he or she is whatever the institutional context incentivises him or her to be.
William Baumol’s typology remains a commonly used lens through which to understand this phenomenon. In his formulation, entrepreneurship is neither inherently good nor bad—it is merely energy. Whether it is productive, unproductive, or destructive depends on the institutional context. In the Gilded Age, Thomas Edison got rich by spending long hours in his laboratory producing innovative products that genuinely made life better for ordinary people. That’s socially productive entrepreneurship. Other entrepreneurs of that same era, such as factory owners seeking tariff protection for their margins against foreign competition, got rich by hanging around in smoke-filled rooms in Washington trying to get the details of the schedule of tariffs altered. Today, too many entrepreneurial energies are expended on capturing regulators, designing market-thwarting rules, and lobbying for subsidies. The returns to such behaviour can be immense, and perversely, in some socio-political systems, more reliable than those from genuine innovation. Hence the tragedy: the institutional architecture, not the intrinsic morality of entrepreneurs, determines whether entrepreneurial energies go toward inventing better mouse-traps or rent-seeking (e.g., hanging around the court so you can ask the monarch to give you a monopoly on the salt trade or something).
Institutional theory seeks to explain the ultimate causes of economic growth not in terms of resources or geography, but through the quality and structure of a society’s institutions, the formal and informal rules that govern human interaction. The foundational figure in this tradition is Douglass North, who won the Nobel Prize in 1993 for his work demonstrating that well-defined property rights, enforceable contracts, and predictable legal systems are essential preconditions for sustained economic development. North’s key insight was that institutions shape the incentives that individuals and organizations face: when the institutions reward productive entrepreneurship, economies flourish.
More recently, Daron Acemoglu and his co-authors have built on and extended this framework, arguing that the deep determinants of prosperity lie in the presence of what they call “inclusive institutions”—those that create broad-based opportunities and constrain the arbitrary exercise of power. This work, especially Why Nations Fail, has powerfully influenced both academic and policy discourse, which is why Acemoglu got the Nobel Prize. His arguments, like North’s, are ultimately about incentives: inclusive institutions direct effort toward innovation and wealth creation, while extractive institutions channel energy into rent-seeking, repression, and elite entrenchment. Taken together, institutional theory provides a compelling answer to one of economics’ most profound questions: why some nations grow rich while others remain poor.
This brings us to the central insights of the paper I co-authored with Graham Brownlow, “Informal Institutions as Inhibitors of Rent-Seeking Entrepreneurship”. In this paper, which was published in a journal called Entrepreneurship Theory and Practice, we examined why the United States, despite having formal constitutional rules that ostensibly promote market competition, saw such variation over time in the degree to which entrepreneurs engaged in rent-seeking behaviour. One of our key findings was that the effectiveness of anti-rent-seeking provisions, such as the anti-aid clauses that were inserted into many state constitutions during the Jacksonian era, depended not merely on their formal wording, but on how judges interpreted them. Judicial scepticism towards governments using taxpayer funds to help specific firms had a chilling effect on collusion between politicians and entrepreneurs. State efforts to subsidize politically connected firms were routinely struck down. In that institutional climate, rent-seeking became a riskier and therefore less attractive strategy for an entrepreneur trying to get rich.
We argue that the shift in judicial philosophy after 1915 rendered many of these formal constraints inert. Courts began to defer to legislative decisions, even when these transparently served private interests rather than public welfare. The result was an institutional environment increasingly friendly to rent-seeking. Even though the constitutional text remained constant, its meaning had been transformed by a shift in the judges’ thinking. This insight underscores the fragility of formal constraints.
The key policy implication of our research is that the ultimate effectiveness of institutions in curbing rent-seeking entrepreneurship hinges on the moral and intellectual commitments of judges. Constitutions can inhibit rent-seeking only if the judiciary interprets them in that spirit. Judicial philosophies, which are shaped by public opinion, legal culture, and broader intellectual currents, determine whether the constitutional order channels entrepreneurial energy into productive or parasitic endeavours. The tragedy is that when these interpretive norms erode, talented individuals like Musk are rationally induced to play the political game rather than innovate in the marketplace.
Still, we should not end on a pessimistic note. Recent judicial decisions suggest that at least some parts of the American judiciary are reawakening to the dangers of rent-seeking entrepreneurship. If this trend continues, it may yet be possible to restore a climate in which entrepreneurship is once again skewed toward the productive. The battle between Musk and Trump is a mere symptom. The deeper issue is institutional. If we want more Musks designing rockets and fewer Musks manoeuvring for subsidies, we need courts that stand firmly against rent-seeking.
One of the privileges of working in a business school is that you occasionally get to meet/have a meal with really remarkable and impressive business leaders. One of the advantages of being a business historian is that you get to learn about similarly remarkable business leaders who lived long ago. Let me tell you about Robert Wood Johnson II, a business visionary, patriot, and, in my view, war hero. If you’re trying to understand how ideational commitments can persist inside firms for generations, Robert Wood Johnson II is a useful figure to consider. He didn’t found Johnson & Johnson, but he arguably left a deeper and more enduring imprint on the organization than its original founders. Thinking about Johnson’s impact is useful because management academics who use the theory of organizational imprinting focus on firm founders—the leaders who control companies during their infancies. The underlying idea is that for an individual to leave a lasting impact on a company, they have to be there at the earliest stage. (Check out this new paper in ASQ that applies and develops imprinting theory). I suppose that this management theory is analogous to the personality theory that says that high quality parenting matters the most when the child is under five—the child’s destiny is basically set by the lessons they absorb from their parents before they go to school. Well, the case of Johnson is a bit of a problem for imprinting theory as it currently exists because he wasn’t a firm founder—he sort of inherited a rather unremarkable company and turned it into something very distinctive whose distinctive features endure to this day. Johnson became president of J&J in 1932 and led it until the early 1960s. Over those three decades, he embedded a managerial philosophy that combined decentralised authority with superior organisational performance by the main metrics. What Johnson did was a sort of second founding—he wasn’t the George Washington of this company, he was its Abraham Lincoln.
Johnson (1883 to 1968) spent nearly his entire adult life inside Johnson & Johnson. He joined the company as a young man, having grown up in the orbit of its founders, and was steeped early in both its operational details and its moral rhetoric. By the time he became president, he was already well-versed in the business and carried the authority of both experience and lineage. He was also a reservist in the U.S. Army before joining the federal wartime administration, where he took on a prominent role overseeing small manufacturers who were contributing to the war effort and protecting these SMEs from an unholy alliance of bureaucrats, Big Business, and Big Government. His career blended private enterprise with public service, and that dual exposure shaped many of his views on decentralisation, corporate legitimacy, and leadership.
Johnson took control during the Great Depression. He didn’t cut jobs. In fact, in 1933 he gave everyone a 5% raise and opened a new factory. While other firms were retrenching, he was arguing publicly for higher wages, shorter hours, and corporate social responsibility, voicing ideas similar to those articulated by Herbert Hoover. “It is in the interest of modern industry,” he wrote, “that service to customers comes first; service to its employees and management second, and service to its stockholders last.” He outlined these views in a short pamphlet he distributed to other industrialists, called Try Reality.
What made Johnson interesting was not just that he held these views about putting the stockholders last, which were pretty widespread among American executives in the era of total war, but that he institutionalised them. His most famous contribution was the writing of Our Credo in 1943. This document lays out J&J’s responsibilities in a strict moral order: customers first, then employees, then communities, then shareholders. He had the text literally carved in stone in the company headquarters. It remains there today. Unlike many corporate value statements, the Credo wasn’t a marketing exercise for the website. J&J leaders were expected to take it seriously. In 1982, for example, the Credo was explicitly invoked by CEO James Burke as the basis for recalling Tylenol nationwide in response to the poisoning crisis.
The middle years of the twentieth century were the peak of the high modernist belief in centralisation. Sadly, even the Western democracies were affected by this worldwide trend: in federal nations, central governments sucked power in from state/provincial governments, and something analogous happened in many companies, notwithstanding the popularity in other companies of the M-Form discussed by Al Chandler. Johnson believed in decentralisation, not just as an efficiency measure but as a philosophy of life. He turned J&J into what he called a “family of companies”: a group of semi-autonomous business units, each with its own leadership, decision rights, and accountability. He thought mistakes were inevitable, and that decentralised firms made them smaller and less systemic. He believed in developing leaders by giving them room to run their own operations. He once said that in a centralised firm, “one big mistake can cripple the whole organization.”
The model wasn’t imposed arbitrarily. Johnson had seen decentralisation work in practice. His early experiments with overseas subsidiaries gave local managers a high degree of autonomy, and he was impressed with the results. During WWII, when he ran the Smaller War Plants Corporation, he also saw that distributed production by smaller firms could outperform centralised control. These experiences reinforced his conviction that responsiveness, not control, was the key to both efficiency and resilience.
While the mid-century Johnson & Johnson is sometimes cited as an exemplar of the M-form structure, its approach between the 1930s and the 1960s diverged in important respects from the canonical model articulated by Alfred Chandler. Structurally, the firm did resemble the archetypical M-form: it was organised into semi-autonomous operating units, each focused on distinct product categories or geographic markets. In a classic M-form, the divisions have responsibility for their own manufacturing and marketing, but not strategic decisions, which are made at the headquarters. From a purely formal standpoint, this places J&J within the broad tent of M-form adopters.
However, at J&J strategic decisions were also devolved to the divisions. Functionally—and philosophically—it operated according to a radically different logic than that of the classic M-form. Whereas Chandler’s M-form emphasized strategic centralization at headquarters, J&J espoused what might be called a doctrine of principled decentralization. Influenced by the managerial philosophy of Robert Wood Johnson, the company saw local autonomy not just as an efficiency mechanism but as a moral imperative. Decision rights were not merely delegated—they were embedded in the very design of the enterprise. The corporate center was relatively passive, providing capital and articulating broad values, but largely refraining from strategic coordination or intervention.
In this sense, J&J’s model was less a hierarchical allocator and more a federated network of entrepreneurial units. Where General Motors centralized planning functions and optimized across business lines, J&J tolerated—indeed, cultivated—a more pluralistic and loosely coupled structure. It is perhaps more accurate to think of J&J in this era not as a typical M-form firm but as an early prototype of post-M-form decentralization: structurally divisional, but governed through norms, local accountability, and minimal central orchestration.
Johnson viewed corporate power as a form of stewardship. He believed businesses needed to earn their legitimacy by serving others. He was also, in some sense, a Cold War liberal: convinced that capitalism needed to reform itself in order to survive the ideological battles of the mid-20th century. Corporate paternalism, in his hands, was not just a managerial strategy; it was a political response to the threat of social unrest and citizens drifting into supporting the totalitarian ideologies at the two ends of the political spectrum.
What makes Johnson’s case instructive is not just what he did, but how successfully it endured. By the time he retired in the 1960s, J&J was a global company, operating with over 100 semi-autonomous subsidiaries. That structure remains intact today. The Credo is still cited in internal deliberations. And the basic logic of his stakeholder-first philosophy continues to shape the firm’s governance. It is difficult to find many examples where the imprint of a mid-century CEO has lasted this long, especially when it runs so visibly counter to the shareholder primacy model that dominated American business thought in the decades after his retirement.
Robert Wood Johnson II was not flawless. He could be paternalistic, and his approach to moral authority might grate in a different institutional context. But in an era dominated by central planners and consolidators, he was making a different bet: that legitimacy and longevity come not from tight control, but from principled decentralisation. And eight decades on, his wager still looks pretty sound.
In some respects, Johnson and his ideas were totally representative of his generation. He was influenced by the ideas of Adolf Berle and Gardiner Means, whose 1932 book The Modern Corporation and Private Property helped shift elite thinking away from the idea that corporations should be run primarily for the benefit of shareholders. Berle and Means argued that large corporations had come to dominate American economic life and should be seen as social institutions. Their work was widely read and debated among business and policy elites in the interwar years, and Johnson’s rejection of shareholder primacy placed him firmly within that intellectual milieu.
But in other ways, Johnson was quite unrepresentative of his time. In an era increasingly captivated by high modernist visions of centralised planning and control (whether in government agencies, conglomerates, or business schools) he was actively decentralising power inside his firm. While many of his contemporaries were building central headquarters stuffed with analysts and long-range planners intent on micromanaging distant managers, Johnson was pushing authority outwards. He did this not out of ideological contrarianism, but because he believed decentralisation made firms more responsive, more resilient, and more moral.
This commitment to decentralisation is especially striking given Johnson’s own background in the U.S. military. He served in a leadership role during World War II and was socialised in a system that, at that time, embraced centralised command and rigid hierarchy. The modern doctrine of mission command, which means delegating authority and empowering subordinates, was decades away from being adopted by the U.S. armed forces. In fact, during the Second World War the U.S. military practiced the polar opposite of mission command, as their German opponents noted.
The concept of mission command has its origins in 19th-century Prussia, where military thinkers developed the idea of Auftragstaktik. This doctrine was built around giving subordinates clear objectives but leaving the means of execution to their discretion. The idea was to encourage flexibility, speed, and initiative at the tactical level while still maintaining coherence at the strategic level. Although a few dissenting voices within the U.S. military had long admired it, the doctrine was not embraced until the soul-searching of the post-Vietnam period. The failures of top-down, Robert McNamara-style military planning in Southeast Asia, combined with a changing operational environment and the professionalisation of the officer corps, pushed U.S. military doctrine toward greater decentralisation. By the late 20th century, mission command had become a core principle in U.S. Army leadership manuals—a sharp departure from the centralised command culture that had prevailed during Johnson’s own time in uniform.
That Johnson, a mid-20th century military man, ended up building one of the most decentralised corporate structures in postwar America is testament to how deeply he believed in pushing decision-making down to those closest to the action.