Endgames for export controls
Export controls on AI chips could lead to enduring dominance.
In a previous post on the Substrate, I argued for aggressively limiting AI chip exports to China. But that post took it as a given that the US should use export controls to slow China’s AI progress. Some people are understandably skeptical: Won’t China catch up in chips, and then AI, eventually? Aren’t the export controls just accelerating China’s progress on chips1 and worsening US-China relations in the meantime?
This hypothetical critic might acknowledge that the controls give the US a temporary advantage in AI, but ask: What’s the endgame? Will this give any lasting advantage?
In this post, I sketch at least one possible answer. Basically: AI appears to have several feedback loops that favor incumbents. These feedback loops may be so strong, and the impact of AI so significant, that China’s usual strategy of protectionism fails.
Why AI is different
To give the critic their due, many past US export controls have been ineffective, because other countries have simply developed supply chains that circumvent the US, and US industry has suffered. Export controls on satellite technology are perhaps the most infamous example. In 1998, after investigations revealed that US satellite manufacturers had potentially helped improve China’s ballistic missiles through launch failure analyses, Congress responded by moving commercial satellites from Commerce Department jurisdiction to the much more restrictive International Traffic in Arms Regulations (ITAR) regime.2 As a result, European competitors began marketing “ITAR-free” satellites with zero US-origin components specifically to capture the business that US companies could no longer serve.3 The controls limited some technology transfers, but the overall effect was to cede market share while China developed its own satellite capabilities through non-US suppliers. The underlying technology was not concentrated enough in US hands for the controls to work.
The export controls on chips and semiconductor manufacturing equipment have so far been much more successful, because semiconductor manufacturing is extremely concentrated and difficult to enter. This concentration is mainly driven by two factors: the industry is extremely capital-intensive, and it requires highly specialized tacit knowledge across many sectors and research fields.
But can the chip controls translate into an enduring lead in AI? There is a plausible case that they can. AI appears likely to combine (1) dynamics similar to those in semiconductors, (2) network effects, as seen in big tech platforms, and (3) AI’s own unique feedback loops, sometimes called recursive self-improvement. Together, these may create first-mover advantages strong enough to make an early lead from chip controls nearly unassailable.
Like semiconductors, AI is capital-intensive. Grok 4 is estimated to have cost nearly half a billion dollars to train. But training costs are only part of the picture. The largest US cloud and AI companies—Microsoft, Google, Meta, Amazon, and Oracle—are projected to spend a combined $600-700 billion in capital expenditure in 2026, the majority of it on AI infrastructure.4 For comparison, the entire global semiconductor manufacturing industry invests roughly $160 billion per year in capital expenditure,5 meaning five US tech companies plan to spend roughly three times as much on AI infrastructure alone, and about four times as much in total. Meanwhile, China’s total AI infrastructure investment—including both corporate and government spending—is estimated at roughly $80-100 billion per year,6 less than one-sixth of US hyperscaler capex. These gaps are difficult to close. If building frontier AI continues to require this kind of investment, the number of serious competitors remains small—and any country cut off from the most advanced chips will find it even harder to stay in the race.
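For readers who want to check the ratios, here is a back-of-envelope sketch using the midpoints of the ranges cited in the footnotes. The midpoint figures and the ~75% AI share are estimates from those sources, not precise values:

```python
# Back-of-envelope check of the investment ratios discussed above.
# All figures are rough estimates in billions of USD per year,
# taken from the sources in the footnotes (midpoints where a range is given).

hyperscaler_capex = 650   # midpoint of $600-700B projected 2026 capex
ai_share = 0.75           # ~75% of that capex tied to AI infrastructure
semi_capex = 160          # global semiconductor manufacturer capex
china_ai_capex = 90       # midpoint of $80-100B Chinese AI investment

ai_infra_spend = hyperscaler_capex * ai_share

print(f"US AI infrastructure vs. global semi capex: {ai_infra_spend / semi_capex:.1f}x")
print(f"US total capex vs. global semi capex: {hyperscaler_capex / semi_capex:.1f}x")
print(f"China AI capex as share of US hyperscaler capex: {china_ai_capex / hyperscaler_capex:.0%}")
```

On these midpoint assumptions, US AI infrastructure spending alone is about three times global semiconductor capex, total hyperscaler capex about four times, and China’s AI investment under one-sixth of the US figure.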
AI companies also benefit from proprietary data flywheels. In a world where AI commoditizes engineering talent, what will remain scarce and valuable? One answer is high-quality, proprietary user interaction data. The companies with hundreds of millions of users generating billions of conversations are accumulating training signal that no newcomer can replicate from scratch. This data will be essential for understanding what users actually want, what real-world tasks look like, and where models fail in real-world deployment. And engineers’ tacit knowledge, which currently leaks between companies as employees change jobs, may become more controllable in a world where human employees generate only a small fraction of the relevant insights.
To be fair, much of the highest-quality human-origin data is collected by companies like Scale AI and smaller competitors, not sourced directly from users. And quasi-artisanal task-specific data and RL environments may prove more useful than raw user interaction data. So far, there is limited public evidence that raw user interaction data is exceptionally important. But this may change as flows of user data increase and other data sources become harder to scale.
Like existing big tech companies, AI incumbents may benefit from sticky human customers and from being a platform. ChatGPT still commands an outsized share of the consumer AI market, despite industry tastemakers having largely switched to Claude. Over the past year, thousands of users became so attached to OpenAI’s 4o model that when OpenAI tried to retire it, public demand pressured the company into bringing it back.7 Business-to-business AI may eventually become a ruthlessly efficient, low-margin business as AI agents handle both sides of transactions, but sticky, habit-driven humans may persist on the consumer side for a long time.
There may also be platform effects. If your AI becomes the “operating system” through which you interact with the world—managing your email, scheduling, finances, shopping, and work—then you want the AI with good integrations with other services, and service providers want to build integrations for AIs that have users. This is a classic multi-sided platform, which can lead to winner-takes-all dynamics. It is already the case that users’ AI choices are heavily influenced by which companies offer good integrations and tool ecosystems.
That said, stickiness and platform effects are the weakest of these mechanisms. It is unclear how sticky human preferences will be over the medium term: users might readily switch to a substantially better product, and ChatGPT’s dominance may reflect a fleeting first-mover advantage more than deep lock-in. Nor is it obvious that AI will benefit from network effects the way social media and other big tech products have. Open integration standards like the Model Context Protocol could erode the platform advantage: if any AI can connect to any service equally well, that advantage disappears. AI itself may also undermine this kind of lock-in, since an AI coding assistant can write new integrations on the fly, and a user’s “memories” and configurations are currently just text files that could easily be ported to a competitor. So while sticky preferences and platform effects could entrench incumbents, they are more speculative than the other mechanisms discussed here.
Finally, and perhaps most importantly, AI appears to benefit from a unique feedback loop where AI speeds up AI. Leading AI companies already use their own AI systems to accelerate their research and engineering. Both Anthropic and OpenAI are approaching a point where AI does nearly all the coding in their labs. OpenAI has stated a goal of fielding “automated AI research interns” running on “hundreds of thousands of GPUs” by September 2026, and a “true automated AI researcher” by 2028. Anthropic CEO Dario Amodei claimed recently that “[we] essentially have Claude designing the next version of Claude itself”.
If the best models and scaffolds are kept internal, this creates a compounding advantage for incumbents. If AI systems can perform every component task of AI research—finding optimizations, designing experiments, writing and debugging code—then the company with the best AI engineers (silicon ones, that is) will make faster progress, yielding even better AI engineers, and so on. According to one estimate, OpenAI already spends a majority of its compute on internal experiments rather than customer-facing inference, a sign of how much compute AI companies are willing to invest in internal R&D.8
This mechanism reinforces the point about capital intensity: If you can turn capital (compute) into R&D, companies with fewer or inferior chips will struggle to catch up.
There are important caveats here. Competitors can sometimes use a leading lab’s own AI products to partly catch up. Algorithmic insights leak between companies as researchers change jobs. And returns to additional compute may not be linear—research is not perfectly parallelizable, and marginal experiments may yield diminishing returns. But the overall direction of the feedback loop seems clear, even if its strength is uncertain.
On the other hand, if recursive self-improvement results in a rapid software intelligence explosion, the explosion may burn through available fuel quickly and plateau. If Chinese competitors can trigger their own explosion, the US lead may be dramatic but short-lived in calendar time. Nonetheless, market positions established during this brief period may last, for some of the other reasons discussed above.
A sufficiently strong lead may become permanent
China will almost certainly attempt a protectionist strategy, as it has done successfully with big tech: maintaining a walled domestic market, nurturing indigenous AI champions, and keeping American products out. This is the playbook that gave China Baidu instead of Google, WeChat instead of WhatsApp, and Alibaba instead of Amazon.
But AI may differ in a crucial respect. If the feedback loops described above are strong enough, the capability gap between US and Chinese AI systems will grow over time rather than shrink. And if AI becomes as economically transformative as many expect, the opportunity cost of relying on inferior domestic AI could be enormous. Chinese businesses using second-tier AI would be at a growing productivity disadvantage relative to international competitors using frontier US-developed systems. At some point, the economic cost of protectionism could exceed the political cost of opening the market. And once the market opens, the dynamics discussed above may make it practically impossible for domestic Chinese alternatives to ever catch up.
There is a rough historical parallel. In the mid-19th century, Japan had maintained a policy of near-total isolation (sakoku) for over two centuries. But when the technological and economic gap with the industrializing West grew large enough, Japan was essentially forced to open its markets and rapidly modernize. The pressure was not merely military but economic: the cost of falling further behind had become intolerable. Something analogous could happen with AI. If Chinese AI is still limited to helpful chatbots while US companies have largely automated white-collar work—and if that gap is widening—the CCP may face mounting pressure from its own businesses, citizens, and strategists to allow superior foreign AI systems into its market. (That said, the analogy cuts both ways: Japan’s forced opening was followed by remarkably rapid catch-up. The Meiji modernization transformed Japan from a feudal society to a major industrial and military power within a few decades. Forced market opening is not the same as permanent subordination.)
Lasting advantage may simply be a series of temporary advantages
At the start, I framed this as a question of whether export controls could create a lasting advantage. But they don’t need to do that to be worthwhile.
China will inevitably indigenize the modern semiconductor manufacturing stack. But that was always going to happen, with or without export controls, and pulling back the controls now will not persuade China to abandon indigenization.
The best way to obtain overall strategic advantage will likely be to keep getting ahead on the next thing, and the thing after that. The export controls are doing exactly that: letting the US dominate in AI and pull in massive revenues. Those revenues can be invested in whatever the next critical thing is, whether that’s humanoid robots or new approaches to manufacturing compute—perhaps using AI-enabled nanotech—or, more likely, something that I’ve entirely failed to think of.
Giving up a clear near-term advantage to preserve semiconductor manufacturing leadership decades from now requires putting far too much trust in your ability to hold on to that lead. Intel already lost the mandate of heaven. TSMC will lose it eventually. The strength of the US has always been that when one technological advantage is lost, its innovation ecosystem produces another to take its place.
AGI could be more than just another tech stack
So far, I’ve been talking about AI as just another big tech product. But if AI companies succeed in building genuinely general, and then superhuman, artificial intelligence, the analogy to previous technology competitions—search engines, social media, cloud computing—may radically understate the stakes.
If AI is more like a new factor of production, or even a new species, whoever leads will likely gain major advantages in scientific research, military capability, economic productivity, and the capacity to develop every other technology. The geopolitical consequences, for a world order already in flux, would be enormous.
Such a technology will also raise value-laden choices about how to design it and integrate it into society. These choices will likely be greatly influenced by who builds it, as we’re already seeing: It wasn’t obvious that AI assistants would have a standard “virtuous” personality, much less anything called a constitution, if they hadn’t been built by people from very particular subcultures. Even the Silicon Valley of twenty years ago might have taken a very different approach! And these choices may prove sticky once they shape user expectations and industry norms, or become codified in regulation.
The implications go beyond consumer-facing norms: whoever leads in AI will likely shape how autonomous systems are used in military and intelligence contexts, and will have outsized influence over emerging international AI norms. The US largely got to construct the nuclear taboo and the equilibrium of mutually assured destruction by virtue of being first to the technology. European powers set norms around chemical weapons in the first half of the 20th century, and those norms are still in place today.
Conversely, if the US loses its AI lead, the automation of software engineering may well undo the moats of US tech giants like Microsoft. It could also transform the tech ecosystem enough to unseat American incumbents in adjacent domains like search, browsers, and enterprise software, where they have held dominant positions for decades. The US benefits enormously from the world’s default digital infrastructure being American-built; an AI-driven reversal of that would have cascading consequences.
No one can predict exact technological trajectories. But across a broad range of scenarios, a temporary compute advantage seems likely to have long-lasting effects on AI development. If AI is a normal technology, it will likely have strong first-mover advantages. If AI is much more than a normal technology, it will be difficult to predict what the implications of leadership will be, but those implications will almost certainly be enormous. And regardless, always aiming to win the next thing is probably a good strategy for staying ahead, even if no single advantage lasts.
I don’t think this acceleration effect is actually very strong, as I briefly discussed in my previous Substack post.
The Strom Thurmond National Defense Authorization Act for Fiscal Year 1999 returned jurisdiction over commercial satellite exports from the Commerce Department to the State Department under ITAR, effective March 1999. This was prompted by investigations (culminating in the Cox Report) finding that launch failure analyses conducted by Loral and Hughes with Chinese engineers had provided information that could improve China’s ballistic missile reliability. See CSIS Aerospace Security, “The Myth of ‘ITAR-Free’”.
Bureau of Industry and Security, “Defense Industrial Base Assessment of the U.S. Space Industry” (2007). US satellite manufacturing revenue share fell from approximately 63% (1996-1998) to approximately 41% (2002-2005). The BIS also estimated that lost US satellite export sales averaged $588 million annually during 2003-2006.
Combined capital expenditure projections for Amazon, Alphabet, Microsoft, Meta, and Oracle. See Futurum Group, “AI Capex 2026: The $690B Infrastructure Sprint” (February 12, 2026); CNBC, “Tech AI spending approaches $700 billion in 2026” (February 2026). Approximately 75% of this spending is directly tied to AI infrastructure.
Semiconductor Intelligence, “Semiconductor CapEx Down in 2024, Up in 2025”, estimates total global semiconductor manufacturer capex at approximately $160 billion in 2025. SEMI forecasts global semiconductor equipment sales of $139 billion in 2026.
Estimates vary. Goldman Sachs (November 2025) projected $70 billion in data center investment from Chinese AI providers. SCMP (June 2025), citing Bank of America, estimated total Chinese AI capex including government spending could reach RMB 600-700 billion (~$84-98 billion) in 2025.
OpenAI initially attempted to retire GPT-4o in August 2025 when it launched GPT-5, but reversed the decision within days following significant backlash from paid subscribers. See TechCrunch, “The backlash over OpenAI’s decision to retire GPT-4o shows how dangerous AI companions can be” (February 6, 2026); OpenAI announcement.
Epoch AI, “Most of OpenAI’s 2024 compute went to experiments” (2025), estimates that the large majority of OpenAI’s 2024 compute budget went to research experiments rather than final training runs or customer-facing inference. Of an estimated ~$5 billion R&D compute budget, less than $1 billion went to final training runs of released models.

