The political economy of artificial intelligence
Consider the architecture of today’s AI systems. The models are data-hungry and compute-intensive, and both inputs are increasingly controlled by a handful of firms. Training a frontier model requires not just talent but access to thousands of specialized chips and proprietary datasets. Economies of scale, combined with network effects, create a feedback loop: The more a company knows about you, the better it can serve (and monetize) you, drawing in more users, more data, and more dominance. The result is a near-perfect lock-in effect, even if you believe you can easily move to the next chatbot. Talk to enterprise users building on large-model APIs, and they all lament high switching costs.
This isn’t simply a story of technological prowess. It’s one of political structure. As Adam Thierer of the R Street Institute has observed, incumbents in the AI space are not passive beneficiaries of regulation; they are often its authors. Faced with public anxiety over AI risks, lawmakers reach for rules. But in a vacuum of expertise, they turn to the very firms they aim to regulate. The result is a textbook case of regulatory capture: well-meaning guardrails that double as moats.