Bootleggers, Baptists, and AI
It is no secret that lawmakers are more interested than ever in artificial intelligence (AI), with over 1,500 AI-related bills introduced in state legislatures in the past year. Much of this new legislation is motivated by, or aims to address, issues surrounding “AI Safety,” that is, the potential risks posed to human welfare by AI development. Voiced by a diverse community of safety advocates, such concerns range from the risk of economic dislocation due to AI automation to the potential for human extinction at the hands of super-intelligent AI systems. While the specific policy prescriptions proposed vary, many within the safety community have converged on the need for government intervention to ensure that AI research and development are conducted in a manner aligned with humanity’s well-being. However, efforts to regulate AI risk carry a significant risk of their own: regulatory capture, in which industry steers regulation to its own advantage.
Many safety advocates are undoubtedly sincere, and it would be unfair to accuse them of directly seeking to capture regulators. Rather, the political economy of regulation suggests a more subtle dynamic. As the “Bootlegger and Baptist” theory of regulation suggests, capture can emerge from coalitions of rent-seeking industries and public interest groups that both seek greater regulation for different reasons. Attempts to regulate AI safety and risk seem particularly susceptible to this dynamic.
This theory shows how such coalitions can shape regulatory agendas and policy design in ways that, while publicly justified on ethical grounds or concerns around societal impact, disproportionately advantage incumbent firms. The result is the same as if industry had directly lobbied for favorable laws, with the same concomitant costs to consumers, growth, and innovation. Thus, the Bootlegger and Baptist theory can provide insights into ongoing debates surrounding AI regulation and its connection to safety concerns.
Regulatory Capture: A Brief Review
Regulatory capture refers to the process whereby regulated industries co-opt, or exert significant influence over, the direction and content of the regulations produced by the agencies that oversee them. As George Stigler showed, capture arises when small, organized pressure groups seek beneficial laws or regulations by strategically lobbying legislators, particularly those serving on committees with jurisdiction over their industry. Bruce Yandle’s “Bootlegger and Baptist” framework extends Stigler’s model by highlighting how traditional rent-seeking industries (“Bootleggers”) partner with advocacy or public interest groups (“Baptists”) with whom they align on some regulatory issue.
Such alliances need not be cynical; rather, they can arise from members of a particular industry sharing advocates’ concerns or wanting to ameliorate the perceived downsides of their trade. Nevertheless, the results are the same. Regulations emerging from these coalitions tend to impose compliance costs that fall disproportionately on smaller firms or potential entrants. By “raising their rivals’ costs” in this way, incumbents can obtain greater pricing power or expand market share.
“Baptist” groups, through repeated interactions with lawmakers and regulators, can reshape officials’ understanding of a particular issue area or market niche by legitimizing certain problem framings and marginalizing alternative perspectives. This dynamic can be particularly pronounced in novel policy areas, such as emerging technologies, where the public’s and lawmakers’ baseline understanding is usually limited, if not nonexistent. As a result, the policy agenda becomes disproportionately biased toward addressing potential harms or imposing controls, creating opportunities for industry lobbyists and experts to bend the resulting rules and laws toward their interests.
AI and Regulatory Capture
A recent study on the risk of regulatory capture in AI outlined several attributes of the industry that make it susceptible to capture, including the presence of several “technopanics” regarding the potential societal effects of AI and the information asymmetries inherent in the development and deployment of AI systems. There is also the question of why AI developers want to capture their regulators.
AI Technopanics
AI stands poised to be the next transformative, “general-purpose technology,” whose versatility virtually guarantees that its effects will not be limited to one particular economic sector or part of society. As such, attention to AI’s societal impact or “AI risk” runs the gamut from grounded concerns over economic dislocation and adjustment to the more theoretical risk of advanced AI systems turning on humanity. Indeed, a recent Pew poll found that 57 percent of Americans believe the risks of AI to society are high, whereas only 25 percent see the benefits as high. A diverse ecosystem of “AI safety advocates” stokes these concerns to advance harm mitigation policies and ensure that AI development is aligned with human values. These advocates represent the “Baptists” in the Bootlegger and Baptist theory of regulatory capture. While many risks have been articulated, it is worth noting that one of the most prominent––structural unemployment––may be overstated.
As with previous technological innovations, the advent of AI will transform how goods and services are produced, likely resulting in certain kinds of work becoming automated. A recent working paper from Stanford University, aptly titled “Canaries in the Coal Mine?” shows that entry-level workers in AI-exposed occupations saw a 16 percent decline in employment, whereas employment for more senior employees remained stable. Yet, evidence of AI labor displacement has not been uniform.
Several studies published since the release of the Stanford working paper have found that AI has more modest or no effects on current employment trends. Nevertheless, it seems inevitable that, as AI systems improve and become more widely integrated into economic processes, some forms of work that can be performed more efficiently by AI will be automated. However, focusing just on the destructive side of creative destruction ignores the new tasks and industries AI will give rise to as entrepreneurs find new ways of combining labor and intelligent systems. As with previous technological revolutions, such a process will likely be broadly beneficial to human welfare.
Whether the benefits of innovation are forthcoming, however, depends critically on the rules and legislation that officials craft. Fears of technological unemployment are but one of many risks safety advocates and some industry leaders have articulated, which have also attracted the attention of state and federal policymakers. This is particularly troublesome because such risks can become dominant heuristics for lawmakers and voters navigating the still-nascent AI landscape, thereby limiting the range of potential regulatory options considered. At the state level, these fears have led legislators in several prominent states, including New York, California, and Illinois, to introduce strict, proscriptive bills aimed at addressing various perceived harms.
At the federal level, several lawmakers have introduced bills designed to create strong safety “stopgaps” in the development and marketing of AI systems, including granting governments the ability to seize the assets of companies believed to be developing super-intelligent AI. Others have advocated for nationwide pauses on the construction of data center infrastructure critical for the development of AI. Concerns over AI safety, and legislative initiatives premised on them, serve as the moral and political justification––the “Baptist” fodder––for the rent-seeking efforts of incumbent firms.
Why Capture?
The presence of widespread concerns about the potential societal effects of AI opens the door for regulatory capture; however, it does not explain why AI firms may wish to engage in it. At its core, capture, like all rent-seeking, is aimed at preserving or enhancing firm profitability. Part of the motivation is a desire among some firms within the AI “stack” to preempt future draconian regulation and legislation inspired in part by the ongoing moral panic surrounding AI. That is, capture occurs because the industry anticipates more costly regulation if it does not insert itself into the political process. Another, and likely more significant, driver of capture efforts is the current competitive landscape in the generative AI market, which threatens incumbent firm profitability.
Since the release of ChatGPT in 2022, startups like Anthropic and OpenAI have dominated the market for generative AI models, alongside several legacy technology firms such as Google, Microsoft, Meta, and Amazon. Despite the predominance of these firms, initial studies of the generative AI market indicate that it remains both dynamic and contestable. The introduction of DeepSeek-R1 in January 2025, for example, is just the latest development in the marketplace, and capital markets continue to show interest in funding new startups.
Moreover, new entrants have not lagged behind incumbents, with several models matching or even exceeding the capabilities of those offered by leading firms. Research has found that leadership at the technological frontier has alternated between five or six companies, with another ten close behind. This competitive dynamic extends across the broader AI supply chain. For consumers, this competition has been a boon: quality-adjusted prices for generative AI models have declined by approximately 80 percent since 2023. For AI developers, particularly big incumbent players, intensifying competitive pressures have necessitated continual innovation and improvement of their product offerings, lest they lose market share to rivals. This pressure is compounded by the added necessity of attracting and retaining the highly specialized talent needed to compete with the rest of the AI ecosystem.
As has been the case in other industries, firms facing intense competition may turn to the state to insulate themselves from creative destruction. By supporting the passage of onerous regulations, rent-seeking developers can impose costs on their competitors, particularly smaller firms, and forestall the development and introduction of innovative products on the market that would compete with their own. In effect, state power can be used to regulate the competition out of existence.
The informational asymmetries created by the complexity of AI systems further facilitate these strategies. Contemporary AI models are highly complex and require specialized technical knowledge to understand their operation. This complexity grants industry players an informational advantage over regulators and lawmakers, who must therefore rely upon industry expertise when writing laws and rules. Incumbents can use this advantage to shape regulations in their favor: for example, by defining legal categories of AI developers and users so that compliance costs fall disproportionately on rival firms, or by securing targeted subsidies or loan guarantees for themselves.
Big incumbent players enjoy the added advantage of having already borne the fixed costs of organizing themselves politically and thus accumulated political capital with lawmakers, an advantage not shared by many smaller tech firms. Moreover, the political process is, by necessity, less representative of the unorganized “latent” group of entrepreneurs and innovators who have yet to enter the market. All of these factors aid large rent-seeking incumbents seeking to raise their rivals’ costs via regulation and lawmaking. Because these Bootlegger efforts at capture are cloaked in appeals to Baptist concerns over AI risk and safety, the political cost of supporting them is lowered, reducing the risk of public outcry over what would otherwise appear to be self-interested behavior.
In sum, under current conditions, engaging in regulatory capture may be a rational strategy from the standpoint of incumbent firms. Given the risks associated with current regulatory efforts at the state and federal levels, a governance framework that embraces private ordering may be the best option for AI.
Harnessing the Market for AI Governance
Overlooked in many discussions of AI governance is the fact that public regulation tends to crowd out alternative private governance mechanisms. Private governance or order here refers to “the various forms of private enforcement, self-governance, self-regulation, or informal mechanisms that private individuals, companies, or clubs (as opposed to government) use to create order, facilitate exchange, and protect property rights.” Given the comparative costs associated with the emerging state regulatory patchwork, to say nothing of recent federal initiatives, such mechanisms may be the best means by which to realize the economic benefits offered by AI while mitigating its downside effects. Such an approach to AI governance enjoys a number of advantages over public options.
First, private ordering allows for a better alignment between rules and knowledge. The current proliferation of different AI models, serving a variety of purposes and niches, combined with their highly technical nature, makes a single federal regulatory framework (or fifty different frameworks at the state level) liable to be misaligned with the particularities of each model type and its uses.
Private ordering allows for the rules governing the development, use, and risks of AI to be tailored to the specific qualities of different model types, capitalizing on the fact that developers and user communities possess a better understanding of how these specific applications are best used. Moreover, by allowing for multiple, competing governance arrangements to emerge, private solutions also reduce the risk of regulatory capture. Likewise, a private ordering approach ensures that AI governance remains flexible and adaptive, reflecting the “time and place” contingencies associated with the development of AI and the novel applications and externalities that will arise as a result.
Second, private solutions provide for a better alignment of the incentives of rule makers. By forcing competition among multiple purveyors of governance, private ordering creates incentives that tend toward the production of the optimal quantity of rules and enforcement. A for-profit provider of AI governance that fails to design and enforce rules that provide users with credible assurances of model safety and quality would, over time, face penalties in the form of declining credibility and economic losses. Users and developers would most likely pivot toward governance providers better able to generate trustworthy signals of model performance and risks. Insurance companies and credit-rating agencies already provide such services across a variety of domains. Thus, private ordering creates incentives that tend toward the efficient co-production of rules over time.
Self-governance is already at work in the AI industry. As others have pointed out, several organizations, such as the Association for the Advancement of Artificial Intelligence or the Partnership on AI, have emerged as voluntary standard-setting bodies, while many insurance companies have started to offer or develop products specifically tailored to AI risks. Nevertheless, despite these promising developments, the continued encroachment of state and federal public regulation threatens to undermine private governance mechanisms before they can be fully scaled.
Conclusion
Recent concerns over the risks that the development and diffusion of AI may pose to humanity have ignited a flurry of lawmaking at both the federal and state levels of government. Regardless of the actual magnitude of these risks, such legislation creates opportunities for “Bootlegger and Baptist” style regulatory capture, as rent-seeking industry players partner with public interest AI safety advocates to raise rivals’ costs, thereby granting themselves more market power. The incentive to pursue such strategies may be particularly strong given the current competitive pressures in the AI marketplace. Ultimately, the risks of capture, combined with the economic costs associated with current legislative initiatives, suggest that an alternative approach to AI governance––private ordering––may be preferable. By harnessing market-based mechanisms, such an approach can address AI risks effectively while preserving innovation, competition, and the broader dynamism of the AI ecosystem.