The artificial intelligence (AI) ecosystem is evolving rapidly, with many developers and subsectors contributing toward the next great technological revolution. While headlines tend to focus on AI announcements made by large, private tech companies, there are many other firms and organizations creating a broad range of algorithmic products and applications.

Open-source AI plays a major role in this story and is helping to further diversify the marketplace with even more innovation and competition. Unfortunately, open-source AI also faces resistance—not just from proprietary competitors, but also from many regulatory activists and policymakers.

This is unsurprising, as there has been tension between open and closed systems throughout the history of computing and digital technology. Today, however, open-source AI could face a perfect storm of cultural, political, and business opposition as new regulatory proposals threaten to curtail its enormous potential.

The Potential of Open-Source AI

The Open Source Initiative identifies 10 key properties of open-source software, including free distribution of transparent and modifiable source code. Open-source software relies on collaboration among diverse communities working together across the globe to constantly tweak and improve software. Open source powers a massive range of digital services on the market today.

By extension, open-source AI models rely on the same philosophy of collaborative and iterative application building. As these models proliferate, they will usher in many services uniquely tailored to the specific needs of consumers, companies, and the general public. Experts have argued that open-source technologies “are the bedrock for grassroots innovation in AI” and can help “promote a diverse AI ecosystem,” just as they have for other sectors.

The open-source AI market is growing rapidly and attracting considerable investment. According to a recent article in The Wall Street Journal, venture capital investment in open-source AI startups jumped from $900 million in 2022 to $2.9 billion last year. Leading open-source AI players include the Allen Institute, Cerebras, EleutherAI, Hugging Face, Mistral AI, Stability AI, Together AI, and Writer, but the list of open-source developers and applications is extraordinarily long and constantly growing.

Larger tech players are also launching open-source AI solutions. Elon Musk recently announced that his new xAI startup will open-source its Grok chatbot to compete with OpenAI’s ChatGPT. Meanwhile, Facebook’s parent company, Meta, went all-in on open-source AI with its massive 70-billion-parameter large language model (LLM), LLaMA 2, which launched last summer.

Meta’s LLaMA only held the title of largest open-source LLM for about two months, however. Last September, the United Arab Emirates’ Technology Innovation Institute launched its greatly expanded Falcon 180B open-source LLM, with roughly 2.5 times as many parameters as LLaMA 2. This government-supported initiative demonstrated that serious open-source AI competition is developing globally.

Even the Chinese government is now “bolstering the development of a vibrant open-source ecosystem,” perhaps in an attempt to counter the United States’ early lead in this area. Chinese startup 01.AI achieved a $1 billion valuation following its 2023 debut, leading Wired to declare, “This Chinese Startup Is Winning the Open Source AI Race.” DeepSeek AI, another Chinese developer of powerful open-source AI models, launched around the same time as 01.AI.

Open-source AI is poised to play a crucial role in how countries achieve competitive advantage in advanced computational capabilities, and it would be dangerous for the U.S. government to restrict the nation’s open-source capabilities while others advance their own. If the United States restricts open-source development domestically, it will just blossom elsewhere—undermining our nation’s technology base and security. As two leading software security experts have noted, “[T]here are simply too many researchers doing too many different things in too many different countries.”

Policy Pitfalls

The contours of “open” and “closed” systems are fluid, and society benefits from the broad array of constantly evolving systems along that spectrum. The development of more open systems—even when they are not perfectly open—is crucial to keep proprietary providers on their toes by infusing more competition and choice in digital systems.

This week, a coalition of academic researchers and civil society groups sent a joint letter to U.S. Secretary of Commerce Gina Raimondo encouraging her agency to exercise great caution when crafting policy for open AI systems. It highlighted the benefits of open-source AI technologies in advancing innovation, competition, civil rights, and safety and security. Signatories to the letter, which included the R Street Institute, are particularly worried about the potential for federal regulators to impose export controls or licensing schemes on open-source AI models. Restrictions on the sharing of open-source model “weights” (the numerical parameters of artificial neural networks that allow them to learn and make predictions) would significantly undermine the development of open systems. Such restrictions would also counter the broader goal of making algorithmic systems more transparent.
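To make the stakes concrete, the short sketch below illustrates what “sharing model weights” typically means in practice. It is a minimal example assuming the widely used Hugging Face transformers library (Hugging Face is mentioned above); the model identifier is illustrative, and any openly published checkpoint could be substituted.

```python
# Minimal sketch of working with openly shared model weights via the
# Hugging Face "transformers" library. The model ID is illustrative; any
# openly published checkpoint on the Hugging Face Hub could be substituted.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # an openly licensed model repository

# Downloading the repository retrieves both the configuration and the trained
# weights -- the numerical parameters at the center of the policy debate.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Once the weights are local, anyone can run, inspect, or fine-tune the model.
inputs = tokenizer("Open-source AI models can be", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Export controls or licensing rules on weight sharing would restrict exactly this kind of routine downloading, inspection, and modification by researchers and developers.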

The National Telecommunications and Information Administration (NTIA), part of the U.S. Department of Commerce, is considering some of these issues in a proceeding on “Openness in AI.” This proceeding comes at a time when some critics of open systems call them “uniquely dangerous” and raise fears about how they might be used to build weapons or fuel misinformation efforts. A recent U.S. Department of State-funded study even floated the idea of jail time for open-source coders who share their models.

While open systems could exacerbate some risks, they could also play a critical role in addressing them in a faster and more collaborative fashion. And as R Street pointed out in comments filed with the NTIA last week:  

[M]any of the supposed risks of open AI systems are shared with many other “open” information mediums and technologies. Descriptions about how to build dangerous weapons have appeared in books, magazines, blog posts, and online videos. Likewise, “misinformation,” however defined, is a problem that goes back to the rise of the printing press.

In this filing, as well as in an earlier one, R Street encouraged policymakers to use more balanced governance strategies to address the risks that powerful AI systems pose. Importantly, AI regulation should focus primarily on problematic actors and applications—not on the underlying process by which algorithmic systems operate. While open-source systems should be allowed to develop without arbitrary limitations on their capabilities, regulators can continue to use a mix of multi-stakeholder processes, iterative standards, and existing regulatory remedies to address specific problems.

The opposite approach of imposing licensing requirements and other heavy-handed compliance mandates on these systems would decimate open-source development and represent a return to the so-called “crypto wars” of the mid-1990s, when some government officials wanted powerful computation and encryption controlled by law as dangerous “munitions.” Luckily, defenders of open computing got the government to back down, paving the way for the open-source systems and encryption technologies that make digital systems more diverse and secure today.

The Most Important Safety Concern of All

If policymakers make the opposite choice on open-source AI today, the ramifications will be profoundly deleterious—and not just in terms of lost innovation and options. R Street testimony before a House Oversight Committee hearing last week noted that policymakers must appreciate how advanced algorithmic systems play a crucial role in strengthening our overall technology base, thus promoting both global competitiveness and geopolitical security: “It is essential that we strike the right policy balance as we face serious competition from China and other nations who are looking to counter America’s early lead in computational systems and data-driven digital technologies.”

How the U.S. government treats open-source AI will determine whether our nation takes this challenge seriously.