The R Street Institute is warning that over-regulation could undermine the “vibrancy” of artificial intelligence development in the United States, a risk the pro-market group says would pose a greater danger to safety and national security than any other AI-related threat.

“A loss of competitive advantage in advanced computation through burdensome regulatory policies could have broader implications for U.S. geopolitics and national security,” R Street said in comments submitted March 20 to NTIA. “There is a symbiotic relationship between the strength of a nation’s technology base and its ability to address various threats to its security.”

NTIA on Feb. 26 published a request for comment under President Biden’s artificial intelligence executive order, seeking stakeholder views on risks, benefits and possible regulation of “dual-use foundation models for which the model weights are widely available.” The comment deadline is March 27.

R Street signed onto a separate filing by Mozilla, the Center for Democracy and Technology and other public interest groups, think tanks and researchers that also urged NTIA to recognize the benefits of “openness and transparency” in foundation AI models, while calling for caution in setting export controls.

In its own filing, R Street said, “We strongly recommend the agency ensure that open-source AI systems are allowed to continue to develop without arbitrary limitations on their capabilities. Instead, our focus should be on how to maximize their benefits while addressing risks in the most flexible fashion possible using iterative standards and multistakeholder processes. The agency already possesses the tools and methods needed to achieve that goal.”

The group pointed to “a potential contradiction that lies at the heart of the debate over open AI systems and the tension between this and previous NTIA proceedings,” in particular NTIA’s AI accountability inquiry, which garnered 1,500 public responses that will inform an upcoming report from the agency.

“In fact, the NTIA has been considering questions about how to make AI systems more transparent as part of its ‘AI Accountability Policy’ proceeding, which the agency launched last April,” according to R Street. “Ironically, with this latest Request for Comment, the NTIA raises the opposite concern: whether open source systems might actually be too transparent and widely available.”

R Street said, “What is perhaps overlooked is that we have a range of constantly expanding options along the ‘open vs. closed’ continuum of software and hardware systems, including new AI models.” The group pointed out that NTIA itself says “openness” and “wide availability” are terms “without clear definition or consensus” and that there are “gradients” of openness.

“This is correct,” R Street said, “but it is important to understand that no formula exists whereby policymakers can get things ‘just right’ when it comes to determining the optimal amount of openness or transparency of algorithmic systems. It would be a mistake for government to unnaturally tip the balance in either direction when the optimal amount of model openness or transparency is unclear.”

R Street said “the proper policy position for government toward open vs. closed systems should be one of technological agnosticism, and policymakers should not look to artificially tip the scales in either direction. With this proceeding, the agency must avoid doing that by imposing too great a burden on open AI systems.”

The group closed its filing by saying, “Finally, history offers us some lessons in terms of the need for policy humility. There were many fears in the late 1990s about the rise of open-source systems, and for a time, the U.S. government also treated powerful computation and encryption as dangerous ‘munitions’ that should be subjected to export controls.”

“Luckily,” R Street said, “this moment passed and citizens today benefit greatly from open-source systems and encryption technologies. We need to exercise similar forbearance toward modern digital systems, especially open-source AI.”