The R Street Institute is offering policymakers an alternative to “all-or-nothing” approaches in the enduring open- versus closed-source artificial intelligence debate, calling for federal guidelines on best practices for deploying open-source AI models, government-private sector partnerships to develop validation methods and “risk-tiered” liability protection.

“Rather, the path forward lies in crafting flexible solutions that mitigate the challenges and potential risks of open-source AI while unlocking its capacity to accelerate innovation at unprecedented speed and scale,” according to an analysis by Haiman Wong, resident fellow at the free-market R Street Institute.

In the analysis, Wong offers five policy steps to promote “secure development and deployment” of open-source AI:

  1. Establish clear, voluntary, and risk-based federal guidelines outlining best practices for securely deploying open-source AI.
  2. Foster public-private partnerships dedicated to rigorous validation methods for AI models.
  3. Implement risk-tiered liability shields to encourage innovation, especially for lower-risk open-source AI projects.
  4. Invest in the development and integration of emerging technological solutions — such as embedded provenance tracking, AI-driven anomaly detection, and adaptive guardrails — to advance open-source AI security.
  5. Promote industry-led best practices and licensing standards, such as copyleft agreements, to ensure community-driven accountability and sustained innovation.

Wong writes, “Collectively, these recommendations chart a balanced path toward securing open-source innovation — not only as an immediate national security imperative, but as a strategic foundation for sustained U.S. leadership in emerging technological domains like AI agents and robotics.”

President Trump’s AI action plan addresses “Open-Source and Open-Weight AI” at length, saying decisions on “open versus closed” belong to the developers but that “the Federal government should create a supportive environment for open models.”

One of the action plan’s five recommendations on open-source says, “Ensure access to large-scale computing power for startups and academics by improving the financial market for compute.”

Stanford University’s Institute for Human-Centered AI said the plan “offers the strongest federal endorsement to date of open-source and open-weight AI models.”

However, the Stanford analysis also faulted the plan for failing to adequately address major risks associated with the technology.

“The plan’s emphasis on technical evaluations and other information gathering mechanisms reflects an important move toward evidence-based policymaking,” according to the researchers. “However, it leaves key risks underaddressed.”

In the R Street analysis, Wong says, “While proprietary models still dominate U.S. AI development, private-sector initiatives like Meta’s Llama models and OpenAI’s highly anticipated open-source model have helped open-source AI gain meaningful traction. Policymakers can build on this momentum by positioning open-source AI as a national security priority.”

She writes, “If left unchecked and unchallenged, China’s open-source AI initiatives could erode U.S. technological leadership and threaten national security.”