R Street urges federal research strategy with focus on open-source AI, standards and metrics
The R Street Institute, a free market-oriented policy think tank, calls for supporting open-source artificial intelligence development, and for addressing the attendant risks, in its package of recommendations for the Trump administration’s national AI research strategy. The group also urges efforts to help secure agentic AI and to develop voluntary auditing standards and AI metrics.
“AI security is not a constraint on innovation — it is a prerequisite for ensuring that America’s AI advancements are scalable and resilient. As the United States crafts its 2025 National AI R&D Strategic Plan, it is imperative to recognize that securing evolving AI systems, agentic capabilities, and open-source ecosystems is foundational work that will sustain our momentum, competitiveness, and global leadership,” R Street says in its comments to the administration.
The group says the federal research strategy should focus on:
- Strengthening AI security and national security by advancing a scientific understanding of AI’s capabilities and emerging risks;
- Leading and securing open-source AI development to promote resilience, transparency, and competitiveness against adversarial threats; and
- Shaping the contours of agentic AI and the future of human-machine collaboration, including expanded efforts in explainability, compute integrity, and accountability mechanisms.
The National Science Foundation, on behalf of the White House Office of Science and Technology Policy, in April issued a request for public input on revising the Biden administration’s AI research strategy, with a May 29 comment deadline.
The Business Software Alliance’s submission calls for expanding the National AI Research Resource, continued support for the AI Safety Institute and a focus on science, standards and global coordination.
R Street in its comments says, “If secured, supported, and guided through strategic AI R&D, open-source AI ecosystems can serve as both an engine of innovation and a pillar of our national security.”
It says, “The 2025 National AI R&D Strategic Plan should support comparative evaluation initiatives aimed at assessing vulnerabilities in open- and closed-source AI systems, focusing on emerging threats like model tampering in public repositories. These R&D efforts can inform robust security practices, ensuring open-source AI remains a driver of American innovation while proactively mitigating evolving security risks.”
Further, it says the plan “should facilitate public-private partnerships or initiatives focused on developing automated validation tools for open-source repositories, datasets, models, libraries, and packages.”
And it says the plan “should prioritize research that examines how malicious threat actors may seek to compromise open-source software and AI supply chains, with a focus on identifying emerging techniques, vulnerabilities, and attack vectors.”
R Street in an April report provided recommendations on ways to support secure development and deployment of open-source AI systems.
On a related point, R Street says the plan “should prioritize the development of reliable metrics to assess data security, model protection, and resilience against emerging threats and vulnerabilities. These AI metrics could also serve as a foundation for the development of more consistent AI audits across industry sectors based on their distinct sector-specific needs.”
On securing agentic AI, it says “cross-cutting risks are significant because they are unlikely to be fully addressed by private-sector innovation alone, thereby requiring dedicated federal R&D leadership and investment to advance security, accountability, and trust in agentic AI systems — especially as these capabilities are increasingly deployed in national security and critical infrastructure applications.”
The group calls for “foundational research that strengthens the security and resilience of agentic AI systems across their lifecycle. This includes investing in adversarial testing, agent-specific risk modeling, and resilience evaluations focused on architectural features like memory integrity, decision autonomy thresholds, and emergent behaviors.”
“Since AI agents are capable of independently learning and executing tasks,” it says, “many of their emerging risks lack clear ownership and liability, making federal R&D leadership essential to ensure ongoing efforts are aligned with national security priorities, shared openly, and used to inform cross-sector coordination and best practices.”
Among its recommendations, R Street suggests “research into persistent agent identifiers and dynamic logging systems capable of capturing the full lifecycle of agentic activity” and also calls for “creation of secure, open-access testbeds and sandbox environments where these systems can be safely deployed and evaluated at scale.”
R Street expresses support for “the thoughtful steps the Trump administration has taken to reinforce America’s AI leadership through both this RFI” and the March RFI on the upcoming AI action plan.
“These parallel processes reflect a deliberate, multistakeholder approach that fosters improved alignment between federal AI R&D priorities and the broader, shared strategic imperative of advancing America’s long-term leadership in technological innovation, national security, and economic competitiveness,” according to R Street.