In a new policy paper, the R Street Institute argues the Cybersecurity and Infrastructure Security Agency can help support secure deployment of “agentic” artificial intelligence by building out information-sharing channels and providing security tools to mitigate potential cybersecurity risks.

“In cybersecurity, AI agents are already proving to be valuable copilots to human analysts — enhancing threat detection, expediting incident response, and supporting overstretched cyber teams,” the May 29 paper says. “Yet with greater power across the agentic infrastructure stack — spanning perception, reasoning, action, and memory — comes greater responsibility to ensure that agents are secure, explainable, and reliable.”

The paper is written by R Street resident fellow Haiman Wong and Tiffany Saade, a graduate student at Stanford University who contributed to the paper as a volunteer researcher.

“Given their potential to serve as force multipliers for offensive, defensive, and adversarial cyber operations, AI agents require equally coordinated and dynamic strategies for timely, cross-sector information sharing about emerging agentic risks, observed unintended agentic behaviors, deployment challenges, and successful risk mitigation strategies,” the paper argues.

It says, “Specifically, the White House should direct federal agencies like the Cybersecurity and Infrastructure Security Agency to collaborate with sector-specific regulatory bodies and industry stakeholders to expand information-sharing forums and develop publicly available software tools and resources for testing and evaluating agentic security and performance.”

“These efforts should emphasize use-case-specific transparency, such as anonymized incident reports and adversarial testing results, to accelerate collective learning and cyber preparedness,” according to Wong and Saade.

The paper also highlights a need to promote cyber best practices as agentic AI is adopted across sectors.

The paper says, “Cybersecurity fundamentals remain essential, but they must now extend into each layer of the agentic infrastructure stack.”

“Core cyber best practices, such as robust identity and access management, secure API usage, and zero-trust architectures, should be implemented when designing new AI agents or adapting existing agents for customized applications,” according to the paper. “These measures can help reduce the risk of cascading failures across the agent’s workflow and maintain system integrity. As agents operate with increasing autonomy, maintaining strong cyber hygiene best practices remains the first line of defense.”

The paper also speaks to the role of innovation in strengthening the “cybersecurity posture of AI agents across their full lifecycle.” Wong and Saade argue Congress should invest in research and development at the intersection of agentic AI and cybersecurity.

The paper adds, “While private companies naturally have strong incentives to secure their own products and services, many agentic risks — such as model hijacking, memory poisoning, and emergent multi-agent behavior — can cut across proprietary systems and lack clearly defined ownership or liability.”

Workforce considerations

The paper details a role for the National Institute of Standards and Technology and the Department of Labor to play in developing “sector-specific guidelines that support secure, transparent, and human-centered agentic deployments.”

“Rather than prescribing a rigid, one-size-fits-all mandate, these guidelines should encourage organizations — including AI laboratories, private-sector companies, and universities — to define tailored human–agent interaction frameworks,” the paper argues.

Frameworks should specifically address “when agents may be deployed, under what conditions they may act autonomously, whether they are permitted to learn independently, when human oversight is required, how responsibility is assigned in the event of failures, and what protocols exist for detecting, escalating, and correcting errors,” the paper says, to “support — not replace — human decision-making and talent.”

Guidelines should also be aimed at promoting “organizational readiness for human–agent collaboration by offering recommendations for redesigning jobs and reskilling or upskilling current employees,” the paper says.