Report warns of agentic AI cyber risks
One of the biggest and most exciting use cases for artificial intelligence in state and local government is so-called agentic AI, where the technology becomes an agent to help residents.
The agents can handle routine inquiries, help with straightforward cases and applications and even suggest other benefits or programs that an applicant might be eligible for. The theory is that it can then free up employees to focus on more complex tasks and applications that still require the human touch, while making applying for benefits and other government programs much less daunting.
But a new report from the R Street Institute think tank warned of the cybersecurity risks of agentic AI, even as it acknowledged the opportunities associated with the technology, including continuous attack surface monitoring, real-time threat detection and incident response and the ability to support the cybersecurity workforce.
“The rise of AI agents presents a critical window of opportunity to take a closer look — not only at how AI agents are being developed — but also at how they can be secured and governed,” the report said. “As agentic systems begin streamlining business operations, problem-solving, and human–machine collaboration, the implications extend far beyond technological innovation.”
Researchers identified four layers of agentic AI infrastructure that could pose cybersecurity risks. The agent’s perception layer, where it observes a given environment through cameras, sensors and data, is the first cyber threat, as that data could be compromised. Data poisoning is “one of the most prominent security risks” in this layer of agentic AI, as it tampers with the information agents rely on to make decisions and determine next steps. Even small data issues can “meaningfully affect an agent’s learning process,” the report warns.
The AI agent’s reasoning module, which governs its internal decision-making process, presents another cybersecurity risk, the report found. The report warned that “vulnerabilities and bad cyber hygiene in this layer can lead to incorrect decisions or mischaracterizations, particularly if adversaries manipulate the signals or exploit vulnerabilities in the models or supporting infrastructure.” Those vulnerabilities could be in any underlying model vulnerabilities, and in exploiting them, it would undermine public trust in the agent’s reasoning.
The third layer of an AI agent, its action module that translates the decision-making process into real-world actions, could also be compromised by bad actors, the report warns. That could be through injecting malicious prompts to manipulate the agent, while hackers could even compromise the agent itself by hijacking its commands and making it perform unauthorized functions.
“Because this is the stage where actions are executed, even seemingly minor manipulations can lead to unintended — and potentially harmful — consequences,” the report said. “This makes the action module particularly sensitive to attacks that exploit an agent’s ability to interface with external systems.”
The final layer of agentic AI the report identified that could face cyber risks is the memory module, which retains context across tasks, stores data and informs future decisions based on the agent’s previous interactions.
Hackers could potentially manipulate the agent’s memory to change its understanding of situations or introduce incorrect historical data. Unauthorized data retention is also a risk, as agents could retain data they weren’t supposed to, and create privacy and compliance issues. The memory layer can also reinforce any vulnerabilities or risks introduced earlier in the process.
“In this way, memory does not simply inform an AI agent’s future performance — it can also carry forward mistakes and risks from its past,” the report said.
While there are plenty of risks associated with agentic AI, the report also offered policy recommendations to ensure that the technology can be used responsibly while addressing cyber risks.
The report recommended that federal agencies should develop voluntary, sector-specific guidelines for agentic AI’s use in a way that would allow them to “define tailored human–agent interaction frameworks.”
“These frameworks should clarify when agents may be deployed, under what conditions they may act autonomously, whether they are permitted to learn independently, when human oversight is required, how responsibility is assigned in the event of failures, and what protocols exist for detecting, escalating, and correcting errors,” the report said.
Researchers also urged better information sharing on cyber threats to agentic AI, and called for governments to prioritize public-private partnerships to advance cybersecurity. The report also suggested using several emerging technologies, including automated defenses and hallucination detection, to help.
They also called on developers and users to maintain strong cyber hygiene, ensure an agent’s scope is well defined, and deploy them incrementally with the option to roll them back if necessary. Researchers said deploying agents in a “sandboxed environment” allows for testing and evaluation before they make real-world decisions. The time is right now to get started, the report said.
“Ultimately, the broader imperative is not simply to keep pace with emerging technologies but to guide and shape their trajectory, ensuring that they augment human talent and skills, reinforce America’s technological leadership and economic competitiveness, and remain grounded in our founding values,” it said.