AI Is Appearing On Police Body Cameras—Will It Make Policing Safer or Riskier?
In addition to helping generate reports, AI integration in body cameras could help increase public trust in policing by banking examples of police de-escalation tactics on calls, says Jillian Snider, a former police officer who is now a resident senior fellow at think tank R Street Institute. She co-authored a July 2025 report on AI and body cameras.
Some of the AI tools on the market can recognize de-escalation tactics in real time and reward officers, which Snider tells A&E Crime + Investigation “incentivizes officers to use de-escalation attempts more.”
“When you have an AI tool that is looking to find positivity in interactions, it’s actually something that more officers would be willing to wear without hesitancy or reluctance,” she says.
If there is a complaint against an officer, a deep repository of body camera footage analyzed by AI can also aid police departments in determining whether there is a pattern in an officer’s behavior, rather than relying on reports with limited evidence or perspectives.
Snider acknowledges that these AI tools aren't flawless, and they raise concerns about bias and profiling. For example, the software could tag an interaction involving cursing as hostile or violent when the officer or civilian may have been speaking in familiarity or jest. That's why it's important that a human reviews all reports the AI generates, she says.
AI can also fail to detect tonal shifts or significant body language, producing an unrepresentative report, says Logan Seacrest, a resident fellow in criminal justice and civil liberties at the R Street Institute and another author of the report.
“AI can also struggle to detect sarcasm, which relies on subtle tonal shifts that are not obvious when reviewing a transcript in isolation,” he tells A&E Crime + Investigation. “An officer making a sarcastic comment to a partner may be flagged as a policy violation, or a suspect’s sarcastic compliance might be misread as full cooperation.”
Perhaps more problematic than misinterpretations are AI hallucinations in police reports. In tests of generative AI to draft police reports, systems have been caught inventing legal justifications to fill gaps.
“If an AI drafts a report that includes a probable cause, threat or weapon that was not actually there, and an officer signs off on it without catching the error, that false information becomes part of the official legal record,” Seacrest says.
Seacrest says that law enforcement leaders and policymakers need to maintain a “human in the loop,” treating the AI as a tool rather than a replacement. As it stands, regulation of AI in the criminal justice system lags behind its deployment, a familiar historical pattern in which new technology reaches the field before society collectively considers its implications. While Seacrest believes this technology has enormous potential to improve public safety, he argues that safety cannot come at the cost of civil liberties.
“We cannot outsource final judgments on arrests or use-of-force to a computer,” he says. “In short: AI must assist officers, not replace them.”