Police test AI report-writing tools that speed investigations and raise new risks

When a patrol officer in Oklahoma City finishes a domestic disturbance call, the paperwork used to take the better part of an hour. Now, in a growing number of departments, body camera audio from that call is fed into a generative AI system that spits out a structured incident report in minutes. The officer reviews it, makes corrections, and files it before the next call comes in. Detectives get written accounts faster. Supervisors clear backlogs. On paper, everyone wins.
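In software terms, the workflow those departments describe reduces to a short pipeline: transcribe the body camera audio, prompt a generative model to draft a structured narrative, and hold the draft until the responding officer reviews it. The Python sketch below is a minimal illustration of that flow under those assumptions, not any vendor's actual product; every function name, and the incident number, is a hypothetical placeholder.

```python
from dataclasses import dataclass

@dataclass
class DraftReport:
    incident_id: str
    narrative: str
    reviewed: bool = False  # flipped only after the officer signs off

def transcribe(audio_path: str) -> str:
    # Stand-in for a speech-to-text model run over body camera audio.
    return "Officer arrived 21:40; parties separated; no injuries reported."

def draft_narrative(transcript: str) -> str:
    # Stand-in for the generative step: a prompted language model turns
    # the transcript into a structured incident narrative.
    return "INCIDENT NARRATIVE (AI draft, unverified):\n" + transcript

def file_report(report: DraftReport) -> None:
    # Guardrail: refuse to file anything the officer has not reviewed.
    if not report.reviewed:
        raise ValueError("AI draft must be reviewed and corrected before filing")
    print("Filed report", report.incident_id)

if __name__ == "__main__":
    # Incident ID and audio path are placeholders for illustration only.
    draft = DraftReport("OKC-2025-0417", draft_narrative(transcribe("call.wav")))
    draft.reviewed = True  # officer checks the draft against the footage
    file_report(draft)
```

The one design point worth noting is the review flag: filing is impossible until a human has signed off, which is precisely the safeguard the Chicago case shows can be skipped in practice.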

But a federal courtroom in Chicago has already shown what can go wrong. U.S. Magistrate Judge Jeffrey Cummings found in early 2025 that an immigration enforcement agent used ChatGPT to draft a use-of-force report that directly contradicted what body camera footage captured. The AI-generated text was fluent and confident. It was also wrong. That case has become a reference point for a question now facing police departments, courts, and city governments nationwide: What happens when the tool that saves officers time also distorts the evidentiary record?

How the technology works in practice

Oklahoma City’s police department is among the municipal agencies that have publicly identified themselves as early adopters. Exactly how many departments nationwide have deployed AI-assisted report writing is unclear; no federal agency or industry group has published a comprehensive count, and estimates from vendors and press coverage vary widely. The pattern driving adoption, however, is consistent across the departments that have gone public: staffing shortages, rising call volumes, and the persistent reality that officers often spend more time writing about incidents than responding to them. Vendors marketing these products describe them as “public safety-grade” tools, a label that implies reliability standards but has no uniform definition across the industry.

At the federal level, the Department of Justice now maintains a public inventory of its own AI systems, linking each entry to its associated Privacy Impact Assessment. The inventory reveals how broadly artificial intelligence has already been woven into justice-related functions, from case triage to document analysis. A separate 2024 report from the department’s Office of Legal Policy lays out risk categories and governance frameworks for criminal justice AI, emphasizing transparency, human oversight, and mechanisms for redress when automated systems fail.

The standards that exist and the gaps they leave

The National Institute of Standards and Technology published its Generative Artificial Intelligence Profile, known as NIST AI 600-1, in July 2024. The framework addresses validity and reliability of AI outputs, privacy protections, explainability, and accountability when generated content is used in high-stakes settings. For law enforcement specifically, NIST treats each application of generative AI, whether for report drafting, transcription, redaction, or analysis, as carrying a distinct risk profile that agencies must evaluate before deployment…
