April 14, 2025
Seattle watchdog urges police to set AI policy
Seattle's police watchdog is urging the department to create a clear policy for the use of artificial intelligence following a complaint about a sergeant using tools like ChatGPT to help write emails and internal reports.
Why it matters: SPD has no department-specific policy governing AI use, per the Office of Police Accountability, creating a gray area for officers as generative tools become more common.
The complaint last year raised concerns about transparency, accountability, and the handling of sensitive information, even though the sergeant was ultimately cleared of wrongdoing.
Driving the news: In a letter this month to Police Chief Shon Barnes, OPA interim director Bonnie Glenn said SPD's policy should spell out whether AI use is permitted, under what conditions it may be used, and what uses of AI-generated content are allowed.
Seattle police spokesperson Patrick Michaud confirmed receipt of the letter but referred questions to OPA.
Catch up quick: OPA sent Axios the August 2024 complaint that claimed a sergeant used ChatGPT, Grammarly and other AI tools to help write internal reports.
The sergeant acknowledged using AI but denied entering sensitive information into the programs, per the case summary.
SPD's technology and innovation captain told OPA that officers were cautioned against using unsecured AI tools like Grammarly but the department lacked its own policy on AI use.
Zoom out: The King County Prosecuting Attorney's Office last year barred law enforcement from submitting reports drafted by AI, warning that tools like ChatGPT could introduce factual errors, compromise privacy, and weaken the legal reliability of police narratives, office spokesperson Casey McNerthney told Axios.
The prosecutor's office acknowledges that some police departments are experimenting with AI for scheduling, data sorting, or surveillance, but also says most are still wary of using it to write official documents.
The American Civil Liberties Union has also spoken out, saying AI in policing can reduce transparency and accountability while amplifying bias and weakening officers' independent recollection of events.
What they're saying: "Because police reports play such an important role in criminal investigations and prosecutions, introducing novel AI language-generating technology into the criminal justice system raises significant civil liberties and civil rights concerns," the ACLU wrote in December.
Between the lines: Seattle already has a citywide Generative AI policy, adopted in 2023, that requires staff to disclose when they use generative AI and bans the use of non-approved platforms.