
Seattle watchdog urges police to set AI policy
Seattle's police watchdog is urging the department to create a clear policy for the use of artificial intelligence following a complaint about a sergeant using tools like ChatGPT to help write emails and internal reports.
Why it matters: SPD has no department-specific policy governing AI use, per the Office of Police Accountability, creating a gray area for officers as generative tools become more common.
The complaint last year raised concerns about transparency, accountability, and the handling of sensitive information, even though the sergeant was ultimately cleared of wrongdoing.
Driving the news: In a letter this month to Police Chief Shon Barnes, OPA interim director Bonnie Glenn said SPD's policy should detail whether AI use is permitted, the conditions under which AI may be used, and the permissible uses of AI-generated content.
Seattle police spokesperson Patrick Michaud confirmed receipt of the letter but referred questions to OPA.
Catch up quick: OPA sent Axios the August 2024 complaint that claimed a sergeant used ChatGPT, Grammarly and other AI tools to help write internal reports.
The sergeant acknowledged using AI but denied entering sensitive information into the programs, per the case summary.
SPD's technology and innovation captain told OPA that officers were cautioned against using unsecured AI tools like Grammarly but the department lacked its own policy on AI use.
Zoom out: The King County Prosecuting Attorney's Office last year barred law enforcement from submitting reports drafted by AI, warning that tools like ChatGPT could introduce factual errors, compromise privacy, and weaken the legal reliability of police narratives, office spokesperson Casey McNerthney told Axios.
The prosecutor's office acknowledges that some police departments are experimenting with AI for scheduling, data sorting, or surveillance, but also says most are still wary of using it to write official documents.
The American Civil Liberties Union has also spoken out, saying AI in policing can reduce transparency and accountability while amplifying bias and eroding officer memory.
What they're saying: "Because police reports play such an important role in criminal investigations and prosecutions, introducing novel AI language-generating technology into the criminal justice system raises significant civil liberties and civil rights concerns," the ACLU wrote in December.
Between the lines: Seattle already has a citywide Generative AI policy, adopted in 2023, that requires staff to disclose when they use generative AI and bans the use of non-approved platforms.

Related Articles
Yahoo
Former Google CEO Eric Schmidt's AI Expo serves up visions of war, robotics, and LLMs for throngs of tech execs, defense officials, and fresh recruits
Drones buzz overhead, piercing the human hum in the crowded Walter E. Washington Convention Center. On the ground, tech executives, uniformed Army officers, policy wonks, and politicians compete for attention as swarms of people move throughout the vast space. There are pitches about the 'next generation of warfighters' and panels about winning the 'AI innovation race.' There are job seekers and dignitaries. And at the center of it all, there is Eric Schmidt.

The former Google CEO is the cofounder of the nonprofit that organizes the confab, known as the AI+ Expo for National Competitiveness. The Washington, D.C. event, now in its second year, is a fascinating, real-world manifestation of the Schmidt worldview, which sees artificial intelligence, business, geopolitics, and national defense as interconnected forces reshaping America's global strategy. If it was once the province of futurists and think tank researchers, this worldview is now increasingly the consensus view. From self-driving robotaxis circulating in multiple U.S. cities to Silicon Valley defense startups snagging large government contracts, evidence of the change, and of the stakes involved, is everywhere.

The AI+ Expo, hosted by Schmidt's Special Competitive Studies Project (SCSP), is ground zero for the stakeholders of this new world order: thousands of Washington insiders, military brass, tech executives, policymakers, students, and curious professionals drawn together under one stars-and-stripes-style banner to ensure America leads the AI age. It's a kind of AI county fair, albeit one where Lockheed Martin stages demos, defense tech companies hand out swag, Condoleezza Rice takes the stage, and protesters outside chant 'No tech for genocide' during Schmidt's keynote. The event is free to attend, and lines to enter stretch around the block. The only thing missing? GPU corn dogs on a stick, perhaps.

Soft lobbying is omnipresent at the event.
Tesla, for instance, offers self-driving tech demos just as the company prepares to launch its first robotaxi service in Austin and as Elon Musk pushes lawmakers on autonomous vehicle regulations. OpenAI is also working the room, touting its o3 reasoning model, newly deployed on a secure government supercomputer at Los Alamos. 'The transfer of model weights occurred via highly secured and heavily encrypted hard drives that were hand carried by OpenAI personnel to the Lab,' a company spokesperson told Fortune. 'For us, today's milestone, and this partnership more broadly, represents more than a technical achievement. It signals our shared commitment to American scientific leadership.'

It's not just about the federal government, either; even state leaders are angling for attention. Mississippi Governor Tate Reeves, who is publicizing his state's AI data center investments and gave a keynote at the Expo, told Fortune, 'The leaders in this space are here, and I want to be talking to the leaders that are going to make decisions about where they're making capital investments in the future.'

The Expo is hosted by the Special Competitive Studies Project (SCSP), a nonprofit Schmidt cofounded in 2021 and for which he remains the major funder and chair. SCSP operates as a subsidiary of The Eric & Wendy Schmidt Fund for Strategic Innovation, the Schmidt family's private foundation, and is an outgrowth of the now-defunct National Security Commission on Artificial Intelligence, the temporary federal advisory body Schmidt also led from 2018 to 2021.

The Expo is about building a community that brings the private sector, academia, and government into one place, said Ylli Bajraktari, president and CEO of SCSP and Schmidt's co-founder, who previously served as chief of staff to National Security Advisor H.R. McMaster and joined the Department of Defense in 2010. 'Washington is not a tech city, but yet, this is a city where [tech and AI] policies are being developed,' he said.
Still, all of this seems really to be about Schmidt's vision for the future of AI, which he shared in depth in a highly publicized TED talk last month. In it, he argued that humans should welcome the advancement of AI because our society will not have enough humans to be productive in the future, while delivering a dire warning about how the race for AI dominance could go wrong as the technology becomes a geopolitically destabilizing force.

He repeated his hypothetical doomsday scenario in his Expo keynote. Schmidt posits an AI competitor that is advancing quickly and is only about six months behind the U.S. in developing cutting-edge AI. In other international competitions, this could mean a relatively stable balance of power. But the fear is that once a certain level of AI capability is reached, a steep acceleration curve means the other side will never catch up. According to Schmidt, the other side would then have to consider bombing their opponent's data centers to stop them from becoming permanently dominant. It's a scenario, along with a proposed doctrine called Mutual AI Malfunction that would slow down each side and control progress, that Schmidt introduced alongside co-authors Henry Kissinger and Daniel Huttenlocher in their 2021 book The Age of AI and Our Human Future.

Schmidt also shared a thought exercise about what he would do if he could build the U.S. military completely from scratch: no Pentagon, no bureaucracy, no old, obsolete technology. It would basically resemble a tech company: agile, software-driven, and centered around networked, AI-powered systems. 'I would have two layers of drones,' he said. 'I'd have a set of ISR drones [unmanned aerial vehicles used for intelligence, surveillance, and reconnaissance missions]. Those are ones that are high and they have deep looking cameras, and they watch everything. They're connected by an AI network, which is presumably designed to be un-jammable. And then I would have essentially bomber drones of one kind or another. The ISR drones would observe, and they would immediately dispatch the bomber.'

With that kind of a defensive system, he added, 'it would be essentially impossible to invade a country by land.' He said that he wanted U.S. assets to be protected by defensive 'drone swarms,' adding that 'there's an entire set of companies in America that want to build this…many of them are here at our show – I want a small amount of the government's money to go into that industry.'

Schmidt's Expo is open to all, and there were those in attendance who would beg to differ with his gung-ho takes on the future of battle. The International Committee of the Red Cross, for example, showcased a booth with thought-provoking, graffiti-style questions like 'Does AI make wars better or worse?' Even student attendees well-versed in wargaming, like Luke Miller, a rising sophomore studying international relations at the College of William and Mary and a member of a wargames club, said that today's era of AI and national security is 'supposed to be a sobering moment.' For a country like Ukraine to deploy drones to attack Russian air bases, as it did with great effect just days before the conference, 'is something we should definitely be concerned about going forward,' he said.

Still, it was Schmidt's vision of the future of warfare and national security that was front and center at an event 'designed to strengthen U.S. and allied competitiveness in critical technologies.' 'Have you all had a chance to go hang out at the drone cage?' he asked the audience, pointing out the young age of many of the competitors. '[They are] beating the much bigger adults,' he said. 'This is the future. They're inventing it with or without us, that's where we should go.'


The Verge
US removes ‘safety' from AI Safety Institute
The US Department of Commerce has renamed its AI Safety Institute the Center for AI Standards and Innovation (CAISI), shifting its focus from overall safety to combating national security risks and preventing 'burdensome and unnecessary regulation' abroad. Secretary of Commerce Howard Lutnick announced the change on June 3rd, calling the agency's overhaul a way to 'evaluate and enhance US innovation' and 'ensure US dominance of international AI standards.'

The AI Safety Institute was announced in 2023 under former President Joe Biden as part of a global effort to create best practices for governments mitigating AI system risk. It signed memorandums of understanding with major US AI companies, including OpenAI and Anthropic, to get access to new models and suggest improvements before release. Near the end of Biden's term in early 2025, it released draft guidelines for managing AI risks that included using systems to create biological weapons or other clear threats to national security, but also more common categories of harmful content like child sexual abuse material (CSAM).

Lutnick's statement says that the new institute will 'focus on demonstrable risks, such as cybersecurity, biosecurity, and chemical weapons' in its evaluations. It will also investigate 'malign foreign influence arising from use of adversaries' AI systems,' a category that likely includes DeepSeek, the Chinese large language model that shook up the American AI industry earlier this year.

The move is part of a larger Trump administration effort to accelerate the expansion of American AI companies. On his first day in office, Trump rescinded a Biden executive order that mandated new safety standards for large AI systems and a report evaluating the potential risks for US consumers and the labor market. His own executive orders have encouraged increasing generative AI adoption in fields like education and promoted coal as a source of power for energy-hungry AI data centers.
And the current Republican budget bill includes a 10-year moratorium on state-level AI regulations — a provision even some in Trump's party have come to oppose.


Axios
Axios Event: U.S. needs more critical minerals to compete with China, experts say
WASHINGTON – Competition with China and tariff threats are fueling U.S. efforts to secure more critical minerals used to make semiconductor chips and other defense infrastructure, speakers said at a May 6 Axios event.

Why it matters: Critical minerals have become a national security asset and a subject of global politics, with the Trump administration recently signing a deal with Ukraine locking in preferential access to strategic elements from that country.

Axios' Colin Demarest spoke with Reps. Jack Bergman (R-Mich.) and Jill Tokuda (D-Hawaii), as well as Gracelin Baskaran, critical minerals security program director at the Center for Strategic and International Studies (CSIS), at the May 6 Axios event, sponsored by South32 Hermosa.

Zoom out: China has been building up its technological capabilities for decades under its Belt and Road Initiative, investing in minerals and other critical industries, speakers said, and the U.S. is behind.

What they're saying: "Some minerals are here at home, we have good copper reserves … but some minerals we just don't have enough of," Baskaran said.

The U.S. Energy Act of 2020 defined "critical minerals" as "any non-fuel mineral, element, substance or material that … has a high risk of supply chain disruption [and] serves an essential function in one or more energy technologies." The list of about 50 such elements includes lithium, nickel, cobalt and graphite.

Catch up quick: The U.S. relies on China for many critical minerals, but export controls and tariffs have limited access amid growing trade tensions.

China's recent restrictions on rare earth elements and magnets are a "dangerous game," Bergman said. However, removing Chinese suppliers from the critical minerals equation, even with the security risks, wouldn't be realistic, he said. "The attainable goal would be to marginalize their dominance in it, because there's always going to be more than one player."
The latest: It's not clear what impact the recent Ukraine minerals deal will have on U.S. supply, speakers said.

"We don't know what the outcome is going to be in Ukraine yet, but we do know if we don't establish the relationship at a different level that hasn't been tried … that Ukraine is still vulnerable," Bergman said. "So it's a good thing to have those deals inked."

"For me, it doesn't change the needle any," Tokuda said of the deal. Ukraine doesn't process these materials as well as China, where most of the processing also takes place, she said.

State of play: "Let's be very honest, right? This is about making sure that we are in a state of readiness, that we're able to deter," Tokuda said. "Right now we have so much vulnerability because of our dependence on these critical and rare earth minerals with China, that we really, we need to start yesterday in terms of really shoring up that supply chain."

Content from the sponsored segment: In a View From the Top conversation, Pat Risner, president of South32 Hermosa, said gaps in the zinc supply haven't gotten enough attention in the critical minerals conversation. "Zinc is used to galvanize steel, it's very important for infrastructure, all forms of energy … even battery storage, and other defense applications," Risner said.