
How AI is being used by police departments to help draft reports
In his nine years at the Fort Collins, Colorado, police department, Officer Scott Brittingham says he has taken a lot of pride in the process of writing reports after each call for service.
But when the department decided to test a tool to speed things up, he was intrigued. Now, a report that might have previously taken him 45 minutes to write takes just 10 minutes.
'I was a little bit skeptical, I'm not a big technology person,' Brittingham said in a March interview at the Fort Collins police station for CNN's Terms of Service podcast. But spending less time writing reports means Brittingham can 'take more calls for service' and 'be proactive in preventing crime,' he said.
Brittingham is referring to Draft One, artificial intelligence-powered software that creates the first draft of police reports, aiming to make the process faster and easier. And his experience may increasingly become the norm for police officers as departments across the country adopt the tool. It's gaining traction even as some legal experts and civil rights advocates raise concerns that AI-drafted police reports could contain biases or inaccuracies and present potential transparency issues.
Axon — the law enforcement tech company that also makes tasers and body cameras — said Draft One has been its fastest-growing product since it launched last year. And Axon isn't the only player in this industry; law enforcement tech company Truleo makes a similar AI police report tool called Field Notes.
Police reports sit at the heart of the criminal justice process — officers use them to detail an incident and explain why they took the actions they did, and may later use them to prepare if they have to testify in court. Reports can also inform prosecutors, defense attorneys, judges and the public about the officer's perspective on what took place. They can influence whether a prosecutor decides to take a case, or whether a judge decides to hold someone without bond, said Andrew Guthrie Ferguson, an American University law professor who studies the intersection of technology and policing.
'Police reports are really an accountability mechanism,' Ferguson said. 'It's a justification for state power, for police power.'
For that reason, proponents of Draft One tout the potential for AI to make reports more accurate and comprehensive, in addition to its time-saving benefits. But skeptics worry that any issues with the technology could have major ramifications for people's lives. At least one state has already passed a law regulating the use of AI-drafted police reports.
Draft One's rollout also comes amid broader concerns around AI in law enforcement, after experiments elsewhere with facial recognition technology have led to wrongful arrests.
'I do think it's a growing movement. Like lots of AI, people are looking at how do we update? How do we improve?' Ferguson said of AI police report technology. 'There's a hype level, too, that people are pushing this because there's money to be made on the technology.'
An efficiency tool for officers
After an officer records an interaction on their body camera, they can request that Draft One create a report. The tool uses the transcript from the body camera footage to create the draft, which begins to appear within seconds of the request. The officer is then prompted to review the draft and fill in additional details before submitting it as final.
Each draft report contains bracketed fill-in-the-blanks that an officer must either complete or delete before it can be submitted. The blank portions are designed to ensure officers read through the drafts to correct potential errors or add missing information.
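Conceptually, that gate is simple to picture. The short Python sketch below is purely illustrative: the bracket format and the submission check are assumptions, since Axon has not published how Draft One enforces this step.

```python
import re

# Hypothetical placeholder format; Axon's actual bracket syntax is not public.
PLACEHOLDER = re.compile(r"\[[^\]]+\]")

def unresolved_fields(draft: str) -> list[str]:
    """Return any bracketed fill-in-the-blank fields still present in the draft."""
    return PLACEHOLDER.findall(draft)

def can_submit(draft: str) -> bool:
    """A draft is only submittable once every placeholder has been filled in or deleted."""
    return not unresolved_fields(draft)

example = "Officers responded to [LOCATION] and spoke with [WITNESS NAME] about the incident."
print(unresolved_fields(example))  # ['[LOCATION]', '[WITNESS NAME]']
print(can_submit(example))         # False until the officer resolves both fields
```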
'It really does have to be the officer's own report at the end of the day, and they have to sign off as to what happened,' Axon President Josh Isner told CNN.
Draft One uses a modified version of OpenAI's ChatGPT, which Axon further tested and trained to reduce the likelihood of 'hallucinations,' factual errors that AI systems can randomly generate. Axon also says it works with a group of third-party academics, restorative justice advocates and community leaders that provide feedback on how to responsibly develop its technology and mitigate potential biases.
Draft One, AI software that creates police reports from body camera audio, is demonstrated on a screen at OKCPD headquarters on Friday, May 31, 2024, in Oklahoma City, Oklahoma.
Nick Oxford/AP
The idea for Draft One came from staffing shortages that Axon's police department clients were facing, Isner said. In a 2024 survey of more than 1,000 US police agencies, the International Association of Chiefs of Police found that agencies were operating at least 10% below their authorized staffing levels on average.
'The biggest problem in public safety right now is hiring. You cannot hire enough police officers,' Isner said. 'Anything a police department can adopt to make them more efficient is kind of the name of the game right now.'
Axon declined to say how many departments currently use Draft One, but police have also adopted it in Lafayette, Indiana; Tampa, Florida; and Campbell, California. And given that 'almost every single department' in the United States uses at least one Axon product, according to Isner, the growth potential for the product appears high.
In Fort Collins, Technology Sergeant Bob Younger decided to test Draft One last summer after seeing a demo of the tool.
'I was blown away at the quality of the report, the accuracy of the report and how fast it happened,' he said. 'I thought to myself, 'This is an opportunity that we cannot let go.''
The department initially made the technology available to around 70 officers; now all officers have access. Younger estimates the tool has reduced the time officers spend writing reports by nearly 70%, 'and that's time we can give back to our citizens,' he said.
'Radical transparency is best'
Isner said he's received largely positive feedback from prosecutors about Draft One.
But last September, the prosecutor's office in King County, Washington, said it would not accept police reports drafted with the help of AI after local law enforcement agencies expressed interest in using Draft One. In an email to police chiefs, the office said using the tool would 'likely result in many of your officers approving Axon drafted narratives with unintentional errors in them.'
An Axon spokesperson said that the company is 'committed to continuous collaboration with police agencies, prosecutors, defense attorneys, community advocates, and other stakeholders to gather input and guide the responsible evolution of Draft One.' They added that the AI model underlying Draft One is 'calibrated … to minimize speculation or embellishments.'
But King County prosecutors aren't the only ones concerned about errors or biases in AI-drafted police reports.
'When you see this brand new technology being inserted in some ways into the heart of the criminal justice system, which is already rife with injustice and bias and so forth, it's definitely something that we sit bolt upright and take a close look at,' said Jay Stanley, a policy analyst with the ACLU's Speech, Privacy, and Technology Project, who published a report last year recommending against using Draft One.
Even Ferguson, who believes the technology will likely become the norm in policing, said he worries about mistakes in transcripts of body camera footage impacting reports.
'The transcript that you get, which becomes a police report, might be filled with misunderstandings, because the algorithm didn't understand, like, a southern accent or a different kind of accent,' Ferguson said. He added that nonverbal cues — for example, if a person nodded rather than saying 'yes' out loud — might not be reflected in the transcript.
Axon tries to prevent errors or missing details with those automatic blank fields. However, in a demo at the Fort Collins Police Department, CNN observed that it is possible to delete all of the prompts and submit a report without making any changes. And once a report is submitted as final, the original, AI-generated draft isn't saved, so it's not possible to see what an officer did or didn't change.
Axon says that's meant to mimic the old-school process where, even if an officer was writing by hand, their drafts wouldn't be saved along with their final report. The company also offers an opt-in setting that lets police departments require a certain percentage of the report be edited before the draft is submitted.
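Axon has not described how that editing percentage is measured. As a rough illustration only, a requirement like this could be approximated by comparing the AI draft with the officer's final text, as in the hypothetical Python sketch below; the 10% threshold and the text-similarity measure are assumptions, not Axon's actual method.

```python
from difflib import SequenceMatcher

def edit_fraction(ai_draft: str, final_report: str) -> float:
    """Rough share of the text that changed between the AI draft and the final report."""
    return 1.0 - SequenceMatcher(None, ai_draft, final_report).ratio()

def meets_edit_requirement(ai_draft: str, final_report: str, minimum: float = 0.10) -> bool:
    """True if at least `minimum` (here an assumed 10%) of the draft was changed."""
    return edit_fraction(ai_draft, final_report) >= minimum

draft = "The driver stated he did not see the stop sign before entering the intersection."
final = "The driver, later identified by his license, stated he did not see the stop sign before entering the intersection at Oak Street."
print(round(edit_fraction(draft, final), 2))   # fraction of the draft that was edited
print(meets_edit_requirement(draft, final))    # True if the assumed threshold is met
```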
And then there's the question of transparency, and whether a defendant might know the police report in their case was drafted by AI.
Final reports created with Draft One include a customizable disclaimer by default, noting that they were written with the help of AI, but departments can turn that feature off. The Fort Collins Police Department does not include disclaimers, but officers are incentivized to make reports their own and ensure their accuracy, Younger said.
'What an officer is worried about is being critiqued or held responsible for an error or doing something and being inaccurate,' he said. 'Officers are super hyper-focused on the quality and quantity of their work.'
But Ferguson said he believes 'radical transparency is the best practice.' In Utah, state lawmakers passed a law earlier this year that requires police departments to include that disclaimer on final reports that were drafted by AI.
Ultimately, like so many other applications of AI, Draft One is a tool that relies on responsible, well-meaning users.
'My overall impression is that it's a tool like anything else,' Brittingham said. 'It's not the fix. It's not replacing us writing reports. It's just a tool to help us with writing reports.'
