
Indy 500 & Robotic Cars Race On Challenging Laguna Seca Raceway
IAC has been hosting robotic car races since 2021 at tracks like the Indianapolis Motor Speedway (home of the Indy 500) and the Las Vegas Motor Speedway. The Las Vegas event, held in concert with the Consumer Electronics Show (CES) in January 2025, featured a four-car race, something not attempted before. At the recent event in Monterey (the first time an IAC event was held at the same track and over the same weekend as an NTT INDYCAR Series race), cars competed one at a time. Given the difficulty of the track and the teams' first-time participation, multi-car racing was considered too risky. The course is particularly challenging for robotic cars: its sharp turns and elevation changes complicate perception and localization, and they also make stable vehicle control at high speeds difficult.
IAC event competitors include university teams (typically with Ph.D.-level students and faculty advisors) from the USA, Germany, Italy and Korea. Participating teams for this event included PoliMOVE-MSU, Purdue, KAIST, CAST-Caltech and Tiger Racing.
IAC races are designed to engage top robotics, artificial intelligence (AI) and vehicle dynamics/control talent across universities. The goal is to nurture practical experience in physical AI and to apply the resulting intellectual property to autonomy in high-speed commercial applications like autonomous cars and drones. The hardware platforms are identical (car, engine, tires, sensors, compute); teams compete on the quality of their AI, robotic control at high speeds, and low-latency perception and decision making.
PoliMOVE-MSU won the event with a winning lap time of ~90 seconds over the 2.25-mile course (an average speed of ~90 mph). The peak speed reached was 148 mph. This was the first autonomous racing competition ever held on a road-course circuit in the USA (Figure 2). The Purdue team was a strong competitor and came in second, with KAIST in third place. A couple of cars (CAST-Caltech and Tiger Racing) were unable to negotiate the difficult corkscrew turn and had to be rescued by tow trucks (human-driven ones; we have yet to get to autonomous tow trucks!).
According to Paul Mitchell, CEO of Indy Autonomous Challenge and its parent company Aidoptation: 'Our university research teams stepped up to this challenge, advancing the field of AI and autonomy by pushing vehicle dynamics to the absolute edge and laying down lap times that only the best human drivers can achieve.'
Professor Sergio Savaresi (Politecnico di Milano) and Rodrigo Senofieni (a former Ph.D. student of Professor Savaresi, now at Aidoptation) are the technical leads for the PoliMOVE-MSU team. Per Professor Savaresi, several key enablers drove their winning performance.
Professor Savaresi commented: 'Our team spent a lot of time in simulation to perfect the AI driver's decision-making capabilities. I am incredibly proud of this team.'
Purdue entered the IAC in 2021, but reorganized 18 months ago to grow its capabilities and focus. IAC provided guidance and shared best practices, and Purdue's Dean of Engineering, Arvind Raman, championed the initiative internally. Dan Williams, a former automotive executive with extensive experience in vehicle autonomy, joined as Professor of Practice two years ago and devotes ~50% of his time to mentoring the team of graduate students, who come from disciplines as diverse as vehicle dynamics and computer science. As a result, Purdue finished just a second behind the seasoned winner PoliMOVE-MSU, a remarkable achievement on this complex racecourse. Per Professor Williams, several factors contributed to this result.
It turns out that the complexity of the Laguna Seca roadway was a perfect fit for what Purdue had been training on under Professor Williams's guidance for the past 12 months.
The Grand Prix event was held three days after the IAC robotic car race (held July 24), on the same track (Figure 3). It is an INDYCAR racing event consisting of 95 laps (~2.25 miles each) with 27 human-driven race cars, and is part of the NTT INDYCAR Series championship. Experiencing the throb, sounds, smells and sight of engine power equivalent to ~20,000 horses at the start of the race is an out-of-this-world experience!
The winner was Alex Palou, a strong favorite, driving the DHL Chip Ganassi Racing Honda race car (Figure 4).
The previous two days included trial (qualifying) competitions. Mr. Palou dominated there as well and started the Grand Prix from the leading position. This was his third win at this track in the past four years. Including three pit stops, he took ~2 hours and 5 minutes to cover the 95 laps (~214 miles) at an average speed of 102 mph, and reached a maximum average lap speed of ~114 mph on the 10th lap. For reference, Indy race cars can reach maximum speeds of ~240 mph on flat oval tracks like the Indianapolis Motor Speedway (IMS) in Indianapolis; given the complexity of the Laguna Seca track, speeds here are considerably lower (roughly half).
Second and third place went to Arrow McLaren's Christian Lundgaard and Andretti Global's Colton Herta (Figure 5). Lundgaard edged out Herta in an exciting finish in the track's final corner. There were also a few collisions and crashes, and tense moments as officials scrambled to throw caution flags and clear the accidents.
A day earlier, following the Grand Prix trials and just seconds after Alex Palou exited the track, the PoliMOVE-MSU AI driver ran high-speed autonomous laps for 10 minutes, exposing thousands of racing fans to the promise of robotic car racing.
As mentioned earlier, the PoliMOVE-MSU team won the IAC robotic car event with an average lap speed of ~90 mph, about 80% of Mr. Palou's fastest average lap speed. The IAC race was substantially shorter (8 laps), was raced one car at a time, and had a few instances of hardware failures and crashes. Since only a single car is on track at any given time, there is no risk of human fatality, multi-car collisions or extensive property damage. IAC race cars have achieved maximum speeds of ~150 mph at the IMS, about 60% of that achieved by the Indy 500 cars. Part of the difference can be attributed to the more powerful engine in the latter (a 700 hp, 6-cylinder engine in the Indy 500 car vs. a 500 hp, 4-cylinder engine in the IAC car).
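For readers who want to verify these comparisons, the arithmetic is simple enough to reproduce. Here is a quick sanity check in Python, using only the approximate figures quoted in this article (not official timing data):

```python
# Back-of-the-envelope check of the speed figures quoted in this article.
# All inputs are the article's approximate numbers, not official timing data.

LAP_MILES = 2.25                       # Laguna Seca lap length, miles

iac_lap_s = 90                         # IAC winning lap, ~90 seconds
iac_avg_mph = LAP_MILES / (iac_lap_s / 3600.0)
print(f"IAC average lap speed: {iac_avg_mph:.0f} mph")                   # ~90 mph

palou_best_mph = 114                   # Palou's fastest average lap
print(f"IAC vs. Palou's best lap: {iac_avg_mph / palou_best_mph:.0%}")   # ~79%, i.e. ~80%

race_miles = 95 * LAP_MILES            # ~214 miles
race_hours = 2 + 5 / 60.0              # ~2 h 5 min, including pit stops
print(f"Grand Prix average: {race_miles / race_hours:.0f} mph")          # ~103 mph, matching ~102 within rounding

print(f"IAC vs. Indy 500 top speed at IMS: {150 / 240:.0%}")             # ~62%, i.e. ~60%
```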
Per Mel Harder, president and general manager of WeatherTech Raceway: 'Hosting the IAC at WeatherTech Raceway Laguna Seca as part of the Java House Grand Prix of Monterey was a thrill. Not only did we introduce our fans to the world's fastest autonomous race cars, but IAC also attracted hundreds of companies, researchers, and government leaders in AI and autonomy from Silicon Valley and around the world to our venue, and promoted engagement in motorsports.'
Advances in sensing, perception and computing have enabled F-22 fighter jet pilots to achieve substantially higher levels of speed (> Mach 2) and endurance. Similarly, progress in IAC technology (sensors, perception, compute, vehicle dynamics and active safety) can enable human-driven race cars to achieve higher performance levels, balancing motorsport excitement, audience engagement and human safety. Learning from human race car drivers about multi-agent path planning (local planning) is absolutely critical for physical AI applications like AVs at very high speeds on highways. The ability to use the visual, acoustic and localization cues that human drivers employ to operate in multi-agent environments is something that physical AI needs to emulate, on public roadways and racetracks alike.
Human-driven race car performance has plateaued over the last 50 years because of passive-safety constraints and the saturation of human driver capability at the top levels of performance. Of course, records will continue to be broken due to factors like weather and performance peaks, as well as changes in car design and race rules, but these improvements are likely to be limited and sporadic.
For AI-driven race cars, it is a different story. The technology is currently in its infancy, with significant opportunities for improvement as sensor hardware, software, compute stacks and digital-twin simulation capabilities accelerate in performance. IAC car performance has improved by orders of magnitude in the past three years of its existence, and the Purdue team demonstrates how experience, physics and physical AI can improve performance dramatically in the space of 12 months. The question is whether physical AI will improve to the point where robotic and human-driven race cars can compete together in multi-car track or road racing.
To make this a reality, solving the multi-agent path planning problem is critical. Human drivers are exceptional at it; robotic cars, not so much. The success of road traffic applications in which Waymo autonomous cars and human-driven cars operate together in uncontrolled environments depends on humans and computers understanding each other's cues, tactics and behavior (Figure 6).
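To make the local-planning problem concrete, here is a deliberately minimal sketch (hypothetical code, not any IAC team's actual stack): the ego car predicts the rival with a constant-velocity model, samples a few speed-and-lane candidates, and keeps the fastest one that maintains a safe gap over a short horizon. Real racing planners run far richer vehicle and opponent models at millisecond latency, but the loop has this basic shape:

```python
# Minimal multi-agent local-planning sketch. All names, parameters and the
# simplified track abstraction are illustrative assumptions, not a real stack.
from dataclasses import dataclass

@dataclass
class CarState:
    s: float      # distance along the track centerline (m)
    lane: float   # lateral offset from the centerline (m)
    speed: float  # longitudinal speed (m/s)

def predict(car: CarState, t: float) -> CarState:
    """Constant-velocity prediction: the simplest possible opponent model."""
    return CarState(car.s + car.speed * t, car.lane, car.speed)

def plan_step(ego: CarState, rival: CarState,
              horizon: float = 2.0, dt: float = 0.1,
              safe_gap: float = 5.0) -> tuple[float, float]:
    """Return the fastest (speed, lane) candidate that keeps a safe gap
    to the predicted rival over the planning horizon."""
    candidates = [(ego.speed + dv, lane)
                  for dv in (2.0, 0.0, -2.0)      # accelerate / hold / brake
                  for lane in (3.0, 0.0, -3.0)]   # right / center / left
    for speed, lane in sorted(candidates, reverse=True):  # try fastest first
        t, safe = 0.0, True
        while t <= horizon:
            r = predict(rival, t)
            # Unsafe if both cars share a lane band with too small a gap
            if abs(lane - r.lane) < 1.5 and abs(ego.s + speed * t - r.s) < safe_gap:
                safe = False
                break
            t += dt
        if safe:
            return speed, lane
    return ego.speed - 2.0, ego.lane  # nothing is safe at speed: brake in place

# Ego closing fast on a car 15 m ahead in the same lane: the planner
# chooses a lane change at full throttle instead of braking.
ego = CarState(s=0.0, lane=0.0, speed=60.0)
rival = CarState(s=15.0, lane=0.0, speed=50.0)
print(plan_step(ego, rival))  # -> (62.0, 3.0)
```

Even in this toy form, the hard parts are visible: everything hinges on how well the rival's motion is predicted, and the whole loop must complete within a few milliseconds at racing speeds.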
Mixed human and robotic racing will need similar understanding, but at extremely high speeds and very low decision-making latency. Human race car drivers can be instrumental in teaching AI drivers to solve this problem, perhaps not through massive data gathering and training but through other physical AI approaches like neuromorphic learning. Paul Mitchell, CEO of IAC, 'hopes that such performance parity and understanding will be achieved in the next 2 decades as IAC and motorsports nurture and learn from each other.'