
Police and Courts Are Turning to AI. Is the System Ready?
Today, AI technology has reached nearly every corner of the criminal justice system. It is being used in facial recognition systems to identify suspects; in "predictive policing" strategies to formulate patrol routes; in courtrooms to assist with case management; and by public defenders to cull through evidence. But while advocates point to gains in efficiency and fairness, critics raise serious questions about privacy and accountability.
Last month, the Council on Criminal Justice launched a nonpartisan task force to study how AI can be used safely and ethically in the criminal justice system. The group's work will be supported by researchers at RAND, and its findings will eventually inform recommendations to policymakers and law enforcement.
"There's no question that AI can yield unjust results," says Nathan Hecht, the task force's chair and a former Texas Supreme Court Chief Justice. "This task force wants to bring together tech people, criminal justice people, community people, experts in various different areas, and really sit down to see how we can use it to make the system better and not cause the harm that it's capable of."
Risks of AI in law enforcement
Many courts and police departments are already using AI, Hecht says. "It's very piecemeal: Curious people going, 'Oh, wow, there's this AI out here, we could use it over in the criminal court.'"
But because there are few standards for how to deploy AI, civil rights watchdogs have grown concerned that law enforcement agencies are using it in dangerous ways. Thousands of agencies have come to rely upon facial recognition technology sold by companies like Clearview, which hosts a database of billions of images scraped off the internet. In many databases, Black people are overrepresented, in part because they live in communities that are overpoliced. AI technology is also worse at discerning differences in Black people's faces, which can lead to higher misidentification rates.
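At bottom, face matching of this kind reduces to comparing numerical "embedding" vectors against a gallery database: a match is simply the nearest neighbor whose similarity clears a tuned threshold, and nothing in the math verifies identity. The toy sketch below (hypothetical data; real systems use learned embeddings of 128 or more dimensions) shows why a "match" is a statistical guess, not proof:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_match(probe, gallery, threshold=0.9):
    """Return the gallery ID most similar to the probe embedding,
    or None if no similarity clears the threshold."""
    scored = [(cosine_similarity(probe, emb), pid) for pid, emb in gallery.items()]
    score, pid = max(scored)
    return (pid, score) if score >= threshold else (None, score)

# Toy 3-dimensional "embeddings" -- purely illustrative.
gallery = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
}
probe = [0.88, 0.15, 0.32]  # numerically close to person_a; closeness is not identity
print(best_match(probe, gallery))
```

The threshold is the whole game: set it too low and lookalikes become "hits," which is exactly why treating a hit as anything more than an investigative lead is risky.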
Last year, the Innocence Project, a legal nonprofit, found that there have been at least seven wrongful arrests stemming from facial recognition technology, six of which involved Black people. Walter Katz, the organization's director of policy, says that police sometimes make arrests based solely on facial recognition matches rather than treating them as the starting point for a larger investigation. "There's an over-reliance on AI outputs," he says.
Katz says that when he went to a policing conference last fall, "it was AI everywhere." Vendors were aggressively hawking technology tools that claimed to solve real problems in police departments. "But in making that pitch, there was little attention to any tradeoffs or risks," he says. For instance, critics worry that many of these AI tools will increase surveillance of public spaces, including the monitoring of peaceful protesters—or that so-called "predictive policing" will intensify law enforcement's crackdowns on over-policed areas.
Where AI could help
However, Katz concedes that AI does have a place in the criminal justice system. "It'll be very hard to wish AI away—and there are places where AI can be helpful," he says. For that reason, he joined the Council on Criminal Justice's AI task force. "First and foremost is getting our arms wrapped around how fast the adoption is. And if everyone comes from the understanding that having no policy whatsoever is probably the wrong place to be, then we build from there."
Hecht, the task force's chair, sees several areas where AI could be helpful in the courtroom, including improving the intake process for arrested people or helping identify who qualifies for diversion programs, which allow offenders to avoid convictions. He also hopes the task force will recommend which uses of AI should be explicitly off-limits in criminal justice, along with steps to preserve the public's privacy. "We want to try to gather the expertise necessary to reassure the users of the product and the public that this is going to make your experience with the criminal justice system better—and after that, it's going to leave you alone," he says.
Meanwhile, plenty of other independent efforts are trying to use AI to improve the justice process. One startup, JusticeText, hopes to use AI to narrow the resource gap between prosecutors and public defenders, the latter of whom are typically severely understaffed and underresourced. JusticeText built a tool for public defenders that sorts through hours of 911 calls, police body-camera footage, and recorded interrogations to determine whether, for example, police made inconsistent statements or asked leading questions.
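The article doesn't describe JusticeText's methods; purely as an illustration, a crude version of one such check — flagging leading questions in a transcribed interrogation — can be sketched with pattern heuristics over the text (a stand-in for the far more sophisticated models a real tool would pair with speech-to-text):

```python
import re

# Heuristic phrasings that often signal a leading question.
# Illustrative stand-ins, not JusticeText's actual method.
LEADING_PATTERNS = [
    r"^isn't it true",
    r"^you (did|were|saw|went)\b.*\?$",
    r"^wouldn't you (say|agree)",
    r", (right|correct|didn't you)\?$",
]

def flag_leading_questions(transcript_lines):
    """Return (line_number, text) pairs matching a leading-question pattern."""
    flagged = []
    for i, line in enumerate(transcript_lines, start=1):
        text = line.strip().lower()
        if any(re.search(p, text) for p in LEADING_PATTERNS):
            flagged.append((i, line.strip()))
    return flagged

transcript = [
    "Where were you on the night of June 4th?",
    "Isn't it true that you were at the warehouse?",
    "You saw the defendant leave, right?",
]
for num, text in flag_leading_questions(transcript):
    print(f"line {num}: {text}")
```

Even this toy version shows the value proposition: a public defender gets a short list of line numbers to review instead of hours of footage.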
"We really wanted to see what it looks like to be a public defender-first, and try to level that playing field that technology has in many ways exacerbated in past years," says founder and CEO Devshi Mehrotra. JusticeText is working with around 75 public defender agencies across the country.
Recidiviz, a criminal justice reform nonprofit, has also been testing several ways of integrating AI into its workflows, including giving parole officers AI-generated summaries of clients. "You might have 80 pages of case notes going back seven years on this person that you're not going to read if you have a caseload of 150 people, and you have to see each one of them every month," says Andrew Warren, Recidiviz's co-founder. "AI could give very succinct highlights of what this person has already achieved and what they could use support on."
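Recidiviz has not published its pipeline; one hedged sketch of the underlying idea is extractive summarization — scoring case-note lines against achievement and support-need vocabularies and surfacing only the top few for the officer (the term lists and notes below are invented for illustration):

```python
import re

# Invented vocabularies for illustration -- not Recidiviz's actual system.
ACHIEVEMENT_TERMS = {"completed", "earned", "graduated", "employed", "sober"}
SUPPORT_TERMS = {"missed", "struggling", "unstable", "relapse", "needs"}

def highlight_notes(notes, top_k=3):
    """Score each note line by how many achievement/support terms it contains,
    then return the top_k highest-scoring lines in their original order."""
    def score(line):
        words = set(re.findall(r"[a-z]+", line.lower()))
        return len(words & ACHIEVEMENT_TERMS) + len(words & SUPPORT_TERMS)
    ranked = sorted(range(len(notes)), key=lambda i: score(notes[i]), reverse=True)
    keep = sorted(ranked[:top_k])  # restore chronological order
    return [notes[i] for i in keep]

notes = [
    "Client completed anger-management program in March.",
    "Routine check-in, nothing to report.",
    "Housing remains unstable; client needs transportation assistance.",
    "Client employed full-time since January.",
]
for line in highlight_notes(notes):
    print("-", line)
```

A production system would likely lean on a language model rather than keyword counts, but the goal is the same: compress 80 pages into the handful of lines a time-pressed officer will actually read.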
The challenge for policymakers and the Council on Criminal Justice's task force, then, is to develop standards and oversight mechanisms so that the gains from AI's efficiency outweigh its capacity to amplify existing biases. Hecht also hopes to guard against a future in which a black-box AI makes life-changing decisions on its own.
"Should we ensure our traditional ideas of human justice are protected? Of course. Should we make sure that able judges and handlers of the criminal justice system are totally in control? Of course," he says. "But saying we're going to keep AI out of the justice system is hopeless. Law firms are using it. The civil justice system is using it. It's here to stay."
