How innocent people could be jailed because of sinister AI bot ChatGPT… and leave dangerous criminals walking free

The Irish Sun, 10-05-2025

JURORS could secretly be using ChatGPT to decide crunch verdicts, top US lawyers fear.
Dean Strang and Jerry Buting, who starred in Netflix documentary Making a Murderer, warned it could see innocent people convicted - or let criminals off the hook.
Top lawyers have warned that jurors could be using ChatGPT to decide verdicts
Credit: Reuters
Jerry Buting and Dean Strang, who were Steven Avery's defence lawyers
The lawyers, who defended Steven Avery on the hit show, insisted it is vital jurors are stopped from accessing the AI tool during trials.
Strang said jurors could be tempted to use ChatGPT as a 'sounding board' if they are uncertain about a defendant.
But he stressed it could have 'disastrous' consequences as jurors could get skewed answers that force a mistrial or bring them to the wrong decision.
Strang, who has worked in criminal defence for more than 30 years, told The Sun: 'Jurors should be banned from using ChatGPT.
'I can't prove this and I'm not a computer engineer, but my sense is that AI, in part, is assessing what it thinks we want to hear and tailoring answers to its perception of what our human wishes and purposes are.
'That would be disastrous in a jury room, to the extent that AI decides the juror is inclined to a conviction.
'You're going to get a skewed answer. To the extent AI perceives the jurors are inclined to acquittal or if AI develops its own will, which I read is at least possible, kind of developing a consciousness.
'Either way, you're getting a skew and neither skew is good, and you're substituting out what you hope is human judgment, a human assessment of what's happening in the courtroom, credibility determinations.'
Asked if he fears jurors are already using the tool, or soon will, Strang answered: 'Yes.
'Some never would, and I fear that some would be tempted to or use it as a sounding board, use it as an advisor.
'Even if the juror says I'm going to make the decision, but I'm getting advice from AI, that's going to be advice that's probably hard to ignore.'
Strang and Buting defended Avery in the 2015 documentary, which claimed Avery had been framed for the murder of Teresa Halbach.
Avery has been behind bars for the crime since 2007.
Before that, he spent 18 years in jail for a rape and attempted murder he was later found innocent of.
Strang and Buting continue to maintain his innocence, as does Avery himself.
Now, Buting fears even more alleged miscarriages of justice through the use of ChatGPT.
Buting, author of Illusion of Justice, told The Sun: 'I've seen people online take situations like the Steven Avery case or the Karen Read case, where a second trial is going on right now because the jury was hung, and try to use ChatGPT by putting in the facts as they know them.
'Then they ask ChatGPT whether the person is guilty or innocent and they come up with an answer.
'It's bulls*** basically, because it depends on what you put in as the facts.
What are the arguments against AI?
Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:
Loss of jobs - Some industry experts argue that AI will create new niches in the job market, and as some roles are eliminated, others will appear. However, many artists and writers counter that the issue is an ethical one, as generative AI tools are trained on their work and would not function otherwise.
Ethics - When AI is trained on a dataset, much of the content is taken from the internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.
Privacy - Content from personal social media accounts may be fed to language models to train them. Concerns have grown as Meta rolls out its AI assistants across platforms like Facebook and Instagram. There have been legal responses to this: in 2016, the EU adopted the General Data Protection Regulation to protect personal data, and similar laws are in the works in the United States.
Misinformation - As AI tools pull information from the internet, they may take things out of context or suffer hallucinations that produce nonsensical answers. Tools like Copilot on Bing and Google's generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects - such as AI giving out the wrong health information.
'This would be the problem with jurors doing it because jurors listen to the evidence.
'If they go back in there and they can't really resolve something themselves, well then that's probably reasonable doubt in most cases.
'But if they say, well, we can't resolve this, you think this and I think that and there's a dispute, let's put it into ChatGPT and see what it says, and then it comes up with an answer, then somebody may be swayed by that.
'AI, at least in its current iteration, has built in biases because of the algorithms.'
ChatGPT exploded onto the scene in 2022 and has since become an essential tool for individuals and businesses worldwide.
Buting said: 'We don't know at this point exactly how it's filtering, how it's learning.
'The idea is that it learns from more fact situations presented over and over, but what fact situations, what trials, for instance, is it looking at?
Steven Avery listens to testimony in the courtroom at the Calumet County Courthouse in 2007
Credit: AP:Associated Press
Jerry Buting argued to jurors that Steven Avery had been framed in Making a Murderer
Credit: NETFLIX
'Are there already ones where there has been a built-in bias? Because there is a lot of bias in America's legal system. Bias against minorities in particular.
'So are they kind of underrepresented in the algorithm, machine learning that is happening with AI? A lot of people wonder about that.
'I just know I've seen people use ChatGPT. I can use it and put in facts, and leave out facts that I want to, and it'll come up with an answer probably that I want.
'So I think there's a real problem.'
When The Sun asked ChatGPT if Avery was guilty, the response was: 'Legally: Steven Avery is guilty, he was convicted and remains in prison.
'Public opinion and expert debate: Divided. Many believe he may have been wrongfully convicted again, especially given the suspicious handling of evidence.
'No definitive proof of innocence or of a frame-up has yet convinced the courts.'
What does the law say?
UNDER UK law, judges are allowed to use ChatGPT to assist them when making rulings.
AI assistance was given the green light by a landmark ruling in 2023.
Guidelines from the Judicial Office sent to thousands of judges across England and Wales said that the tech can be useful for summarising large volumes of text or carrying out administrative tasks.
However, they caution that chatbots should not be relied upon for researching legal frameworks, because there is a risk they will fabricate cases or legal documents.
The guidelines also warn that generative AI technology could be used to create false evidence - such as deepfake pictures or videos.
The US is currently grappling with the integration of AI into its legal proceedings.
Spearheading progress is the Illinois Supreme Court, which issued guidance on the use of AI by judges and lawyers in December 2024.
The document encouraged the responsible and supervised use of AI, and suggested that the use of chatbots in drafting pleadings need not be declared.
Its tone is generally pro-AI, and it emphasises that existing legal and ethical guidance can be applied.
In early May 2025, a federal judicial panel advanced a proposal to seek the public's feedback on a draft rule designed to ensure AI-produced evidence meets the same standards as human evidence.
It comes as fears continue to be raised about AI's threat to jobs, and the dangers of the software outsmarting humans.
Many experts have also warned of the security dangers of advanced AI - and how cyber-crooks could abuse it.
The Sun has approached ChatGPT for a response.
Who is Steven Avery?
STEVEN Avery is serving a life sentence at Wisconsin's Waupun Correctional Institution.
He has been fighting for his freedom ever since he was found guilty of murder in 2007.
Avery argued that his conviction was based on planted evidence and false testimony.
In 1985, he was wrongly convicted of rape and attempted murder.
It took 18 years for that conviction to be overturned, and he was given a $36million (£28.2million) payout in compensation.
But days later, he was re-arrested for the murder of Teresa Halbach.
The 62-year-old continues to serve life in prison without the possibility of parole.
In the 2015 Netflix original series Making a Murderer, Avery documented his struggle for "justice."
In the last episode of the series, viewers were told that Avery had exhausted his appeals and was no longer entitled to state-appointed legal representation.
