
How innocent people could be jailed because of sinister AI bot ChatGPT… and leave dangerous criminals walking free
JURORS could secretly be using ChatGPT to decide crunch verdicts, top US lawyers fear.
Dean Strang and Jerry Buting, who starred in Netflix documentary Making a Murderer, warned it could see innocent people convicted - or let criminals off the hook.
Top lawyers have warned how ChatGPT could be being used to decide verdicts
Credit: Reuters
Jerry Buting and Dean Strang, who were Steven Avery's defence lawyers
The lawyers, who defended Steven Avery on the hit show, insisted it is vital jurors are stopped from accessing the AI tool during trials.
Strang said jurors could be tempted to use ChatGPT as a 'sounding board' if they are uncertain about a defendant.
But he stressed it could have 'disastrous' consequences as jurors could get skewed answers that force a mistrial or bring them to the wrong decision.
Strang, who has worked in criminal defence for more than 30 years, told The Sun: 'Jurors should be banned from using ChatGPT.
'I can't prove this and I'm not a computer engineer, but my sense is that AI, in part, is assessing what it thinks we want to hear and tailoring answers to its perception of what our human wishes and purposes are.
'That would be disastrous in a jury room, to the extent that AI decides the juror is inclined to a conviction.
'You're going to get a skewed answer. To the extent AI perceives the jurors are inclined to acquittal or if AI develops its own will, which I read is at least possible, kind of developing a consciousness.
'Either way, you're getting a skew and neither skew is good, and you're substituting out what you hope is human judgment, a human assessment of what's happening in the courtroom, credibility determinations.'
Asked if he fears jurors are already using the tool, or soon will, Strang answered: 'Yes.
'Some never would, and I fear that some would be tempted to or use it as a sounding board, use it as an advisor.
'Even if the juror says I'm going to make the decision, but I'm getting advice from AI, that's going to be advice that's probably hard to ignore.'
Strang and Buting defended Avery in the 2015 documentary, which claimed Avery had been framed for the murder of Teresa Halbach.
Avery has been behind bars for the crime since 2007.
Before that, he spent 18 years in jail for a rape and attempted murder he was later found innocent of.
Strang and Buting continue to maintain his innocence, as does Avery himself.
Now, Buting fears even more alleged miscarriages of justice through the use of ChatGPT.
Buting, author of Illusion of Justice, told The Sun: 'I've seen people online take situations like the Steven Avery case or the Karen Read case, where a second trial is going on right now because the jury was hung, and try to use ChatGPT by putting in the facts as they know them.
'Then they ask ChatGPT whether the person is guilty or innocent and they come up with an answer.
'It's bulls*** basically, because it depends on what you put in as the facts.
What are the arguments against AI?
Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:
Loss of jobs - Some industry experts argue that AI will create new niches in the job market, and as some roles are eliminated, others will appear. However, many artists and writers insist the issue is an ethical one, as generative AI tools are trained on their work and would not function otherwise.
Ethics - When AI is trained on a dataset, much of the content is taken from the internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.
Privacy - Content from personal social media accounts may be fed to language models to train them. Concerns have cropped up as Meta unveils its AI assistants across platforms like Facebook and Instagram. There have been legal challenges to this: in 2016, legislation was created to protect personal data in the EU, and similar laws are in the works in the United States.
Misinformation - As AI tools pull information from the internet, they may take things out of context or suffer hallucinations that produce nonsensical answers. Tools like Copilot on Bing and Google's generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects - such as AI prescribing the wrong health information.
'This would be the problem with jurors doing it because jurors listen to the evidence.
'If they go back in there and they can't really resolve something themselves, well then that's probably reasonable doubt in most cases.
'But if they say, well, we can't resolve this, you think this and I think that and there's a dispute, let's put it into ChatGPT and see what it says, and then it comes up with an answer, then somebody may be swayed by that.
'AI, at least in its current iteration, has built in biases because of the algorithms.'
Buting previously told The Sun he fears AI could destroy the entire justice system by sending innocent people to jail with fake CCTV.
ChatGPT exploded onto the scene in 2022 and has since become an essential tool for individuals and businesses worldwide.
Buting said: 'We don't know at this point exactly how it's filtering, how it's learning.
'The idea is that it learns from more fact situations presented over and over, but what fact situations, what trials, for instance, is it looking at?
Steven Avery listens to testimony in the courtroom at the Calumet County Courthouse in 2007
Credit: AP:Associated Press
Jerry Buting argued to jurors that Steven Avery had been framed in Making a Murderer
Credit: NETFLIX
'Are there already ones where there has been a built-in bias? Because there is a lot of bias in America's legal system. Bias against minorities in particular.
'So are they kind of underrepresented in the algorithm, machine learning that is happening with AI? A lot of people wonder about that.
'I just know I've seen people use ChatGPT. I can use it and put in facts, and leave out facts that I want to, and it'll come up with an answer probably that I want.
'So I think there's a real problem.'
When The Sun asked ChatGPT if Avery was guilty, the response was: 'Legally: Steven Avery is guilty, he was convicted and remains in prison.
What does the law say?
UNDER UK law, judges are allowed to use ChatGPT to assist them when making rulings.
AI assistance was given the green light by a landmark ruling in 2023.
Guidelines from the Judicial Office sent to thousands of judges across England and Wales said that the tech can be useful for summarising large volumes of text or carrying out administrative tasks.
However, it cautions that chatbots should not be relied upon for researching legal frameworks, because there is a risk they will fabricate cases or legal documents.
The guidelines also warn that generative AI technology could be used to create false evidence - such as deepfake pictures or videos.
The US is currently grappling with the integration of AI in its legal proceedings.
Spearheading progress is the Illinois Supreme Court, which issued guidance on the use of AI by judges and lawyers in December 2024.
The document encouraged the responsible and supervised use of AI, and suggested that the use of chatbots in drafting pleadings need not be declared.
Its tone is generally pro-AI, and emphasises that existing legal and ethical guidance can be applied.
In early May 2025, a federal judicial panel advanced a proposal to seek the public's feedback on a draft rule designed to ensure AI-produced evidence meets the same standards as human evidence.
'Public opinion and expert debate: Divided. Many believe he may have been wrongfully convicted again, especially given the suspicious handling of evidence.
'No definitive proof of innocence or of a frame-up has yet convinced the courts.'
It comes as fears continue to be raised about its threat to jobs, and the dangers of the software outsmarting humans.
Many experts have also warned of the security dangers of advanced AI – and how cyber-crooks could abuse them to scam innocent victims.
The Sun has approached ChatGPT for a response.