AI could destroy entire justice system by sending innocent people to JAIL with fake CCTV, Making a Murderer lawyer warns


Scottish Sun, 27 April 2025

AI could wreak havoc in the justice system by sending innocent people to jail, a top lawyer has warned.
Jerry Buting, who defended Steven Avery in Netflix hit Making a Murderer, said video doctoring is becoming so sophisticated it is increasingly hard to spot.
Deepfake technology is advancing to clone the features of a person and map them onto something else. Stock picture
Credit: Alamy
Jerry Buting argued to jurors that Steven Avery had been framed in Netflix documentary Making a Murderer
Credit: NETFLIX
Avery remains in prison after being given a life sentence
Credit: Splash News
He believes advanced AI convincingly fabricating evidence could lead to innocent people being thrown behind bars.
Buting, author of Illusion of Justice, told The Sun: 'More and more people could get convicted.'
Deepfake technology is becoming worryingly advanced and increasingly difficult to regulate.
Experts have previously told The Sun that deepfakes are the "biggest evolving threat" when it comes to cybercrime.
Deepfakes are fraudulent videos that appear to show a person doing - and possibly saying - things they did not do.
Artificial intelligence-style software is used to clone the features of a person and map them onto something else.
It could see people accused of crimes they didn't commit in a chilling echo of BBC drama The Capture.
The show saw a former British soldier accused of kidnap and murder based on seemingly definitive CCTV footage which had actually been altered.
Buting said: "The tricky part is when AI gets to the point where you can doctor evidence without it being obvious, where you can alter videos.
"There are so many CCTV cameras in the UK, virtually every square foot is covered.
"But if that could be altered in some way so that it is designed to present something that's not true, it could be damaging to the defence or prosecution.
"Then what can we believe if we can't believe our own eyes?"
Buting, who defended Avery in his now infamous 2007 murder trial, said AI is now in a race with experts who are being trained to tell the difference.
But the US-based criminal defence lawyer claims that is no guarantee to stop sickos twisting the truth.
Buting claimed: "It may result in dismissals but I think it's more likely to result in wrongful convictions because law enforcement and the prosecution just have more resources.
"Nobody really knows how AI is going to impact the justice system.
"But there are also very skilled people who are trying to develop techniques of being able to tell when something has been altered, even at a sophisticated level.
"How AI actually affects the legal system is still very much up in the air."
Deepfakes – what are they, and how do they work?
Here's what you need to know...
Deepfakes are phoney videos of people that look perfectly real
They're made using computers to generate convincing representations of events that never happened
Often, this involves swapping the face of one person onto another, or making them say whatever you want
The process begins by feeding an AI hundreds or even thousands of photos of the victim
A machine learning algorithm swaps out certain parts frame-by-frame until it spits out a realistic, but fake, photo or video
In one famous deepfake clip, comedian Jordan Peele created a realistic video of Barack Obama in which the former President called Donald Trump a 'dipsh*t'
In another, the face of Will Smith is pasted onto the character of Neo in the action flick The Matrix. Smith famously turned down the role to star in flop movie Wild Wild West, while the Matrix role went to Keanu Reeves
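The box above describes the core mechanic: a model processes a video frame by frame, replacing one person's face region with another's. The toy sketch below illustrates only that frame-by-frame substitution idea, using a literal pixel copy on tiny made-up "frames" rather than a learned model; the function name and the fixed face box are purely illustrative, not part of any real deepfake tool.

```python
def swap_face_region(frame, donor, box):
    """Copy the donor's pixels into the target frame within `box`.

    A real deepfake model learns this mapping from thousands of photos;
    here the 'swap' is a literal pixel copy to show the per-frame idea.
    frame/donor: 2D lists of pixel values; box: (y0, y1, x0, x1).
    """
    y0, y1, x0, x1 = box
    out = [row[:] for row in frame]  # leave the original frame untouched
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = donor[y][x]
    return out

# Two tiny 8x8 greyscale "videos" of 3 frames each: one dark, one bright.
target_video = [[[0] * 8 for _ in range(8)] for _ in range(3)]
donor_video = [[[255] * 8 for _ in range(8)] for _ in range(3)]

# Process every frame, as a deepfake pipeline does, swapping a stand-in
# "detected face" region in each one.
face_box = (2, 6, 2, 6)
faked = [swap_face_region(f, d, face_box)
         for f, d in zip(target_video, donor_video)]
```

After the loop, pixels inside the box carry the donor's values in every frame while everything outside is unchanged, which is why a convincing result at real resolution is so hard to spot by eye.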
"If people are able to discover that evidence has been altered, let's say it's a situation where the defence has an expert who can look at the metadata and all the background, then that may very well result in a dismissal of the case, and should.
"Because the evidence was altered, its original destroyed, how can we believe anything anymore?"
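Buting's point about an expert examining "the metadata and all the background" can be sketched in miniature. The checks and field names below are hypothetical, not a real CCTV container schema, and genuine forensic tools inspect far more than this; the sketch only shows the kind of simple inconsistency test that such an expert might automate.

```python
def flag_metadata_inconsistencies(meta, expected_encoder):
    """Naive checks a defence expert might automate.

    meta: dict of metadata fields pulled from a video file; the
    field names here are illustrative, not a real CCTV schema.
    """
    flags = []
    # A CCTV camera writes with its own firmware encoder; a desktop
    # editing tool leaves a different signature behind.
    if meta.get("encoder") != expected_encoder:
        flags.append("encoder differs from the camera's known firmware")
    # A file touched after its recorded creation time may have been edited.
    if meta.get("modified_at", 0) > meta.get("created_at", 0):
        flags.append("file modified after original recording time")
    return flags

suspect = {
    "encoder": "desktop-editor/4.2",
    "created_at": 1700000000,
    "modified_at": 1700090000,
}
flags = flag_metadata_inconsistencies(suspect, expected_encoder="cctv-cam/1.0")
```

Here both checks fire, which in Buting's scenario is the kind of finding that could support a motion to dismiss.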
Former White House Information Officer Theresa Payton previously warned The Sun about the huge risks deepfakes pose to society.
She said: "This technology poses risks if misused by criminal syndicates or nation-state cyber operatives.
"Malicious applications include creating fake personas to spread misinformation, manipulate public opinion, and conduct sophisticated social engineering attacks."
In Black Mirror style, Payton warned malicious actors could exploit this technology to sow confusion and chaos by creating deepfakes of world leaders or famous faces - dead or alive.
Buting warned that although teams are being urgently equipped with skills to spot deepfakes, the pace at which the technology is advancing could soon become a real issue.
Who is Steven Avery?
STEVEN Avery is serving a life sentence at Wisconsin's Waupun Correctional Institution.
He and his nephew Brendan Dassey were convicted of the 2005 murder of Teresa Halbach.
He has been fighting for his freedom ever since he was found guilty of murder in 2007.
Avery argued that his conviction was based on planted evidence and false testimony.
In 1985, Avery was falsely convicted of sexually assaulting a young female jogger.
It took 18 years for his conviction to be overturned, and he later filed a $36million (£28.2million) lawsuit seeking compensation.
But he was then re-arrested for the murder of Teresa Halbach.
The 62-year-old continues to serve life in prison without the possibility of parole.
The 2015 Netflix original series Making a Murderer documented his struggle for "justice."
In the last episode of the series, viewers were told that Avery had exhausted his appeals and was no longer entitled to state-appointed legal representation.
He added: 'I do fear it could be an issue sooner rather than later.
"There has been a steady erosion in the defence in the UK, for example barristers make very little money, really, for what they have to do.
'There is a real imbalance. The whole idea of the adversarial system, which the UK employs as we do in the US, is that if you have two relatively skilled, equal parties on each side presenting their view of the evidence against the other, the truth will come out.
'Or that the jury will be able to discern the truth, or get close to it anyway, whatever justice might be.
'But to the extent that there is this big imbalance and the defence is unskilled or underpaid, then you tend to get lower quality or lower experienced attorneys.
'That's been going on for a long time, so then when you add something like AI to it, it's going to be even harder."
Buting became internationally renowned after appearing on the 2015 Netflix documentary series Making a Murderer.
He alleged Avery had been convicted of a murder he didn't commit, falling foul of a set-up.
But Avery, now 62, was found guilty and is serving a life sentence for the murder of Teresa Halbach in 2005.
