
UK judge raises alarm after lawyers submit fake legal cases produced by AI tools
Lawyers have cited fake cases generated by artificial intelligence in court proceedings in England, a judge has said, warning that attorneys could be prosecuted if they don't check the accuracy of their research.
High Court justice Victoria Sharp said the misuse of AI has "serious implications for the administration of justice and public confidence in the justice system."
In the latest example of how judicial systems around the world are grappling with the increasing presence of artificial intelligence in court, Sharp and fellow judge Jeremy Johnson chastised lawyers in two recent cases in a ruling on Friday.
They were asked to rule after lower court judges raised concerns about "suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked," leading to false information being put before the court.
In a ruling written by Sharp, the judges said that in a 90 million pound ($120 million) lawsuit over an alleged breach of a financing agreement involving the Qatar National Bank, a lawyer cited 18 cases that did not exist.
The client in the case, Hamad Al-Haroun, apologised for unintentionally misleading the court with false information produced by publicly available AI tools, and said he was responsible, rather than his solicitor Abid Hussain.
But Sharp said it was "extraordinary that the lawyer was relying on the client for the accuracy of their legal research, rather than the other way around."
In the other incident, a lawyer cited five fake cases in a tenant's housing claim against the London Borough of Haringey. Barrister Sarah Forey denied using AI, but Sharp said she had "not provided to the court a coherent explanation for what happened."
The judges referred the lawyers in both cases to their professional regulators, but did not take more serious action.
Sharp said providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison.
She said in the judgment that AI is a "powerful technology" and a "useful tool" for the law.
"Artificial intelligence is a tool that carries with it risks as well as opportunities," the judge said. "Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained."
