Australian lawyer apologizes for AI-generated errors in murder case
A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and nonexistent case judgments generated by artificial intelligence. The blunder in the Supreme Court of Victoria is another in a litany of mishaps AI has caused in justice systems around the world.
Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took 'full responsibility' for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday.
'We are deeply sorry and embarrassed for what occurred,' Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team.
The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment.
'At the risk of understatement, the manner in which these events have unfolded is unsatisfactory,' Elliott told lawyers on Thursday.
'The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice,' Elliott added.
The fake submissions included fabricated quotes from a speech to the state legislature and non-existent case citations purportedly from the Supreme Court.
The errors were discovered by Elliott's associates, who couldn't find the cases and requested that defense lawyers provide copies.
The lawyers admitted the citations 'do not exist' and that the submission contained 'fictitious quotes,' court documents say.
The lawyers explained they checked that the initial citations were accurate and wrongly assumed the others would also be correct.
The submissions were also sent to prosecutor Daniel Porceddu, who didn't check their accuracy.
The judge noted that the Supreme Court had released guidelines last year on how lawyers use AI.
'It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified,' Elliott said.
The court documents do not identify the generative artificial intelligence system used by the lawyers.
In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim.
Judge P. Kevin Castel said the lawyers had acted in bad faith, but he credited their apologies and the remedial steps they took in explaining why harsher sanctions were not necessary to ensure that they, or others, would not again let artificial intelligence tools prompt them to produce fake legal history in their arguments.
Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn't realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations.
British High Court Justice Victoria Sharp warned in June that providing false material as if it were genuine could be considered contempt of court or, in the 'most egregious cases,' perverting the course of justice, which carries a maximum sentence of life in prison.

