
Latest news with #KingsCounsel

AI-generated errors set back this murder case in an Australian Supreme Court

Fast Company

3 days ago

  • Politics
  • Fast Company

A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and nonexistent case judgments generated by artificial intelligence. The blunder in the Supreme Court of Victoria state is another in a litany of mishaps AI has caused in justice systems around the world.

Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took 'full responsibility' for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday. 'We are deeply sorry and embarrassed for what occurred,' Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team.

The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment.

'At the risk of understatement, the manner in which these events have unfolded is unsatisfactory,' Elliott told lawyers on Thursday. 'The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice,' he added.

The fake submissions included fabricated quotes from a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court. The errors were discovered by Elliott's associates, who couldn't find the cases and requested that defense lawyers provide copies.

The lawyers admitted the citations 'do not exist' and that the submission contained 'fictitious quotes,' court documents say. They explained that they had checked the initial citations and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who didn't check their accuracy.

The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. 'It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified,' Elliott said. The court documents do not identify the generative artificial intelligence system used by the lawyers.

In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P. Kevin Castel said they acted in bad faith, but he credited their apologies and the remedial steps they took in explaining why harsher sanctions were not necessary to deter them or others from letting artificial intelligence tools prompt them to produce fake legal history in their arguments.

Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn't realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations.

British High Court Justice Victoria Sharp warned in June that providing false material as if it were genuine could be considered contempt of court or, in the 'most egregious cases,' perverting the course of justice, which carries a maximum sentence of life in prison.

Australia murder case court filings include fake quotes and nonexistent judgments generated by AI

CBS News

3 days ago

  • CBS News

A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and nonexistent case judgments generated by artificial intelligence. The blunder in the Supreme Court of Victoria state is another in a litany of mishaps AI has caused in justice systems around the world.

Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday. "We are deeply sorry and embarrassed for what occurred," Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team.

The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment.

"At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Elliott told lawyers on Thursday. "The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice," he added.

The fake submissions included fabricated quotes from a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court. The errors were discovered by Elliott's associates, who couldn't find the cases cited and requested that defense lawyers provide copies, the Australian Broadcasting Corporation reported.

The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes," court documents say. They explained that they had checked the initial citations and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who didn't check their accuracy.

The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Elliott said. The court documents do not identify the generative artificial intelligence system used by the lawyers.

In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P. Kevin Castel said they acted in bad faith, but he credited their apologies and the remedial steps they took in explaining why harsher sanctions were not necessary to deter them or others from letting artificial intelligence tools prompt them to produce fake legal history in their arguments.

Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn't realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations.

British High Court Justice Victoria Sharp warned in June that providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison.

The use of artificial intelligence is making its way into U.S. courtrooms in other ways. In April, a man named Jerome Dewald appeared before a New York court and submitted a video that featured an AI-generated avatar to deliver an argument on his behalf. In May, a man who was killed in a road rage incident in Arizona "spoke" during his killer's sentencing hearing after his family used artificial intelligence to create a video of him reading a victim impact statement.

Senior lawyer apologises after filing AI-generated submissions in Victorian murder case

ABC News

3 days ago

  • ABC News

A senior lawyer has apologised to a Victorian judge for filing submissions in a murder case that included fake quotes and non-existent case judgements generated by artificial intelligence (AI).

Defence lawyer Rishi Nathwani, who holds the title of King's Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday. "We are deeply sorry and embarrassed for what occurred," Mr Nathwani told Justice James Elliott on Wednesday, on behalf of the defence team.

The AI-generated errors caused a 24-hour delay in resolving a case that Justice Elliott had hoped to conclude on Wednesday. He ruled on Thursday that Mr Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment.

"At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Justice Elliott told lawyers on Thursday.

The fake submissions included fabricated quotes from a speech to the state legislature and non-existent case citations purportedly from the Supreme Court. The errors were discovered by Justice Elliott's associates, who couldn't find the cases and requested that defence lawyers provide copies.

The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes", court documents say. They explained that they had checked the initial citations and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who did not check their accuracy.

The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Justice Elliott said. The court documents do not identify the generative AI system used by the lawyers.

In a comparable case in the United States in 2023, a federal judge imposed $US5,000 ($7,600) fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim.

British High Court Justice Victoria Sharp warned in June that providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases", perverting the course of justice, which carries a maximum sentence of life in prison.

AP

Lawyer issues apology after using AI-generated fake quotes in murder case

The Independent

3 days ago

  • Politics
  • The Independent

A senior Australian lawyer has issued an apology to a judge after court submissions in a murder case were found to contain fake quotes and non-existent legal judgments generated by artificial intelligence. The error occurred in the Supreme Court of Victoria, adding to a growing list of AI-related blunders affecting justice systems globally.

Rishi Nathwani, a defence lawyer holding the prestigious title of King's Counsel, accepted "full responsibility" for the incorrect information submitted in the case involving a teenager charged with murder. Court documents, reviewed by The Associated Press on Friday, detailed the admission.

Addressing Justice James Elliott on Wednesday, Mr Nathwani stated on behalf of the defence team: "We are deeply sorry and embarrassed for what occurred."

The AI-generated errors caused a 24-hour delay in resolving a case that Justice Elliott had hoped to conclude on Wednesday. He ruled on Thursday that Mr Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment.

"At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Justice Elliott told lawyers on Thursday. "The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice," he added.

The fake submissions included fabricated quotes from a speech to the state legislature and non-existent case citations purportedly from the Supreme Court. The errors were discovered by Justice Elliott's associates, who couldn't find the cases and requested that defence lawyers provide copies.

The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes", court documents say. They explained that they had checked the initial citations and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who didn't check their accuracy.

The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Justice Elliott said. The court documents do not identify the generative artificial intelligence system used by the lawyers.

In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P. Kevin Castel said they acted in bad faith, but he credited their apologies and the remedial steps they took in explaining why harsher sanctions were not necessary to deter them or others from letting artificial intelligence tools prompt them to produce fake legal history in their arguments.

Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn't realise that the Google tool he was using for legal research was also capable of so-called AI hallucinations.

Australian lawyer apologizes for AI-generated errors in murder case

Yahoo

3 days ago

  • Politics
  • Yahoo

MELBOURNE, Australia (AP) — A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and non-existent case judgments generated by artificial intelligence. The blunder in the Supreme Court of Victoria state is another in a litany of mishaps AI has caused in justice systems around the world.

Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took 'full responsibility' for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday. 'We are deeply sorry and embarrassed for what occurred,' Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team.

The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment.

'At the risk of understatement, the manner in which these events have unfolded is unsatisfactory,' Elliott told lawyers on Thursday. 'The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice,' he added.

The fake submissions included fabricated quotes from a speech to the state legislature and non-existent case citations purportedly from the Supreme Court. The errors were discovered by Elliott's associates, who couldn't find the cases and requested that defense lawyers provide copies.

The lawyers admitted the citations 'do not exist' and that the submission contained 'fictitious quotes,' court documents say. They explained that they had checked the initial citations and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who didn't check their accuracy.

The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. 'It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified,' Elliott said. The court documents do not identify the generative artificial intelligence system used by the lawyers.

In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P. Kevin Castel said they acted in bad faith, but he credited their apologies and the remedial steps they took in explaining why harsher sanctions were not necessary to deter them or others from letting artificial intelligence tools prompt them to produce fake legal history in their arguments.

Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn't realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations.
