
Latest news with #Westlaw

Lawyers for a journalist accused of hacking Fox News blame AI for error-filled legal brief

Yahoo

20-05-2025

  • Politics
  • Yahoo

  • Timothy Burke is accused of grabbing unaired Fox News footage using someone else's credentials.
  • A judge scolded his lawyers for misrepresenting and making up legal precedents in an attempt to get the case thrown out.
  • A lawyer for Burke admitted to using ChatGPT and Westlaw's AI features without checking their output.

A lawyer for Timothy Burke, the journalist indicted over leaked Fox News footage, admitted in a court filing Monday that he used ChatGPT and other AI tools to write an error-filled legal brief.

Last week, Judge Kathryn Kimball Mizelle said a filing by Burke's lawyers contained "significant misrepresentations and misquotations" and demanded an explanation. On Monday, the lawyers, Michael Maddux and Mark Rasch, said the errors stemmed from Rasch's research and edits. The judge cited nine examples of "non-existent quotes and miscited propositions" that appeared to come from federal appellate rulings and a congressional committee report. She also said the brief contained six errors that may have been less egregious, as well as other "miscellaneous problems."

Rasch's process "included the use of Westlaw, Westlaw's AI features, Google, Google Scholar, as well as the 'deep research' feature of the Pro version of ChatGPT version 4.5," the filing said. The lawyers said Rasch used a feature on the legal research platform Westlaw called Quick Check to vet the brief, but did not run it again after accidentally adding unvetted sections from earlier drafts. Maddux, the lawyers added, was busy with another case.

Maddux, Rasch, and Burke didn't immediately respond to requests for comment. Neither OpenAI, the company behind ChatGPT, nor Thomson Reuters, which makes Westlaw, responded to requests for comment.

The proliferation of AI and the high cost of legal research have led to a number of attorneys being called to the mat by judges over errors in their legal arguments, often a result of generative AI systems' tendency to "hallucinate." Often the mistakes are made by solo practitioners or lawyers from small firms, though big firms have also been caught using AI. A Latham & Watkins attorney said the AI system Claude was to blame for giving the wrong name and authors for an article cited in an expert's report, though the content was otherwise correct. Last week, attorneys from the firms K&L Gates and Ellis George were ordered to pay $31,000 after their submissions were found to contain made-up citations.

Burke, a former Deadspin editor now working as a media consultant, faces charges of hacking into a streaming system used by broadcasters. The case has attracted attention from press freedom advocates, with his lawyers arguing that Burke committed no crime because the URLs he visited to download clips of Fox News footage were public. The footage, which included antisemitic remarks by the rapper Ye and behind-the-scenes comments by Tucker Carlson about sex, his "postmenopausal" viewers, and issues with the Fox Nation streaming service, never aired on the network. When the clips appeared online in 2022 and 2023, they aroused suspicions that a Fox employee had leaked them. In 2023, however, federal investigators zeroed in on Burke, who was indicted last year.

Read the original article on Business Insider

Daily Journal: Legal Experts Highlight Key IP Trends To Watch

Los Angeles Times

20-04-2025

  • Business
  • Los Angeles Times

Legal experts are watching significant copyright cases that could reshape intellectual property law in the coming year - with artificial intelligence (AI) playing a significant but not exclusive role. AI continues to raise novel IP issues, but open questions, such as the applicability of doctrines like fair use, show signs of reaching resolution in the year ahead.

According to Justin Hughes, a professor at Loyola Law School who teaches patent law and intellectual property, one anticipated trend in AI-related copyright litigation is that the sheer volume of new lawsuits from content creators and publishers against AI platforms is likely to slow. 'In copyright, litigation over AI training sets has been fast and furious for the past three years. I assume we will see a slowdown in the number of new cases filed,' Hughes stated in an email. 'More importantly, my guess is that we are going to see more of these disputes settle. With the growing number of business deals between major content owners and AI companies, it is getting harder for the AI companies to argue that it is fair use under 17 U.S.C. 107 to train on the content of everyone else with whom they don't do deals. The recent district court decision in Thomson Reuters v. Ross Intelligence - written by a Third Circuit judge sitting by designation - also doesn't help the AI companies. So, I expect to see more settlements.'

The precise impact of the Thomson Reuters case - in which a federal judge in Delaware ruled that an AI legal research company infringed on Thomson Reuters' intellectual property by improperly accessing and copying content from the Westlaw legal research platform - isn't clear. The court ruled against the defendant's attempt to invoke the fair use defense but limited its decision to non-generative AI. 'I will be surprised if that decision becomes a hugely significant decision just because it is so fact specific,' said Steven Stein, a partner with Greenberg Glusker Fields Claman & Machtinger LLP who focuses on media, intellectual property, entertainment and business disputes.

Stein thinks the New York Times et al. v. OpenAI case, ongoing in the Southern District of New York, will be a far more significant test of whether fair use is an adequate defense against allegations that AI companies improperly scrape data. The newspaper and other media organizations accuse OpenAI of improperly accessing their data to train its AI models. 'The New York Times case could definitely have broader implications for OpenAI and for AI generally,' Stein said. 'And I think if there is an adverse ruling against OpenAI, it's going to lead to and require AI companies to license content from companies like the New York Times to use in connection with their AI, even if it just involves training that AI.' Depending on the outcome, the case could 'also lead to legislation being passed or considered to try to address the ramifications of that decision, because it obviously would have a huge impact on this emerging industry,' Stein said.

Although there was no significant federal AI legislation in the pipeline, Stein said lawmakers would likely also be looking at a pending U.S. Copyright Office report on the same issue being litigated by The New York Times. The office has been conducting a three-part evaluation of AI and copyright issues, with two reports already released and a third, examining the legal implications of training AI models on copyrighted works, expected later this year.

Outside of AI, Stein said that a potential broadening of copyright terminations to overseas jurisdictions was an issue firmly on his radar for the year ahead. Termination provisions allow creators to reclaim ownership rights to their work during a five-year window that opens in the 35th year after the original licensing agreement took effect. Stein explained that this legal mechanism often protects authors who license content before its market value is established. 'Part of the issue has been that a lot of people think copyright termination is limited to the United States,' Stein explained. 'So, for example, if you have a copyright in the United States and you license it, and then you terminate the copyright, the idea has been, for most people, that the termination only applies within the United States.' This can create issues when it comes time for authors to try to 'renegotiate and get a better deal,' as they lack the international rights to the work, Stein said.

However, a recent ruling in Vetter v. Resnick, in the Middle District of Louisiana, significantly expanded copyright termination rights, establishing that the termination of rights for The Swingin' Medallions' 1966 song 'Double Shot (Of My Baby's Love)' applies both domestically and internationally. 'Many commentators believe that the court got it wrong, but that decision is being appealed to the 5th Circuit Court of Appeals, and it's very interesting to see what the 5th Circuit, and ultimately, maybe even the United States Supreme Court, does with that, because there's a lot of wheeling and dealing in Hollywood relating to IP and copyright termination notices, and if, in fact, the termination notice applies broadly, to rights internationally, that would have a major impact in the entertainment industry,' Stein said.

Copyright infringement as it pertains to the use of embedded videos on websites, and the potential for the issue to land before the U.S. Supreme Court, is another development catching Stein's attention. He explained that a recent New York federal court decision in Richardson v. Townsquare Media diverged from previous New York rulings by finding that embedding YouTube videos does not constitute copyright infringement. The court applied fair use principles for one video and determined that YouTube's terms of service granted an implicit license for the second. The ruling differed from the findings of other New York federal courts, which have held that embedding does infringe - a position that contrasts with the 9th Circuit's view that it does not. The Richardson decision potentially sets the stage for an eventual Supreme Court review of this contentious copyright issue, Stein said, if the 2nd Circuit took up the issue and ruled differently from the 9th Circuit. Stein said that 'it's certainly not imminent, but it's possible' that the issue could 'rise to the level of the Supreme Court in the not-too-distant future.'

Stein also said that music publishing companies are increasingly pursuing copyright infringement claims against organizations that use unlicensed music in social media content, with Sony recently filing a lawsuit against USC over music used in university-related social media posts. 'Out of all these music infringement cases that have been filed, there haven't really been rulings in them that address or speak to the issues implicated by these claims. Most of these cases are settled. So, if any of these cases are ever litigated, and there are many pending, including the one that was filed last week, it'd be very interesting to see what the judges do with them.'

Hughes said that he would be keeping an eye on whether the U.S. Supreme Court would take up the question of whether the 4th Circuit correctly upheld a jury verdict that Cox Communications willfully contributed to copyright infringement because it knew of its users' infringing activity and materially contributed to it. 'If the Supreme Court grants cert. in Cox Communications v. Sony Music Entertainment, we could get some reshaping of the contours of indirect liability in copyright law. Some folks think the Fourth Circuit erred in its contributory liability analysis, while others - myself included - think the appellate court created confusion, if not an outright circuit split, on vicarious liability,' he wrote.

He also anticipated that the United States Patent and Trademark Office (USPTO) would continue to crack down on attempts to monopolize slogans and single words through trademark registration. 'It's not a 'court' per se, but my guess is that the Trademark Trial and Appeal Board at USPTO will quietly continue to strengthen its 'failure to function' decisions as a way to deny trademark registration to slogans and single words that are being claimed as trademarks, but really are just efforts to monopolize control of words on t-shirts and other apparel - more power to them in this effort,' he wrote.

The Los Angeles/San Francisco Daily Journal is a publication for lawyers practicing in California, featuring updates on the courts, regulatory changes, the State Bar and the legal community at large.

Major Publishers From The Guardian To Condé Nast Sue AI Startup For Copyright Violation In Latest Salvo

Yahoo

13-02-2025

  • Business
  • Yahoo

Fourteen publishers have sued Canadian artificial intelligence firm Cohere for widespread unauthorized use of their content in developing and running its generative AI systems, alleging massive, systematic copyright and trademark infringement. It's the latest legal salvo in the battle between content providers and generative AI models that digest their text and spit it back to users, often word for word, including articles behind a paywall. The complaint, filed in the Southern District of New York, says Cohere has infringed on thousands of articles and seeks a permanent injunction, a jury trial and damages of up to $150,000 per work infringed.

'This is a lawsuit to protect journalism from systematic copyright and trademark infringement,' says the suit by Advance Local Media, Condé Nast, The Atlantic, Forbes Media, The Guardian, Business Insider, LA Times, McClatchy Media Company, Newsday, Plain Dealer Publishing Company, Politico, The Republican Company, Toronto Star Newspapers and Vox Media, all members of the trade association News/Media Alliance. 'Rather than create its own content, Cohere takes the creative output of Publishers, some of the largest, most enduring, and most important news, magazine, and digital publishers in the United States and around the world. Without permission or compensation, Cohere uses scraped copies of our articles … to power its artificial intelligence ('AI') service, which in turn competes with Publisher offerings and the emerging market for AI licensing.'

The burgeoning field of generative AI requires huge amounts of content to train its models, resulting in increasingly frequent litigation. The New York Times is suing ChatGPT parent OpenAI in a similar action. News Corp.'s Dow Jones, which owns The Wall Street Journal and New York Post, has sued Jeff Bezos-backed Perplexity AI. A handful of lawsuits have been filed over the past several years by creators ranging from novelist Michael Chabon to comedian Sarah Silverman, along with playwrights and others whose material has been used to train so-called large language models without permission or compensation. In one victory earlier this week, Thomson Reuters won the first big AI copyright case, stemming from a 2020 lawsuit against startup Ross Intelligence. A judge ruled the AI firm had infringed copyright law by reproducing material from the media giant's legal database Westlaw.

Cohere, the suit reads, 'freely admits that "AI is only as useful as the data it can access" … [but] fails to license the content it uses. Cohere takes Publishers' valuable articles, without authorization and without providing compensation. Cohere copies, uses, and disseminates Publishers' news and magazine articles to build and deliver a commercial service that mimics, undercuts, and competes with lawful sources for their articles and that displaces existing and emerging licensing markets.' Of Cohere's Command model, the suit adds: 'Command is incapable of performing its own original research. It invests no resources into news gathering in the field and has no writers, fact-checkers, or editors on staff.' On the strength of the content it steals, the suit says, Cohere charges for its product suite and actively courts customers.

The suit includes numerous screenshots of ripped-off articles, including an example of output that displays the notice 'This story is available exclusively to Business Insider subscribers. Become an Insider and start reading now' while, the suit says, 'providing the full article to any user who asks for it, whether they have a Business Insider subscription or not.'

As alarming are examples of 'hallucinations,' or references to articles that do not exist. 'Not content with just stealing our works, Cohere also blatantly manufactures fake pieces and attributes them to us, misleading the public and tarnishing our brands,' the suit says. It cites an article in The Guardian published on October 7, 2024, titled 'The pain will never leave: Nova massacre survivors return to site one year on.' When prompted for this piece, Cohere 'delivered a wildly inaccurate article that it represented was "published on June 29, 2022 in The Guardian by Luke Harding." Among other flaws, the Cohere article confused the October 7, 2023 massacre at The Nova Music Festival with a mass shooting that took place in Nova Scotia, Canada in 2020. Cohere also manufactured details about the Nova Scotia tragedy, attributing several quotes - including those gathered in The Guardian's reporting - to Tom Bagley, a man who was murdered in the 2020 shootings and thus could neither "return to the scene of the killings" nor offer quotes to a news outlet. Needless to say, this fictional article never appeared in The Guardian.'
