Latest news with #GoogleScholar
Yahoo
20-05-2025
- Politics
- Yahoo
Lawyers for a journalist accused of hacking Fox News blame AI for error-filled legal brief
- Timothy Burke is accused of grabbing unaired Fox News footage using someone else's credentials.
- A judge scolded his lawyers for misrepresenting and making up legal precedents in an attempt to get the case thrown out.
- A lawyer for Burke admitted to using ChatGPT and Westlaw's AI features without checking their output.

A lawyer for Timothy Burke, the journalist indicted over leaked Fox News footage, admitted in a court filing Monday that he used ChatGPT and other AI tools to write an error-filled legal brief. Last week, Judge Kathryn Kimball Mizelle said a filing by Burke's lawyers contained "significant misrepresentations and misquotations" and demanded an explanation. On Monday, the lawyers, Michael Maddux and Mark Rasch, said the errors stemmed from Rasch's research and edits.

The judge cited nine examples of "non-existent quotes and miscited propositions" that appeared to come from federal appellate rulings and a Congressional committee report. She also said the brief had six errors that may have been less egregious, as well as other "miscellaneous problems." Rasch's process "included the use of Westlaw, Westlaw's AI features, Google, Google Scholar, as well as the 'deep research' feature of the Pro version of ChatGPT version 4.5," the brief said. The lawyers said Rasch used a feature on the legal research platform Westlaw called Quick Check to vet the brief, but did not run it again after accidentally adding unvetted sections from previous drafts. Maddux, the lawyers added, was busy with another case.

Maddux, Rasch, and Burke didn't immediately respond to requests for comment. Neither OpenAI, the company behind ChatGPT, nor Thomson Reuters, which makes Westlaw, responded to requests for comment.

The proliferation of AI and the high cost of legal research have led to a number of attorneys being taken to task by judges over errors in their legal arguments, often a result of generative AI systems' tendency to "hallucinate." Often, the mistakes are made by solo practitioners or lawyers from small firms, though big firms have also been found using AI. A Latham & Watkins attorney said the AI system Claude was to blame for giving the wrong name and authors for an article cited in an expert's report, though the content was otherwise correct. Last week, attorneys from the firms K&L Gates and Ellis George were ordered to pay $31,000 after their submissions were found to contain made-up citations.

Burke, a former Deadspin editor now working as a media consultant, faces charges of hacking into a streaming system used by broadcasters. The case has attracted attention from press freedom advocates, with his lawyers arguing Burke committed no crime since the URLs he visited to download clips of Fox News footage were public. The footage, which included antisemitic remarks by the rapper Ye and behind-the-scenes comments by Tucker Carlson about sex, his "postmenopausal" viewers, and issues with the Fox Nation streaming service, was never aired on the network. When the clips appeared online in 2022 and 2023, they aroused suspicions that a Fox employee had leaked them. In 2023, however, federal investigators zeroed in on Burke, who was indicted last year.

Read the original article on Business Insider

Business Insider
20-05-2025
- Politics
- Business Insider
Lawyers for a journalist accused of hacking Fox News blame AI for error-filled legal brief
A lawyer for Timothy Burke, the journalist indicted over leaked Fox News footage, admitted in a court filing Monday that he used ChatGPT and other AI tools to write an error-filled legal brief. Last week, Judge Kathryn Kimball Mizelle said a filing by Burke's lawyers contained "significant misrepresentations and misquotations" and demanded an explanation. On Monday, the lawyers, Michael Maddux and Mark Rasch, said the errors stemmed from Rasch's research and edits.

The judge cited nine examples of "non-existent quotes and miscited propositions" that appeared to come from federal appellate rulings and a Congressional committee report. She also said the brief had six errors that may have been less egregious, as well as other "miscellaneous problems." Rasch's process "included the use of Westlaw, Westlaw's AI features, Google, Google Scholar, as well as the 'deep research' feature of the Pro version of ChatGPT version 4.5," the brief said. The lawyers said Rasch used a feature on the legal research platform Westlaw called Quick Check to vet the brief, but did not run it again after accidentally adding unvetted sections from previous drafts. Maddux, the lawyers added, was busy with another case.

Maddux, Rasch, and Burke didn't immediately respond to requests for comment. Neither OpenAI, the company behind ChatGPT, nor Thomson Reuters, which makes Westlaw, responded to requests for comment.

The proliferation of AI and the high cost of legal research have led to a number of attorneys being taken to task by judges over errors in their legal arguments, often a result of generative AI systems' tendency to "hallucinate." Often, the mistakes are made by solo practitioners or lawyers from small firms, though big firms have also been found using AI. A Latham & Watkins attorney said the AI system Claude was to blame for giving the wrong name and authors for an article cited in an expert's report, though the content was otherwise correct. Last week, attorneys from the firms K&L Gates and Ellis George were ordered to pay $31,000 after their submissions were found to contain made-up citations.

Burke, a former Deadspin editor now working as a media consultant, faces charges of hacking into a streaming system used by broadcasters. The case has attracted attention from press freedom advocates, with his lawyers arguing Burke committed no crime since the URLs he visited to download clips of Fox News footage were public. The footage, which included antisemitic remarks by the rapper Ye and behind-the-scenes comments by Tucker Carlson about sex, his "postmenopausal" viewers, and issues with the Fox Nation streaming service, was never aired on the network. When the clips appeared online in 2022 and 2023, they aroused suspicions that a Fox employee had leaked them. In 2023, however, federal investigators zeroed in on Burke, who was indicted last year.


The Hindu
22-04-2025
- Science
- The Hindu
Weird phrase plaguing scientific papers traced to glitch in AI data
Earlier this year, scientists discovered a peculiar term appearing in published papers: 'vegetative electron microscopy'. This phrase, which sounds technical but is actually nonsense, has become a 'digital fossil' – an error preserved and reinforced in artificial intelligence (AI) systems that is nearly impossible to remove from our knowledge repositories. Like biological fossils trapped in rock, these digital artefacts may become permanent fixtures in our information ecosystem. The case of 'vegetative electron microscopy' offers a troubling glimpse into how AI systems can perpetuate and amplify errors throughout our collective knowledge.

Bad scan, error in translation

'Vegetative electron microscopy' appears to have originated through a remarkable coincidence of unrelated errors. First, two papers from the 1950s, published in the journal Bacteriological Reviews, were scanned and digitised. However, the digitising process erroneously combined 'vegetative' from one column of text with 'electron' from another. As a result, the phantom term was created.

Decades later, 'vegetative electron microscopy' turned up in some Iranian scientific papers. In 2017 and 2019, two papers used the term in English captions and abstracts. This appears to be due to a translation error. In Farsi, the words for 'vegetative' and 'scanning' differ by only a single dot.

An error on the rise

The upshot? As of today, 'vegetative electron microscopy' appears in 22 papers, according to Google Scholar. One was the subject of a contested retraction from a Springer Nature journal, and Elsevier issued a correction for another. The term also appears in news articles discussing subsequent integrity investigations.

'Vegetative electron microscopy' began to appear more frequently in the 2020s. To find out why, we had to peer inside modern AI models – and do some archaeological digging through the vast layers of data they were trained on. The large language models behind modern AI chatbots such as ChatGPT are 'trained' on huge amounts of text to predict the likely next word in a sequence. The exact contents of a model's training data are often a closely guarded secret.

To test whether a model 'knew' about 'vegetative electron microscopy', we input snippets of the original papers to find out if the model would complete them with the nonsense term or more sensible alternatives.

The results were revealing. OpenAI's GPT-3 consistently completed phrases with 'vegetative electron microscopy'. Earlier models such as GPT-2 and BERT did not. This pattern helped us isolate when and where the contamination occurred. We also found the error persists in later models including GPT-4o and Anthropic's Claude 3.5. This suggests the nonsense term may now be permanently embedded in AI knowledge bases.

By comparing what we know about the training datasets of different models, we identified the CommonCrawl dataset of scraped internet pages as the most likely vector where AI models first learned this term.
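The probing approach described above can be illustrated with a short sketch. This is not the researchers' actual code: it uses the openly available GPT-2 model via the Hugging Face transformers library (the study also probed proprietary models), and the prompt and candidate continuations below are invented for illustration. The idea is simply to compare how much probability a model assigns to the phantom term versus a sensible alternative as the continuation of a paper-like snippet.

```python
# Illustrative sketch only, not the study's code: compare how likely a language
# model rates the phantom term versus a sensible alternative as a continuation.
# GPT-2 and the prompt text are stand-ins chosen for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The morphology of the samples was characterised by"   # invented snippet
candidates = [" vegetative electron microscopy",                 # the phantom term
              " scanning electron microscopy"]                   # a sensible alternative

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Sum of log-probabilities the model assigns to the continuation's tokens."""
    # Assumes the prompt tokenizes to a prefix of prompt + continuation,
    # which holds here because the continuation starts at a space boundary.
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)        # scores for each next token
    # Positions prompt_len .. end of full_ids hold the continuation's tokens.
    return sum(log_probs[i, full_ids[0, i + 1]].item()
               for i in range(prompt_len - 1, full_ids.shape[1] - 1))

for cand in candidates:
    print(f"{cand.strip()!r}: log P = {continuation_logprob(prompt, cand):.2f}")
```

On this sketch's terms, a model rating the phantom term as likely as (or likelier than) the real one would be the kind of signal the authors report seeing in GPT-3 but not in earlier models such as GPT-2 or BERT.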
The scale problem

Finding errors of this sort is not easy. Fixing them may be almost impossible. One reason is scale. The CommonCrawl dataset, for example, is millions of gigabytes in size. For most researchers outside large tech companies, the computing resources required to work at this scale are inaccessible.

Another reason is a lack of transparency in commercial AI models. OpenAI and many other developers refuse to provide precise details about the training data for their models. Research efforts to reverse engineer some of these datasets have also been stymied by copyright takedowns.

When errors are found, there is no easy fix. Simple keyword filtering could deal with specific terms such as 'vegetative electron microscopy'. However, it would also eliminate legitimate references (such as this article). More fundamentally, the case raises an unsettling question: how many other nonsensical terms exist in AI systems, waiting to be discovered?

Implications for science and publishing

This 'digital fossil' also raises important questions about knowledge integrity as AI-assisted research and writing become more common. Publishers have responded inconsistently when notified of papers including 'vegetative electron microscopy'. Some have retracted affected papers, while others defended them. Elsevier notably attempted to justify the term's validity before eventually issuing a correction.

We do not yet know whether other such quirks plague large language models, but it is highly likely. Either way, the use of AI systems has already created problems for the peer-review process. For instance, observers have noted the rise of 'tortured phrases' used to evade automated integrity software, such as 'counterfeit consciousness' instead of 'artificial intelligence'. Additionally, phrases such as 'I am an AI language model' have been found in other retracted papers.

Some automatic screening tools, such as the Problematic Paper Screener, now flag 'vegetative electron microscopy' as a warning sign of possible AI-generated content. However, such approaches can only address known errors, not undiscovered ones.

Living with digital fossils

The rise of AI creates opportunities for errors to become permanently embedded in our knowledge systems, through processes no single actor controls. This presents challenges for tech companies, researchers, and publishers alike. Tech companies must be more transparent about training data and methods. Researchers must find new ways to evaluate information in the face of convincing AI-generated nonsense. Scientific publishers must improve their peer-review processes to spot both human and AI-generated errors.

Digital fossils reveal not just the technical challenge of monitoring massive datasets, but the fundamental challenge of maintaining reliable knowledge in systems where errors can become self-perpetuating.

Aaron J. Snoswell is research fellow in AI accountability; Kevin Witzenberger is research fellow, GenAI Lab; and Rayane El Masri is a PhD candidate, GenAI Lab – all at Queensland University of Technology. This article is republished from The Conversation.


India.com
21-04-2025
- Business
- India.com
Revolutionizing Industrial Engineering: The Impact of Vijay Gurav's Work
Vijay Gurav (File)

In recent years, Vijay Gurav has emerged as a leading figure in industrial engineering, a field undergoing transformation on several fronts. Drawing on a solid background in assembly line design, production optimization, and advanced manufacturing systems, he has developed methodologies aimed at improving efficiency, reducing waste, and raising productivity. His leading-edge research in computer vision-based motion-time study has changed how manufacturing plants analyze worker efficiency, raising the bar for performance measurement and process improvement.

Gurav's research record is complemented by extensive practical application in industries drawing on his expertise in mixed-model assembly lines, factory floor space optimization, and lean principles. His book Modern Industrial Engineering and Factory Assembly Line Systems has become a key reference for practitioners worldwide, offering insight into productivity, cost control, and methods for AI-enabled manufacturing. His papers indexed on Google Scholar, covering quality assurance with deep learning and optimization algorithms for production scheduling, have offered solutions to challenging manufacturing problems.

Beyond research and publications, Gurav has contributed new tools that raise the standard of industrial engineering practice. Through the apps he has developed for the Apple platform, Time Study Engineer and Root Cause Analysis, professionals can perform accurate time studies and engage in data-driven problem-solving using 5-Why and Ishikawa diagram methods. These advances have bridged the gap between traditional industrial engineering practice and modern AI-aided decision-making.

These contributions have helped drive manufacturing toward becoming smarter, more efficient, and more capable of meeting the demands of modern economies. A visionary relentless in pursuing optimization, Gurav ranks among the foremost industrial engineers in the world, inspiring future generations of engineers in the innovation space.

Gurav's vision of industrial engineering extends beyond conventional manufacturing, applying artificial intelligence, optimization algorithms, and data-driven decision-making to revamp modern factories. His innovations in production scheduling and assembly line design have helped manufacturers cut cycle times, balance workloads, and optimize resources. By applying metaheuristic optimization algorithms to complex scheduling problems in FRP product manufacturing, he has helped the industry meet demand accurately at low cost.

His contribution goes beyond research and applications; his leadership in workforce productivity has fundamentally altered how factories measure and enhance labor efficiency. The Time Study Engineer app enables industrial engineers to maintain accurate digital records of time and motion studies, promoting more credible labor standards and productivity benchmarks. The Root Cause Analysis app, which incorporates systematic problem-solving techniques such as Ishikawa diagrams and the 5-Why method, has become an essential tool in continuous improvement initiatives at manufacturing plants worldwide.
Having devoted his career to bridging industrial engineering, AI, and optimization, Vijay Gurav has positioned himself as a thought leader guiding the next industrial revolution. Industrial engineering has become increasingly data-driven, with AI technologies used to maximize efficiency and minimize costs, radically changing how factories around the globe are run.