
Latest news with #KingFeatures

Journalist Caught Using AI After Publishing Summer Reading List Full of Made Up Books

Int'l Business Times

5 days ago

  • Entertainment
  • Int'l Business Times

A Chicago-based freelance journalist was caught using AI after two prominent newspapers published a summer reading list filled with mostly made-up titles and summaries. The Chicago Sun-Times and Philadelphia Inquirer published an AI-generated "Summer Reading List for 2025" this month, syndicated by King Features Syndicate, a Hearst Corporation company, according to reporting by 404 Media.

Of the list's 15 book recommendations, just five exist, including "Dandelion Wine" by Ray Bradbury. Some of the made-up titles, credited to real writers, included "Tidewater Dreams" by prominent Chilean-American author Isabel Allende, "The Rainmakers" by Pulitzer Prize-winning author Percival Everett, and "The Last Algorithm" by "The Martian" novelist Andy Weir. Ironically, "The Last Algorithm" is a real book available on Amazon, but, according to the book's sole review, it is also "AI created garbage."

Freelance journalist Marco Buscaglia, who was hired to create a 64-page section titled "Heat Index: Your Guide to the Best of Summer" for the syndicate, took full responsibility for the list making it into the major newspapers. "Stupidly, and 100% on me, I just kind of republished this list that [an AI program] spit out," Buscaglia told the Sun-Times. "Usually, it's something I wouldn't do. I mean, even if I'm not writing something, I'm at least making sure that I correctly source it and vet it and make sure it's all legitimate. And I definitely failed in that task," he continued.

King Features said in a statement that Buscaglia violated a "strict policy" against using AI and that, as a result, it had terminated its relationship with the freelance journalist. "We regret this incident and are working with the handful of publishing partners who acquired this supplement," a spokesman for King Features added, according to the Sun-Times.

Originally published on Latin Times

Some newsrooms still struggle with the gap between capability and accountability where AI is concerned

CNN

5 days ago

  • Business
  • CNN

An inaccurate AI-produced reading list recently published by two newspapers demonstrates just how easy it still is for publishers to circulate AI slop. The Chicago Sun-Times and the Philadelphia Inquirer last week published a summer reading insert produced by King Features, a Hearst Newspapers subsidiary that provides the pair with licensed content. While the insert included real authors, the recommended books were mostly fake. Ultimately, 404 Media found that a human writer had produced the list using ChatGPT and failed to fact-check it.

'I do use AI for background at times but always check out the material first,' the insert's writer told 404 Media. 'This time, I did not and I can't believe I missed it because it's so obvious. No excuses.'

OpenAI's launch of ChatGPT more than two years ago kicked off an AI gold rush, resulting in a deluge of AI-infused tools aiming to help people find information online without sifting through lists of links. But that convenience comes at a cost, with AI chatbots continuing to offer incorrect or speculative responses.

Newsrooms have adopted AI chatbots with some trepidation, aware that the technology opens up new opportunities as well as the potential for high-profile blunders, all amid fears that AI could lead to job losses and eat into news outlets' revenue sources. Not adopting the technology, however, means risking being left behind as others use AI to comb through enormous datasets, incubate ideas and help readers navigate complicated narratives.

Though many major newsrooms have adopted AI guidelines since ChatGPT's launch, the sheer size of some newsrooms' staffs, coupled with multiple external partnerships, makes it hard to pinpoint where embarrassing AI blunders can slip in. The insert incident exemplifies the myriad ways AI errors can be introduced into news products. Most supplements that the Sun-Times has run this year, from puzzles to how-to guides, have come from Hearst, Tracy Brown, the chief partnerships officer for Sun-Times parent Chicago Public Media, told CNN.

Whether it's an insert or a full-length story, however, Brown stressed that newsrooms have to use AI carefully. 'It's not that we're saying that you can't use any AI,' she said. 'You have to use it responsibly and you have to do it in a way that keeps your editorial standards and integrity intact.'

It's precisely because AI is prone to errors that newsrooms must maintain the 'fundamental standards and values that have long guided their work,' Peter Adams, a senior vice president of research and design at the News Literacy Project, told CNN. That includes being transparent about using AI in the first place.

Many high-profile publishers have been candid about how their newsrooms use AI to bolster reporting. The Associated Press, considered by many within the news industry to be the gold standard for journalism practices, has used AI for translation, summaries and headlines while avoiding gaffes by always including a human backstop. Amanda Barrett, the AP's vice president of standards, told CNN that any information gathered using AI tools is considered unvetted source material, and reporters are responsible for verifying AI-produced information. The AP also checks that its third-party partners have similar AI policies.

'It's really about making sure that your standards are compatible with the partner you're working with and that everyone's clear on what the standard is,' Barrett said.

Zack Kass, an AI consultant and former OpenAI go-to-market lead, echoed Barrett, telling CNN that newsrooms need to treat AI 'like a junior researcher with unlimited energy and zero credibility.' That means AI writing should be 'subject to the same scrutiny as a hot tip from an unvetted source.' 'The mistake is using it like it's a search engine instead of what it really is: an improviser with a genius-level memory and no instinct for truth,' he added.

High-profile AI mistakes in newsrooms, when they happen, tend to be deeply embarrassing. Bloomberg News' AI summaries, for example, were announced in January and have already included several errors. The LA Times' Insights AI in March sympathized with the KKK within 24 hours of its launch. And in January, Apple pulled an Apple Intelligence feature that incorrectly summarized push notifications from news outlets.

And those are only the most recent examples. For years, newsrooms have struggled when AI has been allowed to proceed unchecked. Gannett in 2023 was forced to pause an AI experiment after several major errors in high school sports articles, and CNET the same year published several inaccurate stories. Still, as Felix Simon, a research fellow in AI and digital news at the University of Oxford's Reuters Institute for the Study of Journalism, points out, 'the really egregious cases have been few and far between.'

New research innovations have reduced hallucinations, or false answers from AI, by pushing chatbots to spend more time thinking before responding, Chris Callison-Burch, a professor of computer and information science at the University of Pennsylvania, told CNN. But the models are not infallible, which is why these incidents still occur. 'AI companies need to do a better job communicating to users about the potential for errors, since we have repeatedly seen examples of users misunderstanding how to use technology,' Callison-Burch said.

According to Brown, all editorial content at the Sun-Times is produced by humans. Looking forward, the newspaper will ensure that editorial partners, like King Features, uphold those same standards, just as it already ensures that freelancers' codes of ethics mirror its own.

But the 'real takeaway,' as Kass put it, isn't just that humans are needed; it's 'why we're needed.' 'Not to clean up after AI, but to do the things AI fundamentally can't,' he said. '(To) make moral calls, challenge power, understand nuance and decide what actually matters.'

Puzzle solutions for Sunday, May 25, 2025

Yahoo

25-05-2025

  • General
  • Yahoo

Note: Most subscribers have some, but not all, of the puzzles that correspond to the following set of solutions for their local newspaper. Play the USA TODAY Crossword Puzzle. Play the USA TODAY Sudoku Game.

Answer: HUMBLE BASKET POTENT DISOWN BANISH ENROLL
When Jones, Tork, Dolenz and Nesmith teamed up, people enjoyed their — 'MONKEE' BUSINESS (Distributed by Tribune Content Agency)

UPON LANDING, THE ALIEN HANDED US A REALLY BIG HUNK OF PRIME BEEF AND SAID, "TAKE MEAT TO YOUR LEADER." (Distributed by King Features)

FIG KIWI DATE PEAR GUAVA PEACH CHERRY AVOCADO (Distributed by Tribune Content Agency)

TOOTH, HATING, GLOATS, STOOL, LOANED (Distributed by Andrews McMeel)

ROSE BARBIE CARNATION PASTEL ORCHID SHOCKING SALMON (Distributed by Andrews McMeel)

Hot cross buns already! (Distributed by Creators Syndicate)

This article originally appeared on USA TODAY: Online Crossword & Sudoku Puzzle Answers for 05/25/2025 - USA TODAY

AI Missteps Erode Trust in Newsrooms

Arabian Post

25-05-2025

  • Business
  • Arabian Post

Major news organisations are grappling with the fallout from deploying artificial intelligence in content creation, as instances of fabricated material and misattributed authorship surface, raising concerns over journalistic integrity.

The Chicago Sun-Times and The Philadelphia Inquirer faced backlash after publishing a summer reading list featuring non-existent books and fictitious expert quotes. The content, syndicated by King Features and crafted by freelance writer Marco Buscaglia using AI tools, included fabricated titles like 'Tidewater Dreams' by Isabel Allende. Both newspapers have since removed the content and issued statements condemning the breach of editorial standards.

Similarly, Sports Illustrated encountered criticism for publishing articles under fake author names, with AI-generated headshots and biographies. The Arena Group, its publisher, attributed the content to third-party provider AdVon Commerce, asserting that the articles were human-written but acknowledging the use of pseudonyms. The controversy led to the dismissal of CEO Ross Levinsohn and mass layoffs, following the revocation of the magazine's publishing license.
