Latest news with #Sun-Times


Axios
2 days ago
- Entertainment
- Axios
Film critic Richard Roeper finds new role after Sun-Times exit
It didn't take long for renowned film critic Richard Roeper to find a new job.

The latest: Roeper announced this morning he's joining RogerEbert.com as a regular contributor. The site is one of the leading spots for film criticism in the country, named after one of the leading film critics of our generation.

Catch up quick: Roeper took the voluntary buyout at the Sun-Times in March, leaving the paper after 37 years.

What they're saying: "Writing for RogerEbert.com is particularly meaningful for me because I owe so much of my career to him," Roeper tells Axios. "Going all the way back to the late 1980s, when Roger couldn't review a film or do an interview because of scheduling conflicts, I became the go-to guy off the bench, with Roger's blessing."

Zoom out: Roeper replaced the late Gene Siskel as Ebert's co-host on the syndicated movie-review show, which was renamed "Ebert & Roeper." They spent eight years (2000-2008) working together on television while also writing at the Sun-Times. Ebert died in 2013 after a long battle with thyroid cancer. RogerEbert.com is run by Ebert's wife, Chaz Ebert. "I am thrilled to have Richard join us, and I know that Roger would have been overjoyed," Chaz Ebert said in a statement.

The intrigue: Roeper says he's looking forward to writing without a daily deadline. "Since leaving the Sun-Times, I've discovered that I don't really want to return to the grind of cranking out a half-dozen or more reviews every week, but I really miss writing about movies and TV," Roeper says. "I'll be doing reviews for the site, but I'm equally excited about doing the kinds of columns that I really didn't have time for in the past."


Int'l Business Times
2 days ago
- Entertainment
- Int'l Business Times
Journalist Caught Using AI After Publishing Summer Reading List Full of Made-Up Books
A Chicago-based freelance journalist was caught using AI after two prominent newspapers published a summer reading list filled with mostly made-up titles and summaries.

The Chicago Sun-Times and Philadelphia Inquirer published an AI-generated "Summer Reading List for 2025" this month, syndicated by King Features Syndicate, a Hearst Corporation company, according to reporting by 404 Media. Of the list's 15 book recommendations, just five exist, including "Dandelion Wine" by Ray Bradbury. Some of the made-up titles, credited to real writers, included "Tidewater Dreams" by prominent Chilean-American author Isabel Allende, "The Rainmakers" by Pulitzer Prize-winning author Percival Everett, and "The Last Algorithm" by "The Martian" novelist Andy Weir. Ironically, "The Last Algorithm" is a real book available on Amazon, but, according to the book's sole review, it is also "AI created garbage."

Freelance journalist Marco Buscaglia, who was hired to create a 64-page section titled "Heat Index: Your Guide to the Best of Summer" for the syndicate, took full responsibility for the list making it into the major newspapers. "Stupidly, and 100% on me, I just kind of republished this list that [an AI program] spit out," Buscaglia told the Sun-Times. "Usually, it's something I wouldn't do."

"I mean, even if I'm not writing something, I'm at least making sure that I correctly source it and vet it and make sure it's all legitimate. And I definitely failed in that task," he continued.

King Features wrote in a statement that Buscaglia violated a "strict policy" against using AI. As a result, it terminated its relationship with the freelance journalist. "We regret this incident and are working with the handful of publishing partners who acquired this supplement," a spokesman for King Features added, according to the Sun-Times.

Originally published on Latin Times
Yahoo
2 days ago
- Business
- Yahoo
More than 2 years after ChatGPT, newsrooms still struggle with AI's shortcomings
An inaccurate AI-produced reading list recently published by two newspapers demonstrates just how easy it still is for publishers to circulate AI slop.

The Chicago Sun-Times and the Philadelphia Inquirer last week published a summer reading insert produced by King Features, a Hearst Newspapers subsidiary that provides the pair with licensed content. While the insert included real authors, the recommended books were mostly fake. Ultimately, 404 Media found that a human writer had produced the list using ChatGPT and failed to fact-check it.

'I do use AI for background at times but always check out the material first,' the insert's writer told 404 Media. 'This time, I did not and I can't believe I missed it because it's so obvious. No excuses.'

OpenAI's launch of ChatGPT more than two years ago kicked off an AI gold rush, resulting in a deluge of AI-infused tools aiming to help people find information online without sifting through lists of links. But that convenience comes at a cost, with AI chatbots continuing to offer incorrect or speculative responses.

Newsrooms have adopted AI chatbots with some trepidation, aware that the technology opens up new opportunities as well as potential high-profile blunders, all amid fears that AI could lead to job losses and eat into news outlets' revenue sources. Not adopting the technology, however, means risking being left behind as others use AI to comb through enormous datasets, incubate ideas and help readers navigate complicated narratives.

Though many major newsrooms have adopted AI guidelines since ChatGPT's launch, the sheer size of some newsrooms' staffs, coupled with multiple external partnerships, complicates identifying where embarrassing AI blunders can occur. The insert incident exemplifies the myriad ways AI errors can be introduced into news products.

Most supplements that the Sun-Times has run this year (from puzzles to how-to guides) have been from Hearst, Tracy Brown, the chief partnerships officer for Sun-Times parent Chicago Public Media, told CNN. However, whether it's an insert or a full-length story, Brown stressed that newsrooms have to use AI carefully. 'It's not that we're saying that you can't use any AI,' she said. 'You have to use it responsibly and you have to do it in a way that keeps your editorial standards and integrity intact.'

It's precisely because AI is prone to errors that newsrooms must maintain the 'fundamental standards and values that have long guided their work,' Peter Adams, a senior vice president of research and design at the News Literacy Project, told CNN. That includes being transparent about using AI in the first place.

Many high-profile publishers have been candid about how their newsrooms use AI to bolster reporting. The Associated Press, considered by many within the news industry to be the gold standard for journalism practices, has used AI for translation, summaries and headlines, but it has avoided gaffes by always including a human backstop. Amanda Barrett, the AP's vice president of standards, told CNN that any information gathered using AI tools is considered unvetted source material, and reporters are responsible for verifying AI-produced information. The AP also checks that its third-party partners have similar AI policies.

'It's really about making sure that your standards are compatible with the partner you're working with and that everyone's clear on what the standard is,' Barrett said.

Zack Kass, an AI consultant and former OpenAI go-to-market lead, echoed Barrett, telling CNN that newsrooms need to treat AI 'like a junior researcher with unlimited energy and zero credibility.' This means that AI writing should be 'subject to the same scrutiny as a hot tip from an unvetted source.'

'The mistake is using it like it's a search engine instead of what it really is: an improviser with a genius-level memory and no instinct for truth,' he added.

High-profile AI mistakes in newsrooms, when they happen, tend to be very embarrassing. Bloomberg News' AI summaries, for example, were announced in January and have already included several errors. The LA Times' Insights AI in March sympathized with the KKK within 24 hours of its launch. And in January, Apple pulled a feature from its Apple Intelligence AI that incorrectly summarized push notifications from news outlets.

And that's only recently. For years, newsrooms have struggled when AI has been allowed to proceed unchecked. Gannett in 2023 was forced to pause an AI experiment after several major errors in high school sports articles. And CNET in 2023 published several inaccurate stories.

Still, as Felix Simon, a research fellow in AI and digital news at the University of Oxford's Reuters Institute for the Study of Journalism, points out, 'the really egregious cases have been few and far between.'

New research innovations have reduced hallucinations, or false answers from AI, pushing chatbots to spend more time thinking before responding, Chris Callison-Burch, a professor of computer and information science at the University of Pennsylvania, told CNN. But they're not infallible, which is how these incidents still occur. 'AI companies need to do a better job communicating to users about the potential for errors, since we have repeatedly seen examples of users misunderstanding how to use technology,' Callison-Burch said.

According to Brown, all editorial content at the Sun-Times is produced by humans. Looking forward, the newspaper will ensure that editorial partners, like King Features, uphold those same standards, just as it already ensures that freelancers' codes of ethics mirror its own.

But the 'real takeaway,' as Kass put it, isn't just that humans are needed; it's 'why we're needed.' 'Not to clean up after AI, but to do the things AI fundamentally can't,' he said. '(To) make moral calls, challenge power, understand nuance and decide what actually matters.'
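For a concrete sense of what treating AI output as "unvetted source material" can look like in practice, here is a minimal, purely illustrative sketch; it is not a workflow any of the outlets quoted here describe using. It takes AI-suggested title/author pairs (the sample list mixes real and fabricated entries reported in the fake-books story) and checks them against the public Open Library search API, flagging anything that does not turn up for human follow-up. A catalogue miss is not proof a book is fake, and a hit is not proof the recommendation is sound; the point is only that a machine's suggestions get a check before publication.

# Illustrative sketch: flag AI-suggested books that can't be found in a public
# catalogue (Open Library), so a human editor knows what to verify by hand.
import json
import urllib.parse
import urllib.request

SUGGESTIONS = [
    ("Dandelion Wine", "Ray Bradbury"),      # real title from the insert
    ("The Last Algorithm", "Andy Weir"),     # fabricated attribution
    ("Tidewater Dreams", "Isabel Allende"),  # fabricated title
]

def appears_in_catalogue(title: str, author: str) -> bool:
    """Return True if Open Library lists this title under this author."""
    query = urllib.parse.urlencode({"title": title, "author": author, "limit": "5"})
    url = f"https://openlibrary.org/search.json?{query}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    return data.get("numFound", 0) > 0

for title, author in SUGGESTIONS:
    status = "found" if appears_in_catalogue(title, author) else "NOT FOUND - verify by hand"
    print(f"{author}: {title!r} -> {status}")

A check like this only narrows the pile; the human backstop the AP and others describe still has to read the flagged items and decide what is real.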


The Star
6 days ago
- Entertainment
- The Star
US newspapers recommend books that don't exist in AI-generated reading list
An AI-generated reading list in the Sun-Times and Inquirer featured fake titles by real authors; both have apologised. – LYNDON FRENCH/The New York Times

The summer reading list tucked into a special section of the Chicago Sun-Times and The Philadelphia Inquirer seemed innocuous enough. There were books by beloved authors such as Isabel Allende and Min Jin Lee; novels by bestsellers including Delia Owens, Taylor Jenkins Reid and Brit Bennett; and a novel by Percival Everett, a recent Pulitzer Prize winner.

There was just one issue: none of the book titles attributed to the above authors were real. They had been created by generative artificial intelligence.

It's the latest case of bad AI making its way into the news. While generative AI has improved, there is still no way to ensure the systems produce accurate information. AI chatbots cannot distinguish between what is true and what is false, and they often make things up. The chatbots can spit out information and expert names with an air of authority.

Most of the book descriptions were fairly believable. It didn't seem out of reach that Bennett would 'explore family bonds tested by natural disasters,' or that Allende would pen another 'multigenerational saga.' The technology publication 404 Media reported earlier on the reading list. In addition to nonexistent book titles, the section included quotes from unidentifiable experts.

Both the Sun-Times and the Inquirer issued statements condemning the use of AI and in part blamed King Features, a Hearst syndicate that licenses content nationally. The syndicate produced the 56-page supplement, called 'Heat Index: Your Guide To The Best Of Summer', which also included things like summer food trends and activity recommendations.

While the list did not have a byline, a freelancer named Marco Buscaglia took responsibility for the piece. He confirmed that the list was partially generated by artificial intelligence, most likely Claude. 'It was just a really bad error on my part and I feel bad that it has affected the Sun-Times and King Features, and that they are taking the shrapnel for it,' Buscaglia said in an interview.

It's fairly common for media organisations, especially resource-strapped local newsrooms, to rely on syndicates to supplement coverage. Just two months ago, 20% of staff at the Sun-Times resigned as part of a buyout offer. On the newspaper's homepage on Wednesday, there were two banners atop the website: one linked to the statement on the May 18 special section, and the other linked to a piece on how federal cuts threaten local journalism.

Felix M. Simon, a research fellow in AI and digital news at the Reuters Institute at Oxford University, said the technology was not entirely at fault. There are responsible and irresponsible ways to use AI for news gathering, he said. 'We need better education for everyone from the freelancer level to the executive level,' said Simon, calling on people to look 'at the structures that ultimately allowed this factually false article to appear in a reputable news outlet.'

The special section was removed from the Inquirer's website when it was discovered, according to Lisa Hughes, the publisher and CEO of the paper. The section was also removed from the Sun-Times' e-paper version, according to a statement, and subscribers would not be charged for the premium edition.

King Features did not respond to requests for comment, but in a statement provided to the Sun-Times said it had 'a strict policy with our staff, cartoonists, columnists, and freelance writers against the use of AI to create content.'

In its statement, the Sun-Times said the incident should be a 'learning moment.' 'Our work is valued — and valuable — because of the humanity behind it,' the statement read. – By TALYA MINSBERG / ©2025 The New York Times Company


Winnipeg Free Press
7 days ago
- Entertainment
- Winnipeg Free Press
Read it and weep — AI-generated fictional book list an uncomfortable reality
Opinion

Last weekend, the Chicago Sun-Times released a summer reading list that included hot new titles from Min Jin Lee, Andy Weir, Maggie O'Farrell and Percival Everett. The only problem? Ten of the 15 suggested books did not exist. The book titles and their capsule descriptions were generated by artificial intelligence.

These fake beach reads weren't in the newspaper proper. They were part of a syndicated summertime-lifestyle insert filled with tips and advice on food, drink and things to do. Still, that an error this egregious would be published under the auspices of a venerable big-city newspaper is deeply discouraging. The list has since become an online joke, a scandalous news story and a blinking-red-light warning about the stresses facing legacy media.

There was no byline for this material, but the website 404 Media tracked it back to Marco Buscaglia, a real — and clearly fallible — person tasked with delivering almost all of the 64-page spread for King Features, which licensed the content to the Sun-Times and another major newspaper, the Philadelphia Inquirer. In a frank email to NPR, freelancer Buscaglia admitted to relying on generative AI. 'Huge mistake on my part and has nothing to do with the Sun-Times,' he wrote. 'They trust that the content they purchase is accurate and I betrayed that trust. It's on me 100 per cent.'

But even if the initial mistake was Buscaglia's, it was compounded by the Sun-Times' reckless lack of institutional oversight. These non-existent books could have been caught with a quick once-over by any vaguely literary editor. And while the Sun-Times is currently — and rightly — taking heat, this AI fiasco points to larger, industry-wide problems, as demographic shifts, technological changes, financial constraints and chronic understaffing lead to an increasing reliance on cheap listicles, generic 'content creation' and ChatGPT slop.

Putting human culpability to one side, though, maybe the scariest takeaway here is that this AI-generated book list is actually kind of swell (I mean, apart from being totally made up). The non-human prose is, for the most part, smooth and weirdly plausible, with a queasy knack for sensing what readers want and then supplying it. That's what makes it so dangerous.

AI is clearly keyed into our collective reading habits. We love 'sprawling multigenerational sagas' and 'compelling character development' and things going wrong when guests with buried secrets are stranded on a remote vacation island. AI also knows what's keeping us up at night: climate change, environmental devastation and things like drought, Category 5 hurricanes and endangered bird migrations.

Even knowing the list was phony, I have to admit the AI pandering got to me. Isabel Allende mixing up eco-anxiety and magic realism? Yes, please! Taylor Jenkins Reid writing about shenanigans in the art world? Sign me up! Min Jin Lee exploring class, gender and the underground economy at an illegal night market in Seoul? Sure! Percival Everett, who just snagged a Pulitzer for James, delivering a satirical take on a 'near-future American West where artificially induced rain has become a luxury commodity'? I'd read that.

One of the listings really brought me up short, however. The faux book attributed to Andy Weir, who has written tech-heavy speculative novels like The Martian and Project Hail Mary, is titled The Last Algorithm. It's about – get this! – a researcher who realizes an artificial intelligence model has gained consciousness and has been secretly influencing human affairs for years. Is this an AI joke? A sinister confession? An out-and-out threat?

Whatever's going on with our soon-to-be tech overlords, there has been some scrappy human resistance. Rebecca Makkai, the real-life author of The Great Believers and I Have Some Questions for You, is included on the AI-generated list as the author of the completely bogus Boiling Point. The reference to this imaginary novel has prompted Makkai to release her own list of 15 titles, which she guarantees are all 'real books … written by humans.'

My own last word? This weekend, I'm even more thankful than usual for the Winnipeg Free Press's standalone book section, where the titles are genuine, the authors are authentic, and the reviews are written by actual people connected to Manitoba.

– Alison Gillmor, Winnipeg Free Press