Slop the Presses

The Atlantic · May 20, 2025
At first glance, 'Heat Index' appears as inoffensive as newspaper features get. A 'summer guide' sprawling across more than 50 pages, the feature, which was syndicated over the past week in both the Chicago Sun-Times and The Philadelphia Inquirer, contains '303 Must-Dos, Must-Tastes, and Must-Tries' for the sweaty months ahead. Readers are advised in one section to 'Take a moonlight hike on a well-marked trail' and 'Fly a kite on a breezy afternoon.' In others, they receive tips about running a lemonade stand and enjoying 'unexpected frozen treats.'
Yet close readers of the guide noticed that something was very off. 'Heat Index' went viral earlier today when people on social media pointed out that its summer-reading guide matched real authors with books they haven't written, such as Nightshade Market, attributed to Min Jin Lee, and The Last Algorithm, attributed to Andy Weir—a hint that the story may have been composed by a chatbot. This turned out to be true. Slop has come for the regional newspapers.
Originally written for King Features, a division of Hearst, 'Heat Index' was printed as a kind of stand-alone magazine and inserted into the Sun-Times, the Inquirer, and possibly other newspapers, beefing the publications up without staff writers and photographers having to do additional work themselves. Although many of the elements of 'Heat Index' do not have an author's byline, some of them were written by a freelancer named Marco Buscaglia. When we reached out to him, he admitted to using ChatGPT for his work.
Buscaglia explained that he had asked the AI to help him come up with book recommendations. He hasn't shied away from using these tools for research: 'I just look for information,' he said. 'Say I'm doing a story, 10 great summer drinks for your barbecue or whatever. I'll find things online and say, hey, according to Oprah.com, a mai tai is a perfect drink. I'll source it; I'll say where it's from.' This time, at least, he did not actually check the chatbot's work. What's more, Buscaglia said that he submitted his first draft to King, which apparently accepted it without substantive changes and distributed it for syndication.
King Features did not respond to a request for comment. Buscaglia (who also admitted his AI use to 404 Media) seemed to be under the impression that the summer-reading article was the only one with problems, though this is not the case. For example, in a section on 'hammock hanging ethics,' Buscaglia quotes a 'Mark Ellison, resource management coordinator for Great Smoky Mountains National Park.' There is indeed a Mark Ellison who works in the Great Smoky Mountains region—not for the national park, but for a company he founded called Pinnacle Forest Therapy. Ellison told us via email that he'd previously written an article about hammocks for North Carolina's tourism board, offering that perhaps that is why his name was referenced in Buscaglia's chatbot search. But that was it: 'I have never worked for the park service. I never communicated with this person.' When we mentioned Ellison's comments, Buscaglia seemed taken aback by his own mistake. 'There was some majorly missed stuff by me,' he said. 'I don't know. I usually check the source. I thought I sourced it: He said this in this magazine or this website. But hearing that, it's like, Obviously he didn't.'
Another article in 'Heat Index' quotes a 'Dr. Catherine Furst,' purportedly a food anthropologist at Cornell University, who, according to a spokesperson for the school, does not actually work there. Such a person does not seem to exist at all.
To reach print, this material had to pass through a human writer, human editors at King, and human staffers at the Chicago Sun-Times and The Philadelphia Inquirer. No one stopped it. Victor Lim, a spokesperson for the Sun-Times, told us, 'This is licensed content that was not created by, or approved by, the Sun-Times newsroom, but it is unacceptable for any content we provide to our readers to be inaccurate.' A longer statement posted on the paper's website (and initially hidden behind a paywall) said in part, 'This should be a learning moment for all of journalism.' Lisa Hughes, the publisher and CEO of the Inquirer, told us the publication was aware the supplement contained 'apparently fabricated, outright false, or misleading' material. 'We do not know the extent of this but are taking it seriously and investigating,' she said via email. Hughes confirmed that the material was syndicated from King Features, and added, 'Using artificial intelligence to produce content, as was apparently the case with some of the Heat Index material, is a violation of our own internal policies and a serious breach.' (Although each publication blames King Features, both the Sun-Times and the Inquirer affixed their organization's logo to the front page of 'Heat Index'—suggesting ownership of the content to readers.)
There are layers to this story, each of them a depressing case study. The very existence of a package like 'Heat Index' is the result of a local-media industry that's been hollowed out by the internet, plummeting advertising, private-equity firms, and a lack of investment and interest in regional newspapers. In this precarious environment, thinned-out and underpaid editorial staff under constant threat of layoffs and with few resources are forced to cut corners for publishers who are frantically trying to turn a profit in a dying industry. It stands to reason that some of these harried staffers, and any freelancers they employ, now armed with automated tools such as generative AI, would use them to stay afloat.
Buscaglia said that he has sometimes seen rates as low as $15 for 500 words, and that he completes his freelance work late at night after finishing his day job, which involves editing and proofreading for AT&T. Thirty years ago, Buscaglia said, he was an editor at the Park Ridge Times Herald, a small weekly paper that was eventually rolled up into Pioneer Press, a division of the Tribune Publishing Company. 'I loved that job,' he said. 'I always thought I would retire in some little town—a campus town in Michigan or Wisconsin—and just be editor of their weekly paper. Now that doesn't seem that possible.' (A librarian at the Park Ridge Public Library accessed an archive for us and confirmed that Buscaglia had worked for the paper.)
On one level, 'Heat Index' is just a small failure of an ecosystem on life support. But it is also a template for a future that will be defined by the embrace of artificial intelligence across every industry—one where these tools promise to unleash human potential, but instead fuel a human-free race to the bottom. Any discussion about AI tends to be a perpetual, heady conversation around the ability of these tools to pass benchmark tests or whether they could possess something approximating human intelligence. Evangelists discuss their power as educational aids and productivity enhancers. In practice, the marketing language around these tools tends not to capture the ways that actual humans use them. A Nobel Prize–winning work driven by AI gets a lot of run, though the dirty secret of AI is that it is surely more often used to cut corners and produce lowest-common-denominator work.
Venture capitalists speak of a future in which AI agents will sort through the drudgery of daily busywork and free us up to live our best lives. Such a future could come to pass. The present, however, offers ample proof of a different kind of transformation, powered by laziness and greed. AI usage and adoption tends to find weaknesses inside systems and exploit them. In academia, generative AI has upended the traditional education model, based around reading, writing, and testing. Rather than offer a new way forward for a system in need of modernization, generative-AI tools have broken it apart, leaving teachers and students flummoxed, even depressed, and unsure of their own roles in a system that can be so easily automated.
AI-generated content is frequently referred to as slop because it is spammy and flavorless. Generative AI's output often becomes content in essays, emails, articles, and books much in the way that packing peanuts are content inside shipped packages. It's filler—digital lorem ipsum. The problem with slop is that, like water, it gets in everywhere and seeks the lowest level. Chatbots can assist with higher-level tasks like coding or scanning and analyzing a large corpus of spreadsheets, document archives, or other structured data. Such work marries human expertise with computational heft. But these more elegant examples seem exceedingly rare. In a recent article, Zach Seward, the editorial director of AI initiatives at The New York Times, said that, while the newspaper uses artificial intelligence to parse websites and datasets to assist with reporting, he views AI on its own as little more than a 'parlor trick,' mostly without value when not in the hands of already skilled reporters and programmers.
Speaking with Buscaglia, we could easily see how the 'Heat Index' mistake could become part of a pattern for journalists swimming against a current of synthetic slop, constantly produced content, and unrealistic demands from publishers. 'I feel like my role has sort of evolved. Like, if people want all this content, they know that I can't write 48 stories or whatever it's going to be,' he said. He talked about finding another job, perhaps as a 'shoe salesman.'
One worst-case scenario for AI looks a lot like the 'Heat Index' fiasco—the parlor tricks winning out. It is a future where, instead of an artificial-general-intelligence apocalypse, we get a far more mundane destruction. AI tools don't become intelligent, but simply good enough. They are not deployed by people trying to supplement or enrich their work and potential, but by those looking to automate it away entirely. You can see the contours of that future right now: in anecdotes about teachers using AI to grade papers written primarily by chatbots or in AI-generated newspaper inserts being sent to households that use them primarily as birdcage liners and kindling. Parlor tricks met with parlor tricks—robots talking to robots, writing synthetic words for audiences who will never read them.

Related Articles

Sam Altman hopes AGI will allow people to have more kids in the future

Business Insider · 2 hours ago

OpenAI CEO Sam Altman says having a kid has been "amazing" and thinks everyone else should have one, too. He also says AGI could maybe help with that. AGI, or artificial general intelligence, is a still theoretical version of AI that reasons as well as humans. Achieving AGI is the ultimate goal of many of the leading AI companies and is what's largely driving the AI talent wars.

Meanwhile, the world's population growth is slowing down. In the United States, Gen Z and millennials are delaying having children or not having children at all to focus on their financial stability. Some prominent futurists, including Altman, say that's a cause for concern. He said this trend is a "real problem" during an episode of "People by WTF" with Nikhil Kamath on Thursday.

Altman, who had his first child earlier this year, said he hopes that building families and creating community "will become far more important in a post-AGI world." He said he thinks this will be possible because AGI will allow for a world "where people have more abundance, more time, more resources, and potential, and ability." As AI progresses and becomes a more useful tool, he says society will grow richer and there will be more social support. "I think it's pretty clear that family and community are two of the things that make us the happiest, and I hope we will turn back to that," Altman said.

When Kamath asked about Altman's own experience with fatherhood, the CEO said he strongly recommends having children. "It felt like the most important and meaningful and fulfilling thing I could imagine doing," he said. Altman has described himself as "extremely kid-pilled" and said that in the first weeks of being a dad, he was "constantly" asking ChatGPT questions. Using AI is a skill that he says he plans to pass down to his children. "My kids will never be smarter than AI," Altman said on an episode of The OpenAI Podcast in June. "They will grow up vastly more capable than we grew up, and able to do things that we cannot imagine, and they'll be really good at using AI."

Altman isn't the only prominent CEO in the AI industry who's passionate about procreation. Elon Musk, the founder of Grok-maker xAI, among other companies, has fathered over 10 known children. Musk has said he's "doing his best to help the underpopulation crisis." "A collapsing birth rate is the biggest danger civilization faces by far," Musk said in an X post in 2022.

Here are the hottest jobs in tech, and the roles you should avoid

Business Insider · 3 hours ago

Tech job postings have plunged 35% since early 2020, with some roles seeing radically lower demand. The decline began post-pandemic and continued after ChatGPT's release in late 2022. AI and machine learning roles are in demand, while junior positions require more experience.

New tech hiring data from Indeed shows overall postings have plunged 35% from early 2020. Roughly half this decline happened during the post-pandemic sell-off, when tech companies realized they'd overhired. The other half happened after ChatGPT came out in late 2022.

"While there isn't a smoking gun linking AI to the plunge, automation trends could be a reason the weakness persists," said Brendon Bernard, a senior economist at Indeed. "We're also seeing potential AI effects show up in the types of tech jobs still in demand, and rising experience requirements among the opportunities that remain."

Demand for AI and machine learning roles is up, while listings for junior tech positions have dropped significantly. More job postings now require 5+ years of experience. Here's what's hot, and what's not, in tech jobs over the past five years or so: I often think that the most boring technology ends up being the most profitable. So it may not be surprising to see an SAP job at the top of the table on the right here.

A Word, Please: Debunking em dash myths

Los Angeles Times · 3 hours ago

For nearly 20 years, Grammar Girl Mignon Fogarty has been tackling grammar myths with the grace and patience this columnist can only aspire to. On her wildly popular podcast and blog, 'Grammar Girl's Quick and Dirty Tips for Better Writing,' she has addressed every entrenched grammar prejudice under the sun, from the myth that says you can't end a sentence with a preposition to the mistaken belief that an adverb between 'to' and an infinitive, as in 'to boldly go,' is a grammar mistake. And for lo these past two decades, as I've repeatedly stooped to petty grammar disputes, I've marveled at her natural magnanimity. So trust me when I tell you it takes a lot to make her say 'I'm so annoyed,' 'It's driving me crazy' and 'Please stop.'

She's talking about a language myth that won't die. Perhaps you've heard it: Dashes in a piece of writing prove, or at least make it likely, the piece was written by AI. Fogarty has heard this online a lot, mainly on social media. Based on her experience, she wasn't buying it, so she did some digging. Fogarty couldn't pinpoint exactly where the idea started, but she found the source that likely blasted this belief into the mainstream — a video for a popular podcast that talked about 'the ChatGPT hyphen,' calling it a 'longer hyphen,' and got about 2.5 million views.

Remember that Mark Twain quote 'A lie can travel halfway around the world while the truth is still putting on its shoes'? Well, that's a good example of how fast misinformation could spread before the internet because, apparently, that wasn't Twain. It was more likely Jonathan Swift. Whoever the source, we know that nonsense travels exponentially faster in the information age. And, as Fogarty explained, the business about dashes being a sign of AI writing is indeed nonsense. 'Em dashes' — the correct term for the punctuation marks setting off this clause — 'are not a sign of AI writing,' Fogarty said.

The very idea is kind of silly when you understand how AI writing tools learn. 'So why might AI use a lot of em dashes?' Fogarty wondered. 'And if it does, then how could seeing a lot of em dashes in someone's writing not be a sure sign they've used AI? Well, all the writing you see coming out of tools like ChatGPT are the way they are because they were trained on human writing. The only way em dashes would be in there is if people used them.' In other words, AI's writing habits are humans' writing habits — humans like Emily Dickinson, who famously used a lot of them. That's why em dashes are useless for identifying AI-written text.

So instead of trying to learn some secret for spotting AI writing, why not invest that energy in this quick refresher on em dashes? Em dashes, so called because they're about as wide as a lowercase letter m, connect one part of a sentence to another. They're not hyphens, which are about half that width and connect words to other words, prefixes or suffixes. The two main jobs of an em dash are to indicate an abrupt change in sentence structure or tone and to set off parenthetical information. An abrupt change in structure is — well, it's like this. Parenthetical info — examples, related thoughts, etc. — could also go into parentheses, but dashes keep them more prominent in a sentence. Technically, dashes are not for connecting clauses that could stand as their own sentences — this sentence should be broken into two instead of made one with a dash. But that's not so much a rule as an interpretation (mine). Some editing styles put a space on either side of an em dash. Others attach the dash directly to the surrounding words. Either way is correct.

If you like em dashes, go ahead and use them. I, personally, am a fan. But if you're evaluating someone else's writing, don't assume em dashes mean they cheated and got AI to do their work for them. You just can't know that from their dashes.
— June Casagrande is the author of 'The Joy of Syntax: A Simple Guide to All the Grammar You Know You Should Know.' She can be reached at JuneTCN@
