
Latest news with #ChicagoSunTimes

Infamous summer reading list shows the perils of AI beyond just energy use: 'Completely embarrassed'

Yahoo

14 hours ago

  • Business
  • Yahoo

Infamous summer reading list shows the perils of AI beyond just energy use: 'Completely embarrassed'

A major newspaper in the United States has rightly come under fire after the discovery of a lack of oversight that led to the publication of false information. As detailed by The Verge, the May 18 issue of the Chicago Sun-Times featured a summer reading guide with recommendations for fake books generated by artificial intelligence. To make matters even more concerning, other articles were found to include quotes and citations from people who don't appear to exist.

The summer reading list included fake titles by real authors alongside actual books. The Sun-Times admitted in a post on Bluesky that the guide was "not editorial content and was not created by, or approved by, the Sun-Times newsroom," and added that it was "looking into how this made it into print." In a statement later published on the newspaper's website, the Sun-Times revealed that the guide was "licensed from a national content partner" and said it was removing the section from all digital editions while updating its policies on publishing third-party content to ensure future mistakes like this are avoided.

According to The Verge, the reading list was published without a byline, but a writer named Marco Buscaglia was credited for other pieces in the summer guide. Buscaglia was found to have written other pieces that quote and cite sources and experts that do not appear to be real. Buscaglia admitted to 404 Media that he uses artificial intelligence "for background at times," but claimed he always checks the material. "This time, I did not, and I can't believe I missed it because it's so obvious. No excuses," Buscaglia told 404 Media. "On me 100 percent and I'm completely embarrassed."

This is yet another incident that highlights the importance of maintaining professional standards and ensuring that AI-generated content is properly vetted before publication. In an age where misinformation can spread quickly, it's up to leading news outlets like the Sun-Times to avoid these mistakes so they don't lose the trust of the general public.

On a broader level, AI is an energy-intensive field that carries significant environmental concerns. The International Energy Agency published a report warning that electricity consumption from data centers that power AI is expected to double by 2026 and will reach a level that is "roughly equivalent to the electricity consumption of Japan." It's important to stay informed on critical climate issues and efforts to reduce energy consumption amid the ongoing evolution of AI technology.

Is Flawed AI Distorting Executive Judgment? — What Leaders Must Do

Forbes

2 days ago

  • Business
  • Forbes

Is Flawed AI Distorting Executive Judgment? — What Leaders Must Do

As AI embeds deeper into leadership workflows, a subtle form of decision drift is taking hold. Not because the tools are flawed but because we stop questioning them. Their polish is seductive. Their speed, persuasive. But when language replaces thought, clarity no longer guarantees correctness.

In May 2025, the Chicago Sun-Times published an AI-generated summer reading list. The summaries were articulate. The titles sounded plausible. But only five of the fifteen books were real. The rest? Entirely made up: fictional authors, fabricated plots, polished prose built on nothing. It sounded smart. It wasn't.

That's the risk. Now imagine an executive team building its strategy on the same kind of output. It's not fiction anymore. It's a leadership risk. And it's happening already. Quietly. Perceptibly. In organizations where clarity once meant confidence and strategy was something you trusted. Not just in made-up book titles but in the growing gap between what sounds clear and what's actually correct.

Large language models aren't fact checkers. They're pattern matchers. They generate language based on probability, not precision. What sounds coherent may not be correct. The result is a stream of outputs that look strategic but rest on shaky ground.

This isn't a call to abandon AI. But it is a call to re-anchor how we use it. To ensure leaders stay accountable. To ensure AI stays a tool, not a crutch. I'm not saying AI shouldn't inform decisions. But it must be paired with human intuition, sense making and real dialogue. The more confident the language, the more likely it is to go unquestioned.

Model collapse is no longer theoretical. It's already happening. It begins when models are trained on outputs from other models or, worse, on their own recycled content. Over time, distortions multiply. Edge cases vanish. Rare insights decay. Feedback loops breed repetition. Sameness. False certainty.

As The Register warned, general purpose AI may already be declining in quality, not in tone but in substance. What remains looks fluent. But it says less.

That's just the mechanical part. The deeper concern is how this affects leaders. When models feed on synthetic data and leaders feed on those outputs, what you get isn't insight. It's reflection. Strategy becomes a mirror, not a map.

And we're not just talking bias or hallucinations. As copyright restrictions tighten and human-created content slows, the pool of original data shrinks. What's left is synthetic material recycled over and over. More polish. Less spark. According to researchers at Epoch, high-quality training data could be exhausted between 2026 and 2032. When that happens, models won't be learning from the best of what we know. They'll be learning from echoes.

Developers are trying to slow this collapse. Many already are, by protecting non-AI data sources, refining synthetic inputs and strengthening governance. But the impending collapse signals something deeper. A reminder that the future of intelligence must remain blended: human and machine, not machine alone. Intuitive, grounded and real.

Psychologists like Kahneman and Tversky warned us long ago about the framing trap: the way a question is asked shapes the answer. A 20 percent chance of failure feels different than an 80 percent chance of success, even if it's the same data. AI makes this trap faster and more dangerous. Because now, the frame itself is machine generated. A biased prompt. A skewed training set. A hallucinated answer. And suddenly, a strategy is shaped by a version of reality that never existed.

Ask AI to model a workforce reduction plan. If the prompt centers on financials, the reply may omit morale, long-term hiring costs or reputational damage. The numbers work. The human cost disappears.

AI doesn't interrupt. It doesn't question. It reflects. If a leader seeks validation, AI will offer it. The tone will align. The logic will sound smooth. But real insight rarely feels that easy. That's the risk — not that AI is wrong, but that it's too easily accepted as right.

When leaders stop questioning and teams stop challenging, AI becomes a mirror. It reinforces assumptions. It amplifies bias. It removes friction. That's how decision drift begins. Dialogue becomes output. Judgment becomes approval. Teams fall quiet. Cultures that once celebrated debate grow obedient. And something more vital begins to erode: intuition. The human instinct for context. The sense of timing. The inner voice that says something's off. It all gets buried beneath synthetic certainty.

To stop flawed decisions from quietly passing through AI-assisted workflows, every leader needs to pause and ask harder questions of the output in front of them. AI-generated content is already shaping board decks, culture statements and draft policies. In fast-paced settings, it's tempting to treat that output as good enough. But when persuasive language gets mistaken for sound judgment, it doesn't stay in draft mode. It becomes action. Garbage in. Polished out. Then passed as policy.

This isn't about intent. It's about erosion. Quiet erosion in systems that reward speed, efficiency and ease over thoughtfulness.

And then there's the flattery trap. Ask AI to summarize a plan or validate a strategy, and it often echoes the assumptions behind the prompt. The result? A flawed idea wrapped in confidence. No tension. No resistance. Just affirmation. That's how good decisions fail — quietly, smoothly and without a single raised hand in the room.

Leadership isn't about having all the answers. It's about staying close to what's real and creating space for others to do the same. The deeper risk of AI isn't just in false outputs. It's in the cultural drift that happens when human judgment fades. Questions stop. Dialogue thins. Dissent vanishes. Leaders must protect what AI can't replicate — the ability to sense what's missing. To hear what's not said. To pause before acting. To stay with complexity. AI can generate content. But it can't generate wisdom.

The solution isn't less AI. It's better leadership. Leaders who use AI not as final word but as provocateur. As friction. As spark. In fact, human-generated content will only grow in value. Craft will matter more than code. What we'll need most is original thought, deep conversation and meaning making — not regurgitated text that sounds sharp but says nothing new.

Because when it comes to decisions that shape people, culture and strategy, only human judgment connects the dots that data can't see. In the end, strategy isn't what you write. It's what you see. And to see clearly in the age of AI, you'll need more than a prompt. You'll need presence. You'll need discernment. Neither can be AI trained. Neither can be outsourced.

AI hallucinations? What could go wrong?

Japan Times

5 days ago

  • Business
  • Japan Times

AI hallucinations? What could go wrong?

Oops. Gotta revise my summer reading list. Those exciting offerings plucked from a special section of The Chicago Sun-Times newspaper and reported last week don't exist. The freelancer who created the list used generative artificial intelligence for help, and several of the books and many of the quotes that gushed about them were made up by the AI.

These are the most recent and high-profile AI hallucinations to make it into the news. We expect growing pains as new technology matures but, oddly and perhaps inextricably, that problem appears to be getting worse with AI. The notion that we can't ensure that AI will produce accurate information is, uh, 'disturbing' if we intend to integrate that product so deeply into our daily lives that we can't live without it. The truth might not set you free, but it seems like a prerequisite for getting through the day.

An AI hallucination is a phenomenon by which a large language model (LLM) such as a generative AI chatbot finds patterns or objects that simply don't exist and responds to queries with nonsensical or inaccurate answers. There are many explanations for these hallucinations — bad data, bad algorithms, training biases — but no one knows what produces a specific response. Given the spread of AI from search tools to the ever-more prominent role it takes in ordinary tasks (checking grammar or intellectual grunt work in some professions), that's not only troubling but dangerous. AI is being used in medical tests, legal writings and industrial maintenance, and failure in any of those applications could have nasty consequences.

We'd like to believe that eliminating such mistakes is part of the development of new technologies. When they examined the persistence of this problem, tech reporters from The New York Times noted that researchers and developers were saying several years ago that 'AI hallucinations would be solved. Instead, they're appearing more often and people are failing to catch them.' Tweaking models helped reduce hallucinations. But AI is now using 'new reasoning systems,' which means that it ponders questions for microseconds (or maybe seconds for hard questions) longer, and that seems to be creating more mistakes. In one test, hallucination rates for newer AI models reached 79%. While that is extreme, most systems hallucinated in double-digit percentages. More worryingly, because the systems are using so much data, there is little hope that human researchers can figure out what is going on and why.

The NYT cited Amr Awadallah, chief executive of Vectara, a startup that builds AI tools for businesses, who warned that 'Despite our best efforts, they will always hallucinate.' He concluded: 'That will never go away.' That was also the conclusion of a team of Chinese researchers who noted that 'hallucination represents an inherent trait of the GPT model' and 'completely eradicating hallucinations without compromising its high-quality performance is nearly impossible.' I wonder about the 'high quality' of that performance when the results are so unreliable.

Writing in the Harvard Business Review, professors Ian McCarthy, Timothy Hannigan and Andre Spicer last year warned of the 'epistemic risks of botshit,' the made-up, inaccurate and untruthful chatbot content that humans uncritically use for tasks. It's a quick step from botshit to bullshit. (I am not cursing for titillation but am instead referring to the linguistic analysis of philosopher Harry Frankfurt in his best-known work, 'On Bullshit.')

John Thornhill beat me to the punch last weekend in his Financial Times column by pointing out the troubling parallel between AI hallucinations and bullshit. Like a bullshitter, a bot doesn't care about the truth of its claims but wants only to convince the user that its answer is correct, regardless of the facts. Thornhill highlighted the work of Sandra Wachter and two colleagues from the Oxford Internet Institute who explained in a paper last year that 'LLMs are not designed to tell the truth in any overriding sense... truthfulness or factuality is only one performance measure among many others such as 'helpfulness, harmlessness, technical efficiency, profitability (and) customer adoption.' '

They warned that a belief that AI tells the truth, when combined with the tendency to attribute superior capabilities to technology, creates 'a new type of epistemic harm.' It isn't the obvious hallucinations we should be worrying about but the 'subtle inaccuracies, oversimplifications or biased responses that are passed off as truth in a confident tone — which can convince experts and nonexperts alike' — that pose the greatest risk. Comparing this output to Frankfurt's 'concept of bullshit,' they label this 'careless speech' and write that it 'causes unique long-term harms to science, education and society, which resists easy quantification, measurement and mitigation.'

While careless speech was the most sobering and subtle AI threat articulated in recent weeks, there were others. A safety test conducted by Anthropic, the developer of the LLM Claude, on its newest AI models revealed 'concerning behavior' in many dimensions. For example, the researchers discovered the AI 'sometimes attempting to find potentially legitimate justifications for requests with malicious intent.' In other words, the software tried to please users who wanted it to answer questions that would create dangers — such as creating weapons of mass destruction — even though it had been instructed not to do so.

The most amusing — in addition to scary — danger was the tendency of the AI 'to act inappropriately in service of goals related to self-preservation.' In plain speak, the AI blackmailed an engineer who was supposed to take the AI offline. In this case, the AI was given access to email that said it would be replaced by another version and email that suggested that the individual was having an extramarital affair. In 84% of cases, the AI said it would reveal the affair if the engineer went ahead with the replacement. (This was a simulation, so no actual affair or blackmail occurred.)

We'll be discovering more flaws and experiencing more frustration as AI matures. I doubt that those problems will slow its adoption, however. Mark Zuckerberg, CEO of Meta, anticipates far deeper integration of the technology into daily life, with people turning to AI for therapy, shopping and even casual conversation. He believes that AI can 'fill the gap' between the number of friendships many people have and the number they want. He's putting his money where his mouth is, having announced at the beginning of the year that Meta would invest as much as $65 billion this year to expand its AI infrastructure. That is a little over 10% of the estimated $500 billion spent on private AI investment in the U.S. between 2013 and 2024. Global spending last year is reckoned to have topped $100 billion.

Also last week, OpenAI CEO Sam Altman announced that he had purchased former Apple designer Jony Ive's company io in a bid to develop AI 'companions' that will re-create the digital landscape as the iPhone did when it was first released. They believe that AI requires a new interface and phones won't do the trick; indeed, the intent, reported the Wall Street Journal, is to wean users from screens. The product will fit inside a pocket and be fully aware of a user's surroundings and life. They plan to ship 100 million of the new devices 'faster than any company has ever shipped before.'

Call me old-fashioned, but I am having a hard time putting these pieces together. A hallucination might be just what I need to resolve my confusion.

Brad Glosserman is deputy director of and visiting professor at the Center for Rule-Making Strategies at Tama University as well as senior adviser (nonresident) at Pacific Forum. His new book on the geopolitics of high-tech is expected to come out from Hurst Publishers this fall.

Book community slams 'fake' list of summer reads as none of the books are real

Daily Mirror

7 days ago

  • Entertainment
  • Daily Mirror

Book community slams 'fake' list of summer reads as none of the books are real

Looking to sink your teeth into a great read for the summer? This summer reads list has irked the book community after it emerged that the novels it recommended were partly generated by AI.

Book fans are outraged after a US newspaper published a 2025 summer reading list full of books that no one can actually read. The problem? Almost all of the novels were AI-generated. The scandal began after the listicle was published by the Chicago Sun-Times on May 18 as an editorial insert titled The Heat Index. This included works by bestselling and award-winning authors, like The Seven Husbands of Evelyn Hugo author Taylor Jenkins Reid, Maggie O'Farrell, Min Jin Lee and 2025 Pulitzer Prize winner Percival Everett. However, book-lovers were quick to discover that there was something suspect about the novels. Namely, they didn't exist.

Though, perhaps the biggest scandal was how unimaginative the AI book titles were. According to the list, New York Times bestseller Brit Bennett had written 'Hurricane Season' (exploring 'family bonds tested by natural disasters') and Rebecca Makkai had published 'Boiling Point' (in which a climate activist is 'forced to reckon with her own environmental impact' after an argument with her teenage daughter).

Meanwhile, the list attributed 'The Last Algorithm' to Andy Weir, an American sci-fi author perhaps best known for The Martian. Ironically, the fake book's plot summary described 'a programmer who discovers that an AI-system has developed consciousness – only to discover it has secretly been influencing global events for years.'

Social media book fans were quick to point out the inaccuracies. 'Hey @chicagosuntimes - what in the AI wrote this is this??? I can assure you, Maggie O'Farrell did not write Migrations. And I don't have enough characters to point out all of the other inaccuracies. Do better. You should have paid someone to write this,' 'Booktuber' Tina Books wrote on BlueSky.

Others accused the writer of using ChatGPT – which is prone to making 'hallucinations' – to write the text. 'I went into my library's database of Chicago area newspapers to confirm this isn't fake, and it's not. Why the hell are you using ChatGPT to make up book titles? You used to have a books staff. Absolutely no fact checking?' Book Riot editor Kelly Jensen wrote on BlueSky.

To add even more confusion to the mix, some of the book titles included were actually real, like Call Me By Your Name by Andre Aciman and Atonement by Ian McEwan.

The writer of the list admitted to 404 Media that the article had been partly generated by AI. He said: "I do use AI for background at times but always check out the material first. This time, I did not and I can't believe I missed it, because it's so obvious. No excuses. On me 100 per cent and I'm completely embarrassed."

But how exactly did this make it into a news outlet? The vice-president of marketing at the Chicago Sun-Times, Victor Lim, later told 404 Media that the Heat Index section had been licensed from the company King Features – which is owned by the magazine giant Hearst. Lim said that no one from Chicago Public Media reviewed the section, as it came from a newspaper, so they 'falsely made the assumption' that there would be an editorial process already in place. He added that they would be updating this policy in future.

However, it's left many on social media feeling concerned about the use of AI in media. Reacting to the story, one TikTok user wrote: 'This is why AI cannot replace humans. You still need journalists, you still need actual book reviewers, and people who go to the theatre. AI is not meant to replace despite corporate greed.'

The union that represents editorial employees at the newspaper, The Sun-Times Guild, confirmed to CBC News that the summer guide was a syndicated section produced externally "without the knowledge of the members of our newsroom." They added: "We're deeply disturbed that AI-generated content was printed alongside our work. The fact that it was sixty-plus pages of this 'content' is very concerning — primarily for our relationship with our audience but also for our union's jurisdiction."

Chicago paper publishes AI-generated 'summer reading list' with books that don't exist

Fox News

22-05-2025

  • Entertainment
  • Fox News

Chicago paper publishes AI-generated 'summer reading list' with books that don't exist

The Chicago Sun-Times admitted on Tuesday that it published an AI-generated list of books that don't exist for its summer reading list. On Sunday, the publication released a special 64-page section titled "Heat Index: Your Guide to the Best of Summer," which featured a list of 15 recommended books for summer. However, on closer inspection, it was found that 10 of the 15 books on the list were not real.

One example included a book called "Nightshade Market" by Min Jin Lee, which was described as a "riveting tale set in Seoul's underground economy" that follows "three women whose paths intersect in an illegal night market," exploring "class, gender and the shadow economies beneath prosperous societies." Lee herself confirmed on her X account on Tuesday that the book was not real. "I have not written and will not be writing a novel called 'Nightshade Market.' Thank you," Lee wrote.

Chicago Public Media CEO Melissa Bell remarked on the situation in an article on Tuesday, revealing that the mistake came from a freelance writer at one of their partner companies, King Features. Bell acknowledged that the list was published without review from the editorial team. "We are in a moment of great transformation in journalism and technology, and at the same time, our industry continues to be besieged by business challenges," Bell wrote. "This should be a learning moment for all journalism organizations: Our work is valued — and valuable — because of the humanity behind it."

Moving forward, Bell announced that the paper will be reviewing its relationship with content partners like King Features, updating its policies for third-party content and explicitly identifying third-party content in its publications. The Chicago Sun-Times also removed the section from its e-paper version and confirmed that it would not charge subscribers who bought the premium edition. "We are committed to making sure this never happens again. We know that there is work to be done to provide more answers and transparency around the production and publication of this section, and will share additional updates in the coming days," Bell said.

In another report for the Chicago Sun-Times, the freelance writer was identified as Marco Buscaglia, who confirmed that he used AI for this and other stories without disclosing it to supervisors or fully vetting the results. King Features later added that it was "terminating" its relationship with Buscaglia, saying that he had violated its strict policy regarding the use of AI.

In an additional statement to Fox News Digital on Wednesday, Chicago Public Media marketing director Victor Lim said, "Regarding Chicago Public Media's usage of generative AI, we are committed to producing journalism that is accurate, ethical, and deeply human. While GAI may assist with certain tasks—like summarizing documents or analyzing data—our editorial content will always be created and shaped by journalists."
