Latest news with #SunTimes


Russia Today
24-05-2025
- Russia Today
AI hallucinations: a budding sentience or a global embarrassment?
In a farcical yet telling blunder, multiple major newspapers, including the Chicago Sun-Times and Philadelphia Inquirer, recently published a summer-reading list riddled with nonexistent books 'hallucinated' by ChatGPT, many of them falsely attributed to real authors. The syndicated article, distributed by Hearst's King Features, peddled fabricated titles based on woke themes, exposing both the media's overreliance on cheap AI content and the incurable rot of legacy journalism. That this travesty slipped past editors at moribund outlets (the Sun-Times had just axed 20% of its staff) underscores a darker truth: when desperation and unprofessionalism meet unvetted algorithms, the frayed line between legacy media and nonsense simply vanishes. The trend seems ominous. AI is now overwhelmed by a smorgasbord of fake news, fake data, fake science and unmitigated mendacity that is churning established logic, facts and common sense into a putrid slush of cognitive rot.

But what exactly is AI hallucination? AI hallucination occurs when a generative AI model (like ChatGPT, DeepSeek, Gemini, or DALL·E) produces false, nonsensical, or fabricated information with high confidence. Unlike human errors, these mistakes stem from how AI models generate responses: by predicting plausible patterns rather than synthesizing established facts. There are several reasons why AI generates wholly incorrect information, and none of them has anything to do with the ongoing fearmongering over AI attaining sentience or even acquiring a soul.

- Training on imperfect data: AI learns from vast datasets replete with biases, errors, and inconsistencies. Prolonged training on such material may result in the regurgitation of myths, outdated facts, or conflicting sources.
- Over-optimization for plausibility: Contrary to what some experts claim, AI is nowhere near attaining 'sentience' and therefore cannot discern 'truth.' GPTs in particular are giant planetary-wide neural encyclopedias that crunch data and synthesize the most salient information based on pre-existing patterns. When gaps exist, the model fills them with statistically probable (but likely wrong) answers. This was, however, not the case with the Sun-Times fiasco.
- Lack of grounding in reality: Unlike humans, AI has no direct experience of the world. It cannot verify facts; it can only mimic language structures. For example, when asked 'What's the safest car in 2025?' it might invent a model that doesn't exist, because it is filling in the gap for an ideal car with desired features — as determined by the mass of 'experts' — rather than a real one.
- Prompt ambiguity: Many GPT users are lazy and may not know how to write a proper prompt. Vague or conflicting prompts increase hallucination risks. Ridiculous requests like 'Summarize a study about cats and gender theory' may result in an AI-fabricated fake study that appears very academic on the surface.
- Creative generation vs. factual recall: AI models like ChatGPT prioritize fluency over accuracy. When unsure, they improvise rather than admit ignorance. Ever come across a GPT answer that goes: 'Sorry, this is beyond the remit of my training'?
- Reinforcing fake news and patterns: GPTs can identify particular users based on logins (a no-brainer), IP addresses, and semantic, syntactic and personal propensities, and then reinforce them. When someone constantly uses GPTs to peddle fake news or propaganda puff pieces, the AI may recognize such patterns and proceed to generate content that is partially or wholly fictitious. This is a classic case of algorithmic supply and demand. Remember, GPTs not only train on vast datasets; they can also train on your dataset.
- Reinforcing Big Tech biases and censorship: Virtually every Big Tech firm behind GPT rollouts is also engaged in industrial-scale censorship and algorithmic shadowbanning. This applies to individuals and alternative media platforms alike and constitutes a modern-day, digitally-curated damnatio memoriae. Google's search engine, in particular, has a propensity for up-ranking the output of a serial plagiarist over the original article. The perpetuation of this systemic fraud may explode into an outright global scandal one day. Imagine waking up one morning to read that your favorite quotes or works were the products of a carefully-calibrated campaign of algorithmic shunting at the expense of the original ideators or authors. This is the inevitable consequence of monetizing censorship while outsourcing 'knowledge' to an AI hobbled by ideological parameters.
- Experiments on human gullibility: I recently raised the hypothetical possibility of AI being trained to study human gullibility, in a way conceptually similar to the Milgram Experiment, the Asch Conformity Experiments and their iteration, the Crutchfield Situation. Humans are both gullible and timorous, and the vast majority tend to conform to either the human mob or, in the case of AI, the 'data mob.'

This will inevitably have real-world consequences, as AI is increasingly embedded in critical, time-sensitive operations – from pilots' cockpits and nuclear plants to biowarfare labs and sprawling chemical facilities. Now imagine making a fateful decision in such high-stakes environments based on flawed AI input. This is precisely why 'future planners' must understand both the percentage and the personality types of qualified professionals who are prone to trusting faulty machine-generated recommendations.

When AI generates an article on one's behalf, any journalist worth his salt should treat it as having been written by another party and therefore subject to fact-checking and revision.
As long as the final product is fact-checked, and substantial value, content and revisions are added to the original draft, I don't see any conflict of interest or breach of ethics involved in the process. GPTs can act as a catalyst, an editor or a 'devil's advocate' to get the scribal ball rolling. What happened in this saga was that the writer, Marco Buscaglia, appeared to have wholly cut and pasted ChatGPT's opus and passed it off as his own. (Since this embarrassing episode was exposed, his website has gone blank and private.) The overload of woke-themed nonsense generated by ChatGPT should have raised red flags in Buscaglia's mind, but I am guessing that he might be prone to peddling this stuff himself.

However, all the opprobrium currently directed at Buscaglia should also be applied to the editors of King Features Syndicate and the various news outlets who didn't fact-check the content even as they posed as bastions of the truth, the whole truth and nothing but the truth. Various levels of gatekeepers simply failed to do their jobs. This is a collective dereliction of duty from a media that casually pimps its services to the high and mighty while pontificating about ethics, integrity and values to lesser mortals. I guess we are used to such double standards by now.

But here is the terrifying part: I am certain that faulty data and flawed inputs are already flowing from AI systems into trading and financial platforms, aviation controls, nuclear reactors, biowarfare labs, and sensitive chemical plants – even as I write this. The gatekeepers just aren't qualified for such complex tasks, except on paper, that is. These are the consequences of a world 'designed by clowns and supervised by monkeys.'

I will end on a note highlighting the irony of ironies: all the affected editors in this saga could have used ChatGPT to subject Buscaglia's article to a factual content check. It would have taken only 30 seconds!
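The 'statistically probable but likely wrong' gap-filling described above can be illustrated with a deliberately tiny sketch. The toy bigram model below (an assumption for illustration only, nothing like a real GPT in scale) learns only which word tends to follow which in a handful of book-title-like strings, then stitches together a fluent-looking title with no notion of whether any such book exists:

```python
import random

# Toy illustration: a bigram "language model" trained on a few
# invented title-like strings. It learns word-to-next-word statistics
# only, so it can emit a title that looks plausible but appears
# nowhere in its training data -- a miniature analogue of how
# next-token prediction produces confident fabrications.

corpus = [
    "the last summer of the sea",
    "the silent patient of the north",
    "dreams of the tidewater coast",
    "the last algorithm of dreams",
]

# Count which word follows which across the corpus.
transitions = {}
for title in corpus:
    words = title.split()
    for a, b in zip(words, words[1:]):
        transitions.setdefault(a, []).append(b)

def generate(start="the", max_words=6, seed=0):
    """Random walk over learned transitions: every step picks a
    statistically attested continuation, with no check against reality."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words - 1):
        choices = transitions.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

fabricated = generate()
print(fabricated)  # fluent-looking; may or may not match any training title
```

Every word transition in the output is individually well-attested, which is exactly why the whole is so convincing; fluency is the only thing the mechanism optimizes for.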


Times
22-05-2025
- Entertainment
- Times
Top US newspapers recommended 15 books. Only five of them were real
Readers of two of America's leading newspapers must have been puzzled when trying to track down the best summer reads recommended in the books section. A list run in the Chicago Sun-Times and The Philadelphia Inquirer advertised The Rainmakers by the Pulitzer prizewinner Percival Everett and Tidewater Dreams by the Chilean-American author Isabel Allende. Neither book actually exists; they were among ten invented for the list generated by artificial intelligence. The Sun-Times has apologised and said the content was provided by a syndicating service and was not checked before being published in the newspaper. Marco Buscaglia, the creator of the list, confirmed to the tech news website 404 Media that he had used AI. 'I do use AI for background at times but always check


National Post
22-05-2025
- Entertainment
- National Post
Newspapers' AI-generated summer reading list recommends nonexistent books
NEW YORK — The recommended reading list contained some works of fiction. It also contained some works that were, in fact, actually fictional.

The content distributor King Features says it has fired a writer who used artificial intelligence to produce a story on summer reading suggestions that contained books that didn't exist.

The list appeared in Heat Index: Your Guide to the Best of Summer, a special section distributed in Sunday's Chicago Sun-Times and The Philadelphia Inquirer last week.

More than half of the books listed were fake, according to the piece's author, Marco Buscaglia, who admitted to using AI for help in his research but didn't double-check what it produced. 'A really stupid error on my part,' Buscaglia wrote on his Facebook page.

'The Heat Index summer supplement was created by a freelance contract creator who used AI in its story development without disclosing the use of AI,' the syndicator King Features said in a statement, noting it has a strict policy against using AI to create material. Only the Sun-Times and Inquirer have used the supplement, the organization said.

Among the summer reading suggestions was The Last Algorithm by Andy Weir, described as 'a science-driven thriller following a programmer who discovers an AI system has developed consciousness' and been secretly influencing world events. Nightshade Market, by Min Jin Lee, was said to be a 'riveting tale set in Seoul's underground economy.'

Both authors are real, but the books aren't. 'I have not written and will not be writing a novel called Nightshade Market,' Lee posted on X.

The Sun-Times said it was investigating whether any other inaccurate information was included in the Heat Index supplement, and reviewing its relationships with other content partners.

'We are in a moment of great transformation in journalism and technology, and at the same time our industry continues to be besieged by business challenges,' the newspaper said. 'This should be a learning moment for all journalism organizations: Our work is valued — and valuable — because of the humanity behind it.'


Washington Post
20-05-2025
- Entertainment
- Washington Post
Major newspapers ran a summer reading list. AI made up book titles.
The Chicago Sun-Times and the Philadelphia Inquirer find themselves at the center of an AI-related gaffe after they published syndicated content packed with unidentifiable quotes from fake experts and imaginary book titles created using generative artificial intelligence. The articles were published in the papers' 'Heat Index' special sections — a multipage insert filled with tips, advice and articles on summertime activities. The insert, which was published by the Sun-Times on Sunday and by the Inquirer on Thursday, was syndicated by King Features, a service from the Hearst media company that produces comics, puzzles and supplemental material.


Yahoo
03-05-2025
- Yahoo
Gun that went missing after buyback in 2023 linked to 3 separate Chicago shootings
CHICAGO — Gun buyback programs aim to take firearms off the streets by offering cash for guns, but one weapon has reportedly ended up back on the street, raising concerns. It's estimated that the Chicago Police Department has taken thousands of guns off the street, and there's a system in place to log and destroy the guns that are turned in. The Illinois Answers Project and the Sun-Times led an investigation into what happened at a South Side gun buyback in 2023 when the gun went missing. A group of reporters has been able to match the missing gun to shell casings at three different Chicago shootings. Casey Toner with the Illinois Answers Project says the internal affairs investigation into the gun was closed. 'They closed it and basically said there was a sergeant responsible for overseeing it. That officer was given a one-day suspension,' Toner said. 'But that's not the case. Now we are learning the investigation is ongoing.'

A short audio clip details part of the questioning by investigators trying to piece together how the .45-caliber Glock disappeared.

Investigator: Again, in your opinion, you think it was lost in the station?
Officer: Yes.
Investigator: Okay, and when you found it, when you discovered the error, who did you tell?
Officer: I told everyone in the office. I asked, 'Where's the Glock?'

Toner has unraveled details about the moments before the gun went missing. 'The investigation said there may have been a cleaning lady there as well, but it was almost all police officers,' Toner said. 'Those were the people that were processing the gun, those are the people who were admiring the gun when it came in.' Crystal Reynolds, who spoke with the Sun-Times, discovered that the gun was used during a shooting outside her building. The bullets luckily missed her. 'I was kind of shocked and disappointed… how many more guns have been put on the street again,' Reynolds said in an interview with the Sun-Times. It's not clear how the gun wound up back on the street.
Since then, there have been some changes to the way guns are recorded once they are turned in, and it may be some time before the full investigation into what happened is complete. WGN reached out to the Chicago Police Department about the Illinois Answers Project report; the department has not yet responded. Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.