
Latest news with #AIorNot

Did 'The Simpsons' predict Coldplay jumbotron scandal?

Express Tribune

28-07-2025



According to viral social media posts, The Simpsons predicted a kiss cam moment at a Coldplay concert in July 2025. After a video went viral allegedly showing the Astronomer CEO caught cheating with the company's chief people officer, users on social media quickly claimed this was another event the sitcom had predicted long before it happened. As per DW, the rumour spread across social media platforms including TikTok, X, Instagram and Facebook, in various languages. Some posts garnered millions of views, and some even contained specific details, such as the season, episode and air date the supposed screenshot was allegedly taken from. DW Fact Check looked into the alleged prediction.

Claim: "Did The Simpsons really predict the Coldplay Concert incident in a 2003 episode? (The alleged affair or sighting of Astronomer CEO Andy Byron & Kristin Cabot)," says a post on X that includes a still of the kiss cam footage alongside an image that appears to show a similar scene from The Simpsons.

DW Fact Check: False

Most versions of the rumour claim the prediction occurred in Season 26, Episode 10, The Man Who Came to Be Dinner. The episode aired on January 4, 2015, and features a plot in which the Simpson family is abducted by aliens while visiting an amusement park. The episode is available on streaming platforms and YouTube. However, there is no such scene in this episode. Another episode, Season 17, Episode 22, Marge and Homer Turn a Couple Play, does include a kiss cam scene, but it takes place at a baseball game, and the characters kissing are Marge and Homer.

The alleged prediction screenshot circulating online is, therefore, most likely generated by artificial intelligence or digitally manipulated. DW Fact Check uploaded the image to several AI-detection platforms, including AIorNot, which labelled it as "likely AI-generated." Hive Moderation even gave an estimate of 99.9 per cent "likely to contain AI-generated or deepfake content."

But this isn't the first time The Simpsons has allegedly predicted future events. You may have seen some of these viral claims before:

Trump as US president

One of the most famous alleged predictions from The Simpsons was that Donald Trump would become the president of the United States. In Season 11, Episode 17, Lisa Simpson did actually reference Donald Trump's presidency in an episode that originally aired on March 19, 2000. In the episode, set in the future, Lisa is the current US president and implies that real estate mogul Trump had been her predecessor and caused a budget crisis. In 2015, the media cited the episode as a foreshadowing of Trump's future presidential run. However, the image often used to illustrate this "presidential prediction" is actually from a short episode called Trumptastic Voyage, from Season 25, which aired in July 2015, after Trump had already announced his candidacy. The image shows Trump and Homer on an escalator in front of a crowd. The cartoon scene is based on a real-life event from June of the same year.

2024 Baltimore bridge

Another viral post claimed that The Simpsons predicted the March 2024 collapse of the Baltimore bridge, showing Homer and Lisa watching the event unfold. At first glance, the image looks real, but the devil is in the details. There are small hints that this image is actually AI-generated. If you take a closer look, you'll see that Lisa's hair has 10 spikes, whereas the real character in the cartoon series only has eight. Homer's hair is also incorrect: the zigzags are noticeably narrower than in the show.

COVID-19 pandemic

Many people have also claimed online that the show predicted the COVID-19 pandemic. They refer to Season 4, Episode 21, titled Marge in Chains. In the episode, many Springfield residents order juicers from Osaka, Japan. One of the factory workers is sick and coughs into the boxes, spreading what becomes known as the "Osaka Flu." The only similarity between the "Osaka Flu" and COVID-19 is that both originated in East Asia. The fictional flu did not lead to a global lockdown, a pandemic or millions of deaths. By contrast, COVID-19 was far more serious, resulting in over 7 million deaths worldwide, as recorded by the World Health Organisation.

As AI images flood the internet, can these tools help you tell what's real?

CBC

14-03-2025



AI image detectors promise to help cut through the flood of online content to determine what's real and what's not — but can you rely on them to get it right? Many people now regularly turn to AI detectors to check whether viral images are AI-generated, and as people become increasingly skeptical of even real images, those tools are in greater demand.

CBC News tested five commonly recommended free AI image detectors to see whether they could accurately estimate if an image was real or AI-generated. These included: AI or Not, Illuminarty, Hive Moderation, Is It AI? and a tool hosted on the open-source machine learning platform Hugging Face. While many of these AI detector services have a free tier, some have pricing plans that can cost hundreds of dollars each year.

The CBC's visual investigations unit ran three photos through each detector twice. The images tested included a real photo of a CBC hallway lined with lockers, a similar-looking AI-generated image, and a lower-resolution, compressed version of the AI-generated image that was posted to the social media platform X.

WATCH | Do AI image detectors work? We tested 5 (video, 2:44)

In the first test, AI or Not and Hive accurately labelled all three images. Illuminarty and Is It AI? got all three wrong. The detector hosted on Hugging Face accurately labelled the AI-generated images but thought the real image was fake. In the second test, the results were the same, except this time Is It AI? labelled the real image correctly but still got the AI-generated images wrong. "It really does depend on which system you're looking at," said Ben Y. Zhao, a computer science professor at the University of Chicago.

Tests produced mixed results

CBC News reached out to all five AI detector companies for comment. The CEOs of AI or Not and Hive both said their AI detectors search for patterns invisible to the naked eye. "While some AI-generated images may be easier to identify by common errors — like extra fingers on someone's hand, unrealistic reflections and shadows, or the mismatched scale of objects — the sophistication of AI-generated imagery is moving beyond the threshold of human detection," said Hive CEO Kevin Guo in a statement. AI or Not CEO Anatoly Kvitnitsky says their tool looks for "pixel-level patterns of content."

The creator of the AI detector on Hugging Face, developer Colby Brown, says AI detection is still worth pursuing, even if it gets some images wrong. "User caution is needed," Brown said in a statement. "Individual images can fool such detectors even if they have reasonable accuracy on a larger sample (or feed) of images."

The team at Is It AI? said in a statement that the test CBC News performed highlights "the ongoing challenge that AI image detectors face" as the technology develops. They also said that their tool "covers a wide range of domains and typically requires a larger and more diverse dataset to assess accuracy comprehensively." As AI image generators continuously improve, so do detectors. Is It AI? said the company will soon release a new detection tool with "substantial improvements" in accuracy. Brown also said that he may develop a new and more advanced tool. Illuminarty didn't respond to CBC's requests for comment.

WATCH | Investigating the claim AI was used to boost crowd size at Carney event: How we debunked this Carney AI claim (video, 1:51)

Zhao says some AI detectors are better than others. "Some of them are trained on millions of images that allow them to do a better job with discerning the differences," he said. He noted that bad actors can even use AI image detectors to iterate and fine-tune fake images that would then be labelled as real. "I think the real danger is really to a lot of the folks who are not in a situation where they expect" to be targeted by AI-generated fakes, Zhao said, noting these are usually people who aren't as familiar with technology. "They're going to be easier targets for scammers and phishing scams and different kinds of things."

Zhao says that old tricks for detecting AI images are becoming less reliable. Famously, early iterations of AI image generators had trouble mimicking human hands, but he says that's not the case anymore. Still, AI image generators don't get everything right, and a trained eye can often pick out details that clearly indicate AI was used. The AI image CBC News used in the test can be identified as fake with the naked eye: the lockers in the hallway have locks that are warped and blurred, the overhead lights have no fixtures, a panel on the ceiling has a line running through it, and there appear to be far too many lockers for the amount of space shown in the photo.

Zhao says that when people are trying to tell the difference between a real photo and an AI-generated image, thinking through the details is important. "Does it make sense for the button to be placed in this way? Does it make sense for the hair to blend in with the turtleneck that way? Those kinds of smaller details are really still tricky for models to get right," Zhao said.

Methodology: During testing, CBC News sought to mimic the experience of a member of the general public. We chose five popular free online AI image detectors by tallying the number of recommendations from lists featured on the first five pages of Google search results and selecting the top five of those services. Three images were tested: a real photo taken by CBC reporters, an AI-generated image and a compressed version of the same AI image that was posted to X and downloaded again. The five detectors were then scored as correct or incorrect based on whether they accurately assessed whether the images were more likely created by a human or AI. The test was run twice. The AI photo was generated with Google's Gemini AI using the prompt: "Create an image of a hallway with blue lockers filling half of the hallway on the left, grey checkered carpet and light orange wall on the right, and white hallway on the left back. A red wall is at the end of the hallway."
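The scoring scheme in the methodology above (each detector marked correct or incorrect per image, across two identical runs) can be sketched in a few lines of Python. The verdicts below are illustrative placeholders for two of the five tools, loosely matching the results the article reports; they are not CBC's raw data:

```python
# Ground truth for the three test images described in the methodology.
TRUTH = {"real_photo": "real", "ai_image": "ai", "ai_image_compressed": "ai"}

# Hypothetical verdicts: each detector gets a list of runs, and each run
# maps image name -> the detector's verdict ("real" or "ai").
# "AI or Not" is shown labelling everything correctly in both runs;
# the Hugging Face tool mislabels the real photo, as the article describes.
runs = {
    "AI or Not": [
        {"real_photo": "real", "ai_image": "ai", "ai_image_compressed": "ai"},
        {"real_photo": "real", "ai_image": "ai", "ai_image_compressed": "ai"},
    ],
    "Hugging Face": [
        {"real_photo": "ai", "ai_image": "ai", "ai_image_compressed": "ai"},
        {"real_photo": "ai", "ai_image": "ai", "ai_image_compressed": "ai"},
    ],
}

def score(runs, truth):
    """Tally correct verdicts per detector across all runs."""
    results = {}
    for detector, trials in runs.items():
        correct = sum(
            1
            for trial in trials
            for image, verdict in trial.items()
            if verdict == truth[image]
        )
        total = sum(len(trial) for trial in trials)
        results[detector] = (correct, total)
    return results

for name, (correct, total) in score(runs, TRUTH).items():
    print(f"{name}: {correct}/{total} correct")
```

With these placeholder verdicts the tally comes out to 6/6 for AI or Not and 4/6 for the Hugging Face tool, mirroring the pass/fail pattern the article reports for those two services.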
