As AI images flood the internet, can these tools help you tell what's real?

CBC News · March 14, 2025
AI image detectors promise to cut through the flood of online content and help determine what's real and what's not — but can you rely on them to get it right?
Many people now regularly turn to AI detectors to determine whether viral images are AI-generated. And as people grow increasingly skeptical of even real images, those tools are in greater demand.
CBC News tested five commonly recommended free AI image detectors to see whether they could accurately determine if an image was real or AI-generated. These included: AI or Not, Illuminarty, Hive Moderation, Is it AI? and a tool hosted on the open-source machine learning platform Hugging Face. While many of these AI detection services have a free tier, some offer paid plans that cost hundreds of dollars a year.
The CBC's visual investigations unit ran three photos through each detector twice. The images tested included a real photo of a CBC hallway lined with lockers, a similar-looking AI-generated image, and a lower-resolution, compressed version of the AI-generated image that had been posted to the social media platform X.
WATCH | Do AI image detectors work? We tested 5:
AI image detectors are growing in popularity as a way to determine whether an image or video shared online is real or not. CBC News' Visual Investigations team tested some of the most popular free tools online to see how effective they are — and whether you should rely on them.
In the first test, AI or Not and Hive accurately labelled all three images. Illuminarty and Is it AI? got all three wrong. The detector hosted on Hugging Face accurately labelled the AI-generated images, but thought the real image was fake.
In the second test, the results were the same, except this time Is it AI? labelled the real image correctly, but still got the AI-generated images wrong.
"It really does depend on which system you're looking at," said Ben Y. Zhao, a computer science professor at the University of Chicago.
Tests produced mixed results
CBC News reached out to all five AI detector companies for comment.
The CEOs of AI or Not and Hive both said their AI detectors are searching for patterns invisible to the naked eye.
"While some AI-generated images may be easier to identify by common errors — like extra fingers on someone's hand, unrealistic reflections and shadows, or the mismatched scale of objects — the sophistication of AI-generated imagery is moving beyond the threshold of human detection," said Hive CEO Kevin Guo in a statement.
AI or Not CEO Anatoly Kvitnitsky says their tool looks for "pixel-level patterns of content."
The creator of the AI detector on Hugging Face, developer Colby Brown, says AI detection is still worth pursuing, even if it gets some images wrong.
"User caution is needed," Brown said in a statement. "Individual images can fool such detectors even if they have reasonable accuracy on a larger sample (or feed) of images."
The team at Is It AI? said in a statement that the test CBC News performed highlights "the ongoing challenge that AI image detectors face" as the technology develops.
They also said that their tool "covers a wide range of domains and typically requires a larger and more diverse dataset to assess accuracy comprehensively."
As AI image generators continue to improve, so do detectors. Is It AI? said the company will soon release a new detection tool with "substantial improvements" in accuracy. Brown also said that he may develop a new and more advanced tool.
Illuminarty didn't respond to CBC's requests for comment.
WATCH | Investigating the claim AI was used to boost crowd size at Carney event: How we debunked this Carney AI claim
Zhao says some AI detectors are better than others.
"Some of them are trained on millions of images that allow them to do a better job with discerning the differences," he said.
He noted that bad actors can even use AI image detectors to iterate and fine-tune fake images that would then be labelled as real.
"I think the real danger is really to a lot of the folks who are not in a situation where they expect" to be targetted by AI-generated fakes, Zhao said, noting these are usually people who aren't as familiar with technology. "They're going to be easier targets for scammers and phishing scams and different kinds of things."
Zhao says that old tricks for detecting AI images are becoming less reliable. Famously, early iterations of AI image generators had trouble mimicking human hands, but he says that's not the case anymore.
Still, AI image generators don't get everything right, and a trained eye can often pick out details that clearly indicate AI was used.
The AI image CBC News used in the test can be identified as fake with the naked eye. The lockers in the hallway have locks that are warped and blurred, for example. The overhead lights have no fixtures, a panel on the ceiling has a line running through it, and there appear to be far too many lockers for the amount of space shown in the photo.
Zhao says that when people are trying to tell the difference between a real photo and an AI-generated image, thinking through the details is important.
"Does it make sense for the button to be placed in this way? Does it make sense for the hair to blend in with the turtle neck that way? Those kinds of smaller details are really still tricky for models to get right," Zhao said.
Methodology:
During testing, CBC News sought to mimic the experience of a member of the general public. We chose five popular free online AI image detectors by tallying the number of recommendations from lists featured on the first five pages of Google search results and selecting the top five services.
Three images were tested: a real photo taken by CBC reporters, an AI-generated image, and a compressed version of the same AI image that had been posted to X and downloaded again. The five detectors were scored as correct or incorrect based on whether they accurately identified each image as human-made or AI-generated. The test was run twice.
The AI photo was generated with Google's Gemini AI using the prompt: "Create an image of a hallway with blue lockers filling half of the hallway on the left, grey checkered carpet and light orange wall on the right, and white hallway on the left back. A red wall is at the end of the hallway."
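For readers who want to see the scoring laid out concretely, below is a minimal sketch of the tally in Python. The file names and the detect() function are hypothetical placeholders — the services were used through their web pages, and no common API is assumed — but the correct-or-incorrect scoring over three images and two runs mirrors the methodology described above.

# Minimal sketch of the scoring used in this test. The detect() call is a
# hypothetical stand-in; in practice each image was uploaded manually
# through each service's web interface.

# Ground truth for each test image: True means "AI-generated".
IMAGES = {
    "cbc_hallway_real.jpg": False,          # real photo taken by CBC reporters
    "hallway_gemini.png": True,             # AI image generated with Gemini
    "hallway_gemini_from_x.jpg": True,      # same AI image, compressed after posting to X
}

DETECTORS = ["AI or Not", "Illuminarty", "Hive Moderation", "Is it AI?", "Hugging Face tool"]


def detect(detector: str, image_path: str) -> bool:
    """Hypothetical placeholder for a detector's verdict: True = "AI-generated"."""
    raise NotImplementedError("Record the verdict from each service's web page here.")


def score(runs: int = 2) -> dict:
    """Tally correct verdicts per detector across all images and runs."""
    tally = {name: 0 for name in DETECTORS}
    for _ in range(runs):
        for name in DETECTORS:
            for path, is_ai in IMAGES.items():
                if detect(name, path) == is_ai:
                    tally[name] += 1
    return tally

A perfect score under this tally is six (three images, two runs); in CBC's test, only AI or Not and Hive labelled every image correctly in both runs.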