How to Spot AI Hype and Avoid The AI Con, According to Two Experts

CNET, May 13, 2025

"Artificial intelligence, if we're being frank, is a con: a bill of goods you are being sold to line someone's pockets."
That is the heart of the argument that linguist Emily Bender and sociologist Alex Hanna make in their new book The AI Con. It's a useful guide for anyone whose life has intersected with technologies sold as artificial intelligence and anyone who's questioned their real usefulness, which is most of us. Bender is a professor at the University of Washington who was named one of Time magazine's most influential people in artificial intelligence, and Hanna is the director of research at the nonprofit Distributed AI Research Institute and a former member of the ethical AI team at Google.
The explosion of ChatGPT in late 2022 kicked off a new hype cycle in AI. Hype, as the authors define it, is the "aggrandizement" of technology that you are convinced you need to buy or invest in "lest you miss out on entertainment or pleasure, monetary reward, return on investment, or market share." But it's not the first time, nor likely the last, that scholars, government leaders and regular people have been intrigued and worried by the idea of machine learning and AI.
Bender and Hanna trace the roots of machine learning back to the 1950s, to when mathematician John McCarthy coined the term artificial intelligence. It was in an era when the United States was looking to fund projects that would help the country gain any kind of edge on the Soviets militarily, ideologically and technologically. "It didn't spring whole cloth out of Zeus's head or anything. This has a longer history," Hanna said in an interview with CNET. "It's certainly not the first hype cycle with, quote, unquote, AI."
Today's hype cycle is propelled by the billions of dollars of venture capital investment into startups like OpenAI and the tech giants like Meta, Google and Microsoft pouring billions of dollars into AI research and development. The result is clear, with all the newest phones, laptops and software updates drenched in AI-washing. And there are no signs that AI research and development will slow down, thanks in part to a growing motivation to beat China in AI development. Not the first hype cycle indeed.
Of course, generative AI in 2025 is much more advanced than the Eliza psychotherapy chatbot that first enraptured scientists in the 1970s. Today's business leaders and workers are inundated with hype, with a heavy dose of FOMO and seemingly complex but often misused jargon. Listening to tech leaders and AI enthusiasts, it might seem like AI will take your job to save your company money. But the authors argue that neither is wholly likely, which is one reason why it's important to recognize and break through the hype.
So how do we recognize AI hype? These are a few telltale signs, according to Bender and Hanna, that we share below. The authors outline more questions to ask and strategies for AI hype busting in their book, which is out now in the US.
Watch out for language that humanizes AI
Anthropomorphizing, or the process of giving an inanimate object human-like characteristics or qualities, is a big part of building AI hype. An example of this kind of language can be found when AI companies say their chatbots can now "see" and "think."
These can be useful comparisons when trying to describe the ability of new object-identifying AI programs or deep-reasoning AI models, but they can also be misleading. AI chatbots aren't capable of seeing or thinking because they don't have brains. Even the idea of neural nets, Hanna noted in our interview and in the book, is based on a 1950s-era understanding of neurons, not on how neurons actually work, but it can fool us into believing there's a brain behind the machine.
That belief is something we're predisposed to because of how we as humans process language. We're conditioned to imagine that there is a mind behind the text we see, even when we know it's generated by AI, Bender said. "We interpret language by developing a model in our minds of who the speaker was," Bender added.
In these models, we use our knowledge of the person speaking to create meaning, not just using the meaning of the words they say. "So when we encounter synthetic text extruded from something like ChatGPT, we're going to do the same thing," Bender said. "And it is very hard to remind ourselves that the mind isn't there. It's just a construct that we have produced."
The authors argue that part of why AI companies try to convince us their products are human-like is that this lays the groundwork for convincing us that AI can replace humans, whether at work or as creators. It's compelling to believe that AI could be the silver-bullet fix to complicated problems in critical industries like health care and government services.
But more often than not, the authors argue, AI isn't being used to fix anything. AI is sold with the goal of efficiency, but AI services end up replacing qualified workers with black-box machines that need copious amounts of babysitting from underpaid contract or gig workers. As Hanna put it in our interview, "AI is not going to take your job, but it will make your job shittier."
Be dubious of the phrase 'super intelligence'
If a human can't do something, you should be wary of claims that an AI can do it. "Superhuman intelligence, or super intelligence, is a very dangerous turn of phrase, insofar as it thinks that some technology is going to make humans superfluous," Hanna said. In "certain domains, like pattern matching at scale, computers are quite good at that. But if there's an idea that there's going to be a superhuman poem, or a superhuman notion of research or doing science, that is clear hype." Bender added, "And we don't talk about airplanes as superhuman flyers or rulers as superhuman measurers, it seems to be only in this AI space that that comes up."
The idea of AI "super intelligence" comes up often when people talk about artificial general intelligence. Many CEOs struggle to define what exactly AGI is, but it's essentially AI's most advanced form, potentially capable of making decisions and handling complex tasks. There's still no evidence we're anywhere near a future enabled by AGI, but it's a popular buzzword.
Many of these future-looking statements from AI leaders borrow tropes from science fiction. Both boosters and doomers — how Bender and Hanna describe AI enthusiasts and those worried about the potential for harm — rely on sci-fi scenarios. The boosters imagine an AI-powered futuristic society. The doomers bemoan a future where AI robots take over the world and wipe out humanity.
The connecting thread, according to the authors, is an unshakable belief that AI is smarter than humans and inevitable. "One of the things that we see a lot in the discourse is this idea that the future is fixed, and it's just a question of how fast we get there," Bender said. "And then there's this claim that this particular technology is a step on that path, and it's all marketing. It is helpful to be able to see behind it."
Part of why AI is so popular is that an autonomous functional AI assistant would mean AI companies are fulfilling their promises of world-changing innovation to their investors. Planning for that future — whether it's a utopia or dystopia — keeps investors looking forward as the companies burn through billions of dollars and admit they'll miss their carbon emission goals. For better or worse, life is not science fiction. Whenever you see someone claiming their AI product is straight out of a movie, it's a good sign to approach with skepticism.
Ask what goes in and how outputs are evaluated
One of the easiest ways to see through AI marketing fluff is to check whether the company discloses how it operates. Many AI companies won't tell you what content is used to train their models. But they usually disclose what the company does with your data and sometimes brag about how their models stack up against competitors. That's where you should start looking, typically in their privacy policies.
One of the top complaints and concerns from creators is how AI models are trained. There are many lawsuits over alleged copyright infringement, and there are a lot of concerns over bias in AI chatbots and their capacity for harm. "If you wanted to create a system that is designed to move things forward rather than reproduce the oppressions of the past, you would have to start by curating your data," Bender said. Instead, AI companies are grabbing "everything that wasn't nailed down on the internet," Hanna said.
If you're hearing about an AI product for the first time, one thing in particular to look out for is any kind of statistic that highlights its effectiveness. Like many other researchers, Bender and Hanna have called out that a finding with no citation is a red flag. "Anytime someone is selling you something but not giving you access to how it was evaluated, you are on thin ice," Bender said.
It can be frustrating and disappointing when AI companies don't disclose certain information about how their AI products work and how they were developed. But recognizing those holes in their sales pitch can help deflate hype, even though it would be better to have the information. For more, check out our full ChatGPT glossary and how to turn off Apple Intelligence.