
The grab-bag called artificial intelligence
Every day, it seems, there is a new story in the media about artificial intelligence. AI is revolutionary. It is useless. It is groundbreaking. It is destructive. It will save lives. It may rot your brain.
Given the way we have let AI be discussed in the media, somehow all of these things are true at once. That is because tech journalists, the people we rely on to be the intermediary between the industry and the public, have failed us deeply. Instead of speaking truth to power, tech journalists have (with a few exceptions) ceded full control over the language used to cover the industry to its leaders. Whether they have done this because they fear that being too critical will cost them coveted access to those leaders, or because they simply lack the capacity to properly criticize the industry they cover, they have largely failed to hold it accountable.
This is why basically anything a computer does is now called AI. Got a new algorithm? That's AI. A new chat function for your user interface? That's AI too. A program that can decode proteins for developing new pharmaceuticals? Yes, that is also AI. We have allowed the tech industry to brand everything, from the utterly banal and harmful to the beneficial and brilliant, as AI, even when the technologies have nothing to do with one another and would have been called something more specific just a few years ago.
This is a concerted effort by the tech industry to convince people it has developed a singular new technology that has invaded every facet of our society and is superior to most human capabilities. It lets the industry build hype with grandiose claims, such as being on the brink of producing an artificial general intelligence, even though it has produced nothing that can be even remotely considered as such.
The truth is, the tech industry is proceeding much as it always has. These are disparate technologies with varying degrees of efficacy and viability, and they should be discussed as such. We should not let the tech industry get away with smuggling in the bad with the good just because it has a fancy new marketing term.
The reason they are getting away with it is that they spearheaded the entire effort with an admittedly sophisticated chatbot grafted onto a search engine, giving it the conversational ability and enough referential capacity to pass the Turing Test.
One good thing about all this, I suppose, is that these developments have shown us just how useless the Turing Test is: the idea that if an artificial intelligence can convince a human it is conscious, then we must ask how it is distinguishable from actual consciousness. But as we have seen, tricking humans is fantastically easy and not a good gauge of anything.
Even though ChatGPT led the way as the voice of this Mechanical Turk, it is perhaps the worst of the things we have allowed to be branded as AI, and it is emblematic of what the tech industry thinks it can get away with. Besides being regularly wrong, environmentally disastrous and a poor substitute for the human labour it seeks to supplant, it has also been shown to prey on the vulnerable and feed the delusions of mentally unstable users.
Beyond all that, it's largely useless. There isn't really any use case for this technology where the benefits outweigh the downsides. A recent MIT study showed that even using it as an assistant on writing projects, its most basic function, ultimately leads to increasingly poor performance and even reduced brain function in users.
And it doesn't even have the excuse of producing good work. "It's the worst it will ever be," apologists love to insist. But even that isn't true. As ChatGPT starts to cannibalize its own slop, essentially using material it created itself as its training data, we get what is referred to as "model collapse," rendering every new version less functional than the last. It is a glaring flaw that the industry's few critics have been warning about since the advent of the tech.
Therein lies the heart of the problem. The leaders of the tech industry are so high on their own messiah complex that they believe such criticisms are meaningless. Everything will work out for them simply because they are the special genius boys whom our supposed meritocracy has rewarded for their unparalleled brilliance. The problems with the tech will eventually be worked out, they are sure, because everything always works out for them.
The truth of it is that venture capital is so overleveraged in the "AI" industry that investors refuse to take a loss by admitting a piece of tech has been a wasted investment. So they keep pumping capital into firms like OpenAI, even though it has never turned a profit and shows little prospect of ever doing so, and they continue to incorporate useless pieces of tech nobody asked for into every product and service they can, making things worse while charging more for them, in a process tech critic Ed Zitron calls the Rot Economy and writer Cory Doctorow has dubbed enshittification.
It's time we start talking about tech in these terms.
Alex Passey is a Winnipeg writer.
