
Artificial intelligence – the panacea for all ills, or an existential threat to our world?
'Once men turned their thinking over to the machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.' – Frank Herbert, Dune, 1965
In the early 19th century, a group of disgruntled factory workers in industrial England began protesting against the introduction of mechanised looms and knitting frames into the factories.
Fearful of losing their jobs, they smashed machines and engaged in acts of sabotage. They were dealt with harshly through imprisonment and even execution. They became known as the Luddites.
At the time, it was not the technology they were most concerned about, but rather the loss of their livelihoods. Ironically, today, the word Luddite has become something of an accusation, a complaint about those who, because they are seen as not understanding a new technology, are deemed to be anti-technology. Even anti-progress.
The 2020s have seen rapid progress in the development of a 'new' technology – artificial intelligence (AI). But the history of AI can be traced back to the middle of the 20th century, and so is perhaps not very new at all.
At the forefront of this latest wave has been the release of Large Language Models (LLMs) – with ChatGPT the most prominent – which can produce, from a single prompt, an essay on the topic of your choice. LLMs are simply one type of AI and are not the same as artificial general intelligence (AGI).
Unlike current LLMs, which perform a single task, AGI would be able to reason, be creative and use knowledge across many domains – be more human-like, in essence. AGI is more of a goal, an end point in the development of AI.
LLMs have already been hugely disruptive in education, with university lecturers and school teachers scrambling to deal with ChatGPT-produced essays.
Views about the dangers of AI/AGI tend to coalesce around two poles: the doomers and the boomers. Crudely, and I am oversimplifying here, the 'doomers' worry that AI, if designed in a way that is misaligned with human values, would pose an existential threat to humanity. Boomers, on the other hand, believe AI will solve all our problems and usher in an age of abundance, where we will all be able to work less without seeing a drop in our quality of life.
The 'doomer' narrative originates with Oxford University philosopher Nick Bostrom, who introduced a thought experiment called the 'paperclip maximiser'. Bostrom imagines a worst-case scenario where we create an all-powerful AGI agent that is misaligned with our values.
In the scenario, we request the AGI agent to maximise the production of paperclips. Bostrom worries that the command could be taken literally, with the AGI agent consuming every last resource on Earth (including humans) in its quest to maximise the production of paperclips.
Another take on this thought experiment is to imagine that we ask an all-powerful AGI agent to solve the climate breakdown problem. The quickest and most rational way of doing this would, of course, be to simply rid planet Earth of eight billion human beings.
What do we have to fear from LLMs?
LLMs have scraped the internet for every last scrap of data, feeding off the stolen intellectual property of writers and artists. But what exactly do we have to fear from LLMs? I would suggest very little (unless, of course, you are a university lecturer in the humanities).
LLMs such as ChatGPT are (currently) little more than complex statistical programs that predict the next word in a sequence, based on the above-mentioned internet scraping. They are not thinking.
In fact, some people have argued that everything they do is a hallucination. It is just that the hallucination is more often than not correct and appropriate.
Francois Chollet, a prominent AI researcher, has described LLMs in their current form as a 'dead end' in the quest for AGI. Chollet is so confident of this that he has put up a $1-million prize for any AI system that can achieve even basic human skills on something he calls the Abstraction and Reasoning Corpus (ARC) test.
Essentially, the ARC is a test of what is called fluid intelligence (reasoning, solving novel problems, and adaptation). Young children do well on ARC tasks. Most adults complete all tasks. Pure LLMs achieve around 0%. Yes – 0%. The $1-million prize does not even require that AGI systems match the skills of humans. Just that they achieve 85%. The prize is yet to be claimed.
People are the problem
If LLMs are (currently) a dead end in the quest for AGI, what should we be worried about? As is always the case, what we need to be afraid of is people. The people in control of this technology. The billionaires, the tech bros, and the dystopian conspiracy theorists.
High on my list is Mark Zuckerberg. The man who invented Facebook to rate the attractiveness of college women, and whose company profited enormously from the echo chamber it created. In Myanmar, this resulted in the ethnic cleansing of the Rohingya people in 2017.
At the beginning of 2025, Zuckerberg showed the depth of his commitment to diversity and integrity in his slavering capitulation to Donald Trump. Jokes aside about whether Zuckerberg is actually a robot, what he seems to want, judging by recent pronouncements, is a world of atomised and alienated people who, out of quiet desperation, turn to his dystopian hell where robots – under his control – will be trained to become 'our friends'.
And my personal favourite – Elon Musk. Musk, the ketamine-fuelled racist apologist for the Great Replacement Theory. A man who has committed securities fraud, and accused an innocent man of being a paedophile because the man had the nerve and gall to (correctly) state that Musk's submarine could not negotiate an underwater cave in Thailand.
More recently, estimates are that Musk's destruction of USAid will lead to the deaths of about 1,650,000 people within a year because of cuts to HIV prevention and treatment, as well as 500,000 annual deaths due to cuts to vaccines.
I, for one, do not want this man anywhere near my children, my family, my community, my country.
OpenAI
Sam Altman, the CEO of the world's largest plagiarism machine, OpenAI, recently stated that he would like a large share of the world's electricity supply to power his LLM/AI models.
Karen Hao, in her recently published book Empire of AI, makes a strong case for OpenAI being a classic colonial power that closely resembles (for example) the British East India Company, founded in 1600 (and dissolved in 1874).
Altman recently moved squarely into Orwellian surveillance when OpenAI bought io, a product development company owned by Jony Ive (designer of the iPhone). While the first product is a closely guarded secret, it is said to be a wearable device that will include cameras and microphones for environmental detection. Every word you speak, every sound you hear, and every image you see will be turned into data. Data for OpenAI.
Why might Altman want this? Money, of course. But for Altman and Silicon Valley, money is secondary to data, to surveillance and the way they are able to parlay data into power and control (and then money). He will take our data, further train his ChatGPT models with it, and in turn use this to better surveil us all.
And for the pleasure of working for, and giving our data to, OpenAI? Far from being paid for the data you produce, you will have to buy the gadget, be monitored 24/7, and have your life commodified and sold.
As Shoshana Zuboff said in her magisterial book, The Age of Surveillance Capitalism: 'Forget the cliché that if it's free, "you are the product". You are not the product; you are the abandoned carcass. The "product" derives from the surplus that is ripped from your life.'
The problem was never the cotton loom. The Luddites knew this in the 19th century. It was always about livelihood loss and people (the industrialists).
Bostrom has it badly wrong when he imagines an all-powerful AGI entity that turns against its human inventors. But about the paperclips, he might be correct.
Zuckerberg, Musk and Altman are our living and breathing paperclip maximisers. With their political masters, they will not flinch at turning us all into paperclips and sacrificing us on the altar of their infinite greed and desire for ever-increasing surveillance and control. DM
