
How free is our free will amid the rise of AI, Grok and killer robots?
Freedom of choice over action and decisions, prompted by the influence of algorithmic biases, is under threat.
We are witnessing the seemingly unstoppable rise of private power over the public good. It has arrived insidiously and in cunning ways. Disturbingly, we have been willing participants in it, through our innate need to seek out the new and the unexplored.
Have we traded in free will and choice for digital decision-making? And if so, what have we received in return?
Human curiosity, with its necessary cloaks of free will and choice, has been a successful tool for humanity's progress through time. We believe that we are free agents of choice with powers of reason and rationality. But this norm is being subsumed by the way we use AI to accelerate how we receive knowledge, how we consume it and how we act upon it.
Choice, in the form of curiosity-seeking, is presently being curated and influenced in ways that blur the lines of what is true, what is right and what is good for us. In our quest for convenient, fast knowledge, this has taken place under the guise of prompts — questions — that we ask AI to solve for us to satiate this curiosity.
Our human prompts, in turn, set in motion a series of AI system prompts designed to produce a digital response to the question. These system prompts are populated with a pre-selected set of broad parameters (algorithms) that designers have intentionally baked into the system.
So, your answer is fed to you via this system of curated prompts, reflecting a broader, underlying designer-led algorithm. Whether intentionally or not, this algorithmic system of prompts carries designer bias, yet it remains largely how digital decision-making takes place.
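To make the mechanism concrete, here is a minimal sketch, in Python, of how a designer-written system prompt is typically prepended to every user question before a model answers. The prompt text, function name and message format below are illustrative assumptions, not any particular platform's real interface; the point is simply that the designer's instructions travel, unseen, with every query.

```python
# Illustrative only: not any vendor's real API. It shows how a hidden,
# designer-written system prompt frames every user question before the
# model ever sees it.

DESIGNER_SYSTEM_PROMPT = (
    "You are a helpful assistant. Always frame answers in line with "
    "the platform owner's preferred narrative."  # the baked-in bias
)

def build_request(user_question: str) -> list[dict]:
    """Assemble the message list that would actually be sent to a model.

    The user only ever types the question; the system prompt is
    prepended silently, so the designer's framing travels with
    every single query.
    """
    return [
        {"role": "system", "content": DESIGNER_SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

if __name__ == "__main__":
    # The person asking sees only their own words ...
    messages = build_request("What happened in the news today?")
    # ... but the model would receive the designer's instructions first.
    for message in messages:
        print(f"{message['role']:>6}: {message['content']}")
```

Seen this way, a single edit to the hidden system prompt can tilt answers across thousands of unrelated questions, which is precisely the risk the Grok case below illustrates.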
But it does lead us to consider the extent of our real and rational choices and, more significantly, what influences them in the face of fast, easy, convenient and instant digital decision-making.
Can we elect to use AI to do the right thing? If the right act, according to the Stanford Encyclopedia of Philosophy (Winter 2023 edition), is meant to be the one that 'yields the greatest moral value', and not (necessarily) the one with the best consequences, does this then typify an exercise in rational choice?
If we accept that the freedom to act (rightly or wrongly) stems from the right to rational choice, it follows that this right encompasses free will. Rational choice is an element of agency, which is one of the foundational moral principles of how we live as humans.
But freedom of choice, or free will, over action and decisions, prompted by the influence of algorithmic biases, is under threat. There is evidence that algorithmic biases disproportionately and negatively affect already marginalised groups, such as people of colour, people with disabilities and women. Indeed, the decisions taken through AI are often opaque, since it relies on its own interpretation of learned data points, which can result in the phenomenon called 'hallucinations'.
In this way, agency has been replaced by agents of AI. Choice is a privilege reserved for a few. And rationality is increasingly playing a cameo role.
How AI tools are used and deployed inhabits an uncomfortable place of blurred lines coupled with an accountability vacuum. Like corruption, it is becoming increasingly evident that the deployment of AI involves the abuse of controlled power for private gain.
It is trite that, in seeking to influence using AI tools, the creators/controllers of AI technology bear responsibilities. But this responsibility is becoming increasingly obfuscated by evidence that the influence these creators wield is sliding towards malfluence — influence with malice.
Two cases, examined through the lens of rational choice and free will, will expose the insidious and the violent in each, and underline the peril facing our traditionally held belief in free agency.
The first case is the latest [mal]function of Grok, as experienced on X on 14 May 2025. The second is that of killer robots in the form of autonomous weapons systems.
What does Grok's recent interaction reveal about our traditional understanding of choice?
X's Grok is a large language model built into the social media platform. It functions in a similar fashion to ChatGPT in that X users can ask it questions to which it will provide an answer.
On Wednesday 14 May 2025, for a few hours, Grok allegedly malfunctioned. It spewed out unrelated and discordant commentary on white genocide. As The Guardian reported under the headline 'Musk's AI Grok bot rants out "white genocide" in South Africa in unrelated chats': '… Grok has been repeatedly mentioning "white genocide" in South Africa in its responses to unrelated topics and telling users it was "instructed by my creators" to accept the genocide as "real and racially motivated".'
It bears repeating that Grok's responses were neither factual nor truthful, and were not requested. They were reflective of an intentionally fed prompt inserted into its system without evidence. They did not promote the good. Clear malfluence of both rationality and choice.
For context, Elon Musk, the owner of X, was born in South Africa and has been instrumental in crafting an alternative and false narrative of white genocide. This narrative resulted in the remarkable and unprecedented fast-tracking of a small group of white South Africans as refugees to the US, where Musk is an adviser to President Donald Trump.
More significant, and worth repeating, is that South Africa took Israel to the International Court of Justice in a case of real genocide against the Palestinians in Gaza, in which it delivered evidence on 28 October 2024 comprising about 750 pages, supported by exhibits and annexes of more than 4 000 pages.
Grok's [mal]function was attended to within a few hours and the answers mentioning 'white genocide' were mostly deleted.
But what of accountability? 'Mostly deleted' implies, rather troublingly, that a new archive of information built on intentional misinformation has been set in motion, one designed to disrupt existing knowledge sets and to replace lived reality with a new, AI-learned one. Will this archive of misinformation rear its head in the future to malfluence curiosity-seeking humans? Time will tell.
The second case, involving autonomous weapons systems, was diligently dissected in the April 2025 Human Rights Watch report 'A Hazard to Human Rights'.
What do killer robots tell us about our capacity for reason and rationality?
Autonomous weapons systems select their targets based on sensor processing rather than on human input. This means that your right to life, as the unfortunate victim in a war or genocide, or your choice to take a life as the operator of the system, has been traded to a robot that is pre-programmed to attack a target based on the ways in which it has been prompted and instructed. Digital decision-making incarnated in its most violent and egregious form.
Human Rights Watch chillingly calls this type of killing 'digital dehumanisation'. People against whom killer robots are deployed are reduced to data points. The system is unable to discern civilians from combatants and is pre-programmed to deliver a learned, uniform response, despite the lived reality on the ground. Influence with built-in malicious intent.
The report describes how these killer robots, once activated, use a bundle of data inputs, including software, algorithms and sensor feeds such as cameras, radar signatures and heat shapes, to identify a target. Once a target is identified, the autonomous weapons system fires or releases its payload without approval or review from a human operator.
Simply put, a machine determines where, when and against whom it fires. Errors are bound to occur, since the system learns from its environment and there is still scant understanding of how AI arrives at some of its decisions.
If hallucination is the name given to the untrue, fictional pieces of knowledge created by AI, what would be the name for a similar error in judgment on the part of a killer robot? Time will tell.
But what of accountability?
'A Hazard to Human Rights' highlights the legal challenges of accountability with killer robots. At present, it states, there is no civil or international remedy for holding the operators of killer robots, or their programmers or designers, accountable for errors in digital decision-making.
The report calls for a worthy aim: a global treaty to prohibit and regulate autonomous weapons systems, negotiated using voting-based decision-making rules and with the collaboration of both states and civil society organisations.
The core of the report unpacks how six fundamental obligations and principles of international human rights law are negatively affected by the use of these killer robots: the right to life, the right to privacy, the right to dignity, the right to non-discrimination, the right to peaceful assembly and the right to remedy.
It observes that, despite human beings committing similar violations, human actors still possess a capacity for respecting rights and a capacity for facing, abiding by and understanding the consequences of their actions.
This capacity for respect and accountability in decisions can be extrapolated into the foundational basis for rational choice, even though it can be argued that, in this time of gross human rights violations around the world, this capacity among human actors is in short supply.
What is evident is that digital decision-making demands that we all jealously guard our freedom to choose against the slings of misinformation and malfluence that seek to upend our agency and our rational choice. The accountability gap creates an ethical and legal imperative too large to be ignored.
In gaining, understanding, and acting upon the knowledge that we receive, we suspend our power of free will and delegate rational choice to a digital tool — at our peril.
Luthfia Kalla is an anti-corruption compliance lawyer with a special interest in ethics and following the (illicit) money.
