26-05-2025
How free is our free will amid the rise of AI, Grok and killer robots?
Freedom of choice over our actions and decisions is under threat from the influence of algorithmic biases.
We are witnessing the rise of seemingly unstoppable private power wielded against the public good. It has arrived insidiously and in cunning ways. Disturbingly, we have been willing participants in it, through our innate need to seek out the new and the unexplored.
Have we traded free will and choice for digital decision-making? And if so, what have we received in return?
Human curiosity, with its necessary cloaks of free will and choice, has been a successful tool for humanity's progress through time. We believe that we are free agents of choice with powers of reason and rationality. But this norm is being subsumed by how we are using AI to accelerate our receipt of knowledge, our consumption of it and how we act upon it.
Choice, in the form of curiosity-seeking, is being curated and influenced in ways that blur the lines between what is true, what is right and what is good for us. In our quest for convenient, fast knowledge, this has taken place under the guise of prompts: the questions we ask AI to answer for us to satiate that curiosity.
Our human prompts, in turn, set in motion a series of AI system prompts designed to produce a digital response to the question. These system prompts are populated with a pre-selected set of broad parameters (algorithms) that designers have intentionally baked into the system.
So your answer is fed to you via this system of curated prompts, reflecting a broader, underlying designer-led algorithm. Whether by intention or by oversight, this algorithmic system of prompts carries designer bias, and yet it remains largely how digital decision-making takes place.
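For readers who want to see the mechanics, the sketch below is a minimal, generic illustration of how a chat-style AI assembles its input. The names, prompts and structure are illustrative assumptions rather than any vendor's actual code; the point is simply that a designer-written system prompt, carrying constraints the user never sees or chooses, is read before the user's question.

```python
# Illustrative sketch only: a generic chat-style pipeline, not any vendor's actual code.
# It shows how a designer-set "system prompt" is placed ahead of the user's question,
# so every answer is shaped by instructions the user never sees or chooses.

# Hypothetical designer-set instructions, baked in before any user arrives.
SYSTEM_PROMPT = (
    "You are a helpful assistant. "
    "Prefer sources from the approved list. "  # an assumed, designer-chosen constraint
    "Avoid topics on the restricted list."     # another assumed constraint
)


def build_messages(user_question: str) -> list[dict]:
    """Assemble the message list a chat model would receive.

    The system prompt always comes first; the user's question is appended
    after it, so the model reads the designer's framing before the user's
    curiosity.
    """
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]


if __name__ == "__main__":
    # The user believes they asked a neutral question; the model also receives
    # the hidden framing above, which it is trained to follow.
    for message in build_messages("What happened in South Africa this week?"):
        print(f"{message['role']:>6}: {message['content']}")
```

In real systems the designer's framing is far more elaborate than two strings: it typically includes moderation rules, retrieval sources and fine-tuning choices, each a place where bias can enter.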
But it does lead us to consider the extent of our real and rational choices and, more significantly, what influences them in the face of fast, easy, convenient and instant digital decision-making.
Can we elect to use AI to do the right thing? If the right act, according to the Stanford Encyclopedia of Philosophy (Winter 2023 edition), is the one that 'yields the greatest moral value', and not (necessarily) the one with the best consequences, does this typify an exercise in rational choice?
If we accept that the freedom to act (rightly or wrongly) stems from the right to rational choice, it follows that this right encompasses free will. Rational choice is an element of the moral principle of agency, which forms one of the foundational moral principles of how we live as humans.
But freedom of choice, or free will, over our actions and decisions is under threat from the influence of algorithmic biases. There is evidence that algorithmic biases disproportionately and negatively affect already marginalised groups, such as people of colour, people with disabilities and women. Indeed, decisions taken through AI are often opaque, since the system relies on its own interpretation of learned data points, which can result in the phenomenon called 'hallucinations'.
In this way, agency has been replaced by agents of AI. Choice is a privilege reserved for a few. And rationality is increasingly playing a cameo role.
How AI tools are used and deployed inhabits an uncomfortable place of blurred lines coupled with an accountability vacuum. It is becoming increasingly evident that, like corruption, the deployment of AI involves the abuse of controlled power for private gain.
It is trite that, in seeking to influence through AI tools, the creators and controllers of AI technology bear responsibilities. But this responsibility is being increasingly obfuscated by evidence that the influence these creators wield is sliding towards malfluence: influence with malice.
Two cases, examined through the lens of rational choice and free will, expose the insidious and the violent in both, and underline the peril facing our traditionally held belief in free agency.
The first is the latest [mal]function of Grok, as experienced on 14 May 2025 on X. The second is that of killer robots in the form of autonomous weapons systems.
What does Grok's recent interaction reveal about our traditional understanding of choice?
X's Grok is a large language model built into the social media platform. It functions in a similar fashion to ChatGPT, in that X users can ask it questions to which it will provide an answer.
On Wednesday 14 May 2025, for a few hours, Grok allegedly malfunctioned. It spewed out unrelated and discordant commentary on white genocide. As The Guardian reported, under the headline 'Musk's AI Grok bot rants out "white genocide" in South Africa in unrelated chats': '… Grok has been repeatedly mentioning "white genocide" in South Africa in its responses to unrelated topics and telling users it was "instructed by my creators" to accept the genocide as "real and racially motivated".'
It bears repeating that Grok's responses were neither factual nor truthful, and were not requested. They reflected an intentionally fed prompt inserted into its system without evidence. They did not promote the good. Clear malfluence of both rationality and choice.
For context, Elon Musk, the owner of X, is South African-born and has been instrumental in crafting an alternative and false narrative of white genocide. This narrative resulted in a remarkable and unprecedented fast-tracking of a small group of white refugees to the US, where Musk is an adviser to President Donald Trump.
More significant, and worth repeating, is that South Africa has taken Israel to the International Court of Justice in a case of real genocide against the Palestinians in Gaza, delivering its evidence, about 750 pages supported by exhibits and annexes of more than 4 000 pages, on 28 October 2024.
Grok's [mal]function was attended to within a few hours and the answers mentioning 'white genocide' were mostly deleted.
But what of accountability? 'Mostly deleted' implies, rather troublingly, that a new archive of information built on intentional misinformation has been set in motion, one designed to disrupt existing knowledge sets and to curate lived reality into a new, AI-learned one. Will this archive of misinformation rear its head in the future to malfluence curiosity-seeking humans? Time will tell.
The second case, involving autonomous weapons systems, was diligently dissected in the April 2025 Human Rights Watch report 'A Hazard to Human Rights'.
What do killer robots tell us about our capacity for reason and rationality?
Autonomous weapons systems select their targets based on sensor processing rather than on human input. This means that your right to life, as the unfortunate victim of a war or genocide, or your choice to take a life as the operator of the system, has been traded to a robot that is pre-programmed to attack a target based on the ways in which it has been prompted and instructed. Digital decision-making incarnated in its most violent and egregious form.
Human Rights Watch chillingly calls this type of killing 'digital dehumanisation'. People against whom killer robots are deployed are reduced to data points. The system is unable to discern civilians from combatants and is pre-programmed to deliver a learned, uniform response, despite the lived reality on the ground. Influence with built-in malicious intent.
The report describes how these killer robots, once activated, use a combination of software, algorithms and sensor inputs, such as cameras, radar signatures and heat shapes, to identify a target. Once a target is identified, the weapons system fires or releases its payload without approval or review from a human operator.
Simply put, a machine determines where, when and against whom it fires. Errors are bound to occur, since the system learns from its environment and there is still scant understanding of how AI arrives at some of its decisions.
If hallucination is the name given to the untrue, fictional pieces of knowledge created by AI, what would be the name for a similar error in judgment on the part of a killer robot? Time will tell.
But what of accountability?
A Hazard to Human Rights highlights the legal challenges of accountability for killer robots. At present, it states, there is no civil or international remedy for holding the operators of killer robots, or their programmers and designers, accountable for errors in digital decision-making.
The report calls for a worthy aim: a global treaty, adopted through voting-based decision-making rules, to prohibit and regulate autonomous weapons systems, developed in collaboration with both states and civil society organisations.
The core of the report unpacks how six fundamental obligations and principles of international human rights law are negatively affected by the use of these killer robots: the right to life, the right to privacy, the right to dignity, the right to non-discrimination, the right to peaceful assembly and the right to remedy.
It argues that, even though human beings commit similar violations, human actors still possess a capacity for respecting rights, and a capacity for facing, understanding and abiding by the consequences of their actions.
This capacity for respect and accountability in decision-making can be seen as the foundational basis of rational choice, even if it can be argued that, in this time of gross human rights violations around the world, it is in short supply among human actors.
What is evident is that digital decision-making demands that we all jealously guard our freedom to choose against the slings of misinformation and malfluence, which seek to upend our agency and our rational choice. The accountability gap creates an ethical and legal imperative too large to be ignored.
In gaining, understanding and acting upon the knowledge that we receive, we suspend our power of free will and delegate rational choice to a digital tool, at our peril.
Luthfia Kalla is an anti-corruption compliance lawyer with a special interest in ethics and following the (illicit) money.