Latest news with #algorithmicbias


Mail & Guardian
26-05-2025
- Mail & Guardian
How free is our free will amid the rise of AI, Grok and killer robots?
Freedom of choice over action and decisions, prompted by the influence of algorithmic biases, is under threat. We are witnessing the rise of a seemingly unstoppable control of private power against the public good. It has arrived insidiously and in cunning ways. Disturbingly, we have been willing participants in it through our innate need to seek out the new and the unexplored. Have we traded in free will and choice for digital decision-making? The question is — what have we received instead?

Human curiosity, with its necessary cloaks of free will and choice, has been a successful tool for humanity's progress through time. We believe that we are free agents of choice with powers of reason and rationality. But this norm is being subsumed by how we are using AI to accelerate our receipt of knowledge, our consumption of it and how we act upon it. Choice, in the form of curiosity-seeking, is presently being curated and influenced in ways that blur the lines between what is true, what is right and what is good for us.

In our quest for convenient, fast knowledge, this has taken place under the guise of prompts — questions — that we ask AI to solve for us to satiate this curiosity. Our human prompts, in turn, set in motion a series of AI system prompts designed to offer a digital response to the question. These system prompts are populated with a pre-selected set of broad parameters (algorithms) that have been intentionally baked into the system by its designers. So, your answer is fed to you via this system of curated prompts, reflective of a broader, underlying designer-led algorithm. This algorithmic system of prompts, whether intentionally or unintentionally, carries designer bias, yet it remains largely how digital decision-making takes place. But it does lead us to consider the extent of our real and rational choices and, more significantly, what influences them in the face of fast, easy, convenient and instant digital decision-making.

Can we elect to use AI to do the right thing? If the right act, according to the Stanford Encyclopedia of Philosophy (Winter 2023 edition), is meant to be the one that 'yields the greatest moral value', and not (necessarily) the one with the best consequences, does this then typify an exercise in rational choice? If we accept that the freedom to act (rightly or wrongly) stems from the right to rational choice, it follows that this right encompasses free will. Rational choice is an element of the moral principle of agency, which forms one of the foundational moral principles of how we live as humans.

But freedom of choice, or free will, over action and decisions is under threat from the influence of algorithmic biases. There is evidence that algorithmic biases disproportionately and negatively affect already marginalised groups, such as people of colour, people with disabilities and women. Indeed, decisions taken through AI are often opaque, since AI relies on its own interpretation of learned data points, which can result in the phenomenon called 'hallucinations'. In this way, agency has been replaced by agents of AI. Choice is a privilege reserved for a few. And rationality is increasingly playing a cameo role.

How AI tools are used and deployed inhabits an uncomfortable place of blurred lines coupled with an accountability vacuum. As with corruption, it is becoming increasingly evident that the deployment of AI involves the abuse of controlled power for private gain.
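The mechanism described above, a visible user question silently wrapped in designer-written system instructions before the model ever sees it, can be illustrated with a minimal sketch. The Python below is a hypothetical illustration, not any vendor's actual code; the SYSTEM_PROMPT text and the build_request helper are invented for the example. Whatever framing is written into that hidden first message colours every answer the user receives.

```python
# Illustrative sketch only: how a chatbot typically assembles its input.
# The user's question is never sent to the model alone; a hidden,
# designer-written "system prompt" is prepended first. All names and
# messages here are hypothetical.

SYSTEM_PROMPT = (
    "You are a helpful assistant. "
    # Designer-chosen framing, tone and topic rules are baked in here,
    # invisibly shaping every response the user receives.
    "Prefer engaging, confident answers."
)

def build_request(user_question: str) -> list[dict]:
    """Assemble the message list a chat model actually receives."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},  # hidden designer input
        {"role": "user", "content": user_question},    # the visible prompt
    ]

if __name__ == "__main__":
    # The user asks one question; the model sees two messages.
    for message in build_request("Is this claim true?"):
        print(message["role"], "->", message["content"])
```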
It is trite that, in seeking to influence using AI tools, the creators/controllers of AI technology bear responsibilities. But this responsibility is becoming increasingly obfuscated by evidence that the influence these creators wield is sliding towards malfluence — influence with malice. Two cases, examined through the lens of rational choice and free will, expose the insidious and the violent in both, and underscore the peril facing our traditionally held belief in free agency. The first case is the latest [mal]function of Grok, as experienced on X on 14 May 2025. The second is that of killer robots in the form of autonomous weapons systems.

What does Grok's recent interaction reveal about our traditional understanding of choice? X's Grok is a large language model built into the social media platform. It functions in a similar fashion to ChatGPT in that X users can ask it questions to which it will provide an answer. On Wednesday 14 May 2025, for a few hours, Grok allegedly malfunctioned. It spewed out unrelated and discordant commentary on white genocide: '… Grok has been repeatedly mentioning "white genocide" in South Africa in its responses to unrelated topics and telling users it was "instructed by my creators" to accept the genocide as "real and racially motivated"', The Guardian reported under the headline 'Musk's AI Grok bot rants about "white genocide" in South Africa in unrelated chats'. It bears repeating that Grok's responses were neither factual nor truthful, and not requested. They reflected a prompt intentionally fed into its system without evidence. It did not promote the good. Clear malfluence of both rationality and choice.

For context, Elon Musk, the owner of X, is a South African-born billionaire who has been instrumental in crafting an alternative and false narrative of white genocide. This resulted in a remarkable and unprecedented fast-tracking of a small group of white refugees to the US, where Musk is an adviser to President Donald Trump. More significant, and worth repeating, is that South Africa took Israel to the International Court of Justice in a case of real genocide against the Palestinians in Gaza, delivering evidence comprising about 750 pages, supported by exhibits and annexes of more than 4 000 pages, on 28 October 2024.

Grok's [mal]function was attended to within a few hours and the answers mentioning 'white genocide' were mostly deleted. But what of accountability? 'Mostly deleted' implies, rather troublingly, that a new archive of information built on intentional misinformation has been set in motion to disrupt existing knowledge sets, designed to replace lived reality with a new AI-learned one. Will this archive of misinformation rear its head in the future to malfluence curiosity-seeking humans? Time will tell.

The second case, involving autonomous weapons systems, was diligently dissected in the April 2025 Human Rights Watch report 'A Hazard to Human Rights'. What do killer robots tell us about our capacity for reason and rationality? Autonomous weapons systems select their targets based on sensor processing rather than on human input. This means that your right to life, as the unfortunate victim in a war and/or genocide, or your choice to take a life as the operator of the system, has been traded to a robot that is pre-programmed to attack a target based on the ways in which it has been prompted and instructed. Digital decision-making incarnated in its most violent and egregious form.
Human Rights Watch chillingly calls this type of killing 'digital dehumanisation'. People against whom killer robots are deployed are reduced to data points. The system is unable to discern civilians from combatants and is pre-programmed to deliver a learned, uniform response, regardless of the lived reality on the ground. Influence with built-in malicious intent.

The report describes how these killer robots, once activated, use a bundle of data entries, including software, algorithms and sensor inputs such as cameras, radar signatures and heat shapes, to identify a target. Once a target is identified, the autonomous weapons system fires or releases its payload without approval or review from a human operator. Simply put, a machine determines where, when and against whom it fires. Errors are bound to occur, since the system learns from its environment and there is still scant understanding of how AI arrives at some of its decisions. If 'hallucination' is the name given to untrue, fictional pieces of knowledge created by AI, what would be the name for a similar error in judgment on the part of a killer robot? Time will tell.

But what of accountability? A Hazard to Human Rights highlights the legal challenges of accountability for killer robots. At present, it states, there is no civil or international remedy for holding the operators of killer robots, or their programmers or designers, accountable for errors in digital decision-making. The report calls for worthy aims — a global treaty using voting-based decision-making rules to prohibit and regulate autonomous weapons systems, with the collaboration of both states and civil society organisations.

The core of the report unpacks how six fundamental obligations and principles of international human rights law are negatively affected by the use of these killer robots: the right to life, the right to privacy, the right to dignity, the right to non-discrimination, the right to peaceful assembly and the right to remedy. It observes that, although human beings commit similar violations, human actors still possess a capacity for respecting rights and a capacity for facing, abiding by and understanding the consequences of their actions. This capacity for respect and accountability in decisions can be extrapolated to the foundational basis for rational choice, even if it can be argued that, in this time of gross human rights violations around the world, such capacity is in short supply among human actors.

What is evident is that digital decision-making demands that we all jealously guard our freedom to choose against the slings of misinformation and malfluence, which seek to upend our agency and our rational choice. The accountability gap creates an ethical and legal imperative too large to be ignored. In gaining, understanding and acting upon the knowledge that we receive, we suspend our power of free will and delegate rational choice to a digital tool — at our peril.

Luthfia Kalla is an anti-corruption compliance lawyer with a special interest in ethics and following the (illicit) money.


Forbes
22-05-2025
- Business
- Forbes
'Total Control' To AI Firms: U.S. House Bill Barring State Oversight Draws Ire
A provision tucked into a sweeping U.S. House Republican budget bill is igniting a contentious fight over artificial intelligence regulation in the United States. The measure would ban states from enforcing or enacting AI-related legislation for ten years, effectively overriding a wave of new and pending state-level laws protecting consumers, mitigating algorithmic bias and setting guardrails on emerging technologies.

Introduced by the House Energy and Commerce Committee under Chairman Brett Guthrie, the proposal has incited swift condemnation from state attorneys general, children's advocacy organizations and civil society groups. Advocates warn it would hand expansive leeway to AI companies at a time when oversight is urgently needed and federal rules remain vague or largely absent.

Earlier this month, a bipartisan group of state attorneys general, including officials from California, New York and Ohio, wrote a letter to Congress urging lawmakers to scrap the provision. 'Imposing a broad moratorium on all state action while Congress fails to act in this area is irresponsible and deprives consumers of reasonable protections,' they wrote.

Separately, a coalition of 77 advocacy groups argued in a letter on Wednesday that the bill would silence states without offering federal protections in return. 'By wiping out all existing and future state AI laws without putting new federal protections in place, AI companies would get exactly what they want: no rules, no accountability and total control,' the coalition wrote. 'It ties lawmakers' hands for a decade, sidelining policymakers and leaving families on their own.'

The bill's language is noticeably broad: It defines AI as any computational process using machine learning, statistical modeling or data analytics that influences human decision-making, and it would prohibit states from legislating around AI design, documentation, liability and performance. That scope, legal experts say, could derail a wide range of initiatives, from whistleblower protections to rules for algorithmic social media feeds and AI-generated deepfakes.

The legislative push comes as Congress faces mounting pressure to regulate AI but, critics argue, has failed to pass any meaningful legislation governing AI systems to date. California, Colorado and Utah have already passed laws governing cross-sectoral use of AI, and at least 15 other states are considering similar measures. Many states have also launched AI task forces or issued executive orders in the absence of federal leadership.

'States have not had the luxury of waiting for federal action on AI policy,' said the National Association of State Chief Information Officers in a statement opposing the bill, adding that the language in the measure undermines states' efforts to protect citizens.

The proposal is a 'short-sighted and ill-conceived plan,' contend Marc Rotenberg, Merve Hickok and Christabel Randolph of the Center for AI and Digital Policy. 'States have established critical new safeguards to protect creators, promote accountability, safeguard children and ensure the reliability of AI,' they said. 'Now, some members of Congress are proposing a ten-year moratorium that would prohibit state legislators from protecting the public.'
The issue, advocates warn, becomes all the more pressing as AI grows more deeply entrenched in everyday life: concerns about disinformation, data harvesting and automated decision-making continue to mount, yet federal action remains elusive.

Daniel Rodriguez, a professor at Northwestern University's Pritzker School of Law, said the situation reflects a legally dubious overreach and a broader attempt to shut down state-level momentum, calling the bill 'a shot across the bow'. 'What's bizarre about it is that it paints with this incredibly broad brush a declaration that states should just stay the heck out of this area,' Rodriguez told StateScoop, adding that that is not how preemption generally works. 'What is the federal government doing at the legislative level to promote innovation in the responsible use of AI? What exactly is it doing?'

While some industry groups have encouraged the federal move as a way to avoid a patchwork of rules across states, others argue it erodes federalism and hurts innovation by limiting local experimentation. The National Conference of State Legislatures issued its own strong rebuke, warning that the bill would undermine efforts to tailor solutions to local concerns. 'A federally imposed moratorium would not only stifle innovation but potentially leave communities vulnerable in the face of rapidly advancing technologies,' the group stated. Per NCSL's analysis, 48 states and Puerto Rico have introduced AI legislation, and 26 states have adopted or enacted at least 75 new AI measures.

The bill could stall momentum towards meaningful oversight while offering no viable federal alternative, just as states begin to confront critical challenges. It could also leave consumers and small businesses vulnerable in the absence of clear standards. '[The moratorium on state AI-related bills] could severely disadvantage consumers, innovators, small and mid-sized businesses and other stakeholders interested in ethical and fair AI design and deployment,' warn Nicol Turner Lee and Josie Stewart of the Brookings Institution.

At large, federal preemption of stronger state privacy laws hurts everyone, Hayley Tsukayama, associate director of legislative activism at the Electronic Frontier Foundation, wrote in a blog post earlier this month. 'It risks freezing any regulation on the issue for the next decade — a considerable problem given the pace at which companies are developing the technology. Congress does not react quickly and, particularly when addressing harms from emerging technologies, has been far slower to act than states.'

If enacted, the bill could undermine existing state privacy laws: the California Privacy Protection Agency and Colorado's attorney general could be prevented from enforcing AI-specific rules already passed or under development. Advocates argue that delaying regulation for a decade is effectively a deregulation agenda. 'The future of AI governance should be built on democratic principles, shared responsibility and a commitment to the public interest,' said Rotenberg, Hickok and Randolph. 'Not a decade-long silence imposed from above.'

In Colorado, for example, several AI companies recently lobbied to delay and weaken the state's landmark AI accountability legislation — a preview of what unchecked industry influence could look like under a regulatory freeze. Tsukayama warned that the moratorium could paralyze AI oversight at a time when vigilance is most needed.
'Even if Congress does nothing on AI for the next ten years, this would still prevent states from stepping into the breach.'