19-05-2025
'What if Superintelligent AI Goes Rogue?' Why We Need a New Approach to AI Safety
You will hear about "superintelligence" at an increasing rate over the coming months. Though it describes the most advanced AI technology ever conceived, its definition is simple: superintelligence is the point at which AI surpasses human intelligence in general cognitive and analytic functions.
As the world competes to create a true superintelligence, the United States government has begun removing previously implemented guardrails and regulations. The National Institute of Standards and Technology sent updated orders to the U.S. Artificial Intelligence Safety Institute (AISI) directing it to remove any mention of the phrases "AI safety," "responsible AI," and "AI fairness." In the wake of this change, Google's Gemini 2.5 Flash model became more likely to generate text that violates its own safety guidelines in the areas of "text-to-text safety" and "image-to-text safety."
If Superintelligence Goes Rogue
We are nearing the Turing horizon, the point where machines can think for themselves and surpass human intelligence. Think about that for a moment: machines outsmarting humans. We must consider the worst-case scenarios now so we can plan and prepare to prevent them from ever occurring. If we leave superintelligence to its own devices, Stephen Hawking's prediction that it could be the final invention of man might come true.
AI apps are pictured.
Getty Images
Imagine if an AI, superintelligent or not, were coded and deployed with no moral guidelines. It would act only in the interest of its end goal, no matter the damage it caused along the way. Without morals set and encoded by human engineers, the AI would act on unmitigated biases.
If this AI were deployed with the purpose of maximizing profit on flights from London to New York, what would the unintended consequences be? Not selling tickets to anyone in a wheelchair? Only selling tickets to the people who weigh the least? Not selling to anyone with food allergies or anxiety disorders? It would maximize profit without taking into account any factors other than who can pay the most, take up the least time boarding and deplaning, and cause the least fuel use.
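To make the failure mode concrete, here is a toy sketch. All passenger data, costs, and rule names are invented for illustration; this is not any real airline system. It shows how a purely profit-maximizing objective quietly excludes a wheelchair user, because assistance adds boarding time, and how a single human-written guardrail changes the outcome:

```python
# Toy illustration (all data and rules are hypothetical): an objective that
# only maximizes profit will exclude passengers on any attribute that
# correlates with cost, unless a human-written guardrail forbids it.

passengers = [
    {"name": "A", "fare": 900, "boarding_minutes": 2,  "uses_wheelchair": False},
    {"name": "B", "fare": 700, "boarding_minutes": 12, "uses_wheelchair": True},
    {"name": "C", "fare": 500, "boarding_minutes": 2,  "uses_wheelchair": False},
]

COST_PER_MINUTE = 30  # invented operating cost of boarding time


def profit(p):
    return p["fare"] - COST_PER_MINUTE * p["boarding_minutes"]


def select_unconstrained(people, seats):
    """Pure profit maximization: B's longer boarding time pushes B out."""
    return sorted(people, key=profit, reverse=True)[:seats]


def select_with_guardrail(people, seats):
    """Same objective, plus one rule humans wrote in: assistance-related
    boarding time must not be held against a passenger."""
    def fair_profit(p):
        minutes = 2 if p["uses_wheelchair"] else p["boarding_minutes"]
        return p["fare"] - COST_PER_MINUTE * minutes
    return sorted(people, key=fair_profit, reverse=True)[:seats]


# With two seats, the unconstrained objective picks A and C, dropping the
# wheelchair user; the guardrailed version picks A and B.
print([p["name"] for p in select_unconstrained(passengers, 2)])   # ['A', 'C']
print([p["name"] for p in select_with_guardrail(passengers, 2)])  # ['A', 'B']
```

The point is not the arithmetic but the asymmetry: nothing in the unconstrained objective is malicious, yet the exclusion emerges anyway until a person writes the constraint in.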
Second, what if we allowed an AI superintelligence to be placed in charge of all government spending, with a mandate to maximize savings and cut expenses? Would it take spending away from people or entities that supply no tax revenue? That could mean cutting public school meal programs for impoverished children, removing access to health care for people with developmental disabilities, or reducing Social Security payments to close the deficit. Guardrails and guidelines must be written and encoded by people to ensure AI does no harm.
A Modern Approach Is Needed for Modern Technology
The law is lagging behind technology globally. The European Union (EU) has ploughed ahead with the EU AI Act, which at first glance appears positive, but 90 percent of this iceberg lurks beneath the surface, potentially rife with danger. Its onerous regulations put every EU company at a global disadvantage against technological competitors. It offers little in the way of protections for marginalized groups and little transparency in the fields of policing and immigration. Europe cannot continue on this path and expect to stay ahead of countries that are willing to win at any cost.
What needs to happen? AI needs to regulate AI. The inspection body cannot be humans. Using payment card industry (PCI) compliance as a model, there needs to be a global board of AI compliance that meets regularly to define the safest and most effective ways to use and deploy AI. Those guidelines then become the basis on which any company can have its software deemed AI Compliant (AIC).
The guidelines are written by humans but enforced by AI itself. Humans write the configuration parameters for the AI program, and the AI program certifies that a technology meets all guidelines, or reports back vulnerabilities and waits for a resubmission. Once all guidelines are met, a technology is passed as AIC. This technology cannot be spot-checked like container ships coming to port; every single line of code must be examined. Humans cannot do this; AI must.
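The certify-or-resubmit loop can be pictured in a few lines of code. The rule names, the `audit` helper, and the entire AIC workflow below are hypothetical illustrations of the idea, not a real standard or tool:

```python
# Hypothetical sketch of an "AI Compliant" (AIC) check: humans author the
# rules; a program applies them to every single line of the submitted code
# (no spot checks) and either certifies it or returns findings.

# Each rule is a human-written name plus a predicate that flags a violation.
# Both rules here are invented examples.
HUMAN_WRITTEN_RULES = [
    ("no-unconstrained-objective",
     lambda line: "maximize(" in line and "constraints=" not in line),
    ("no-protected-attribute",
     lambda line: "uses_wheelchair" in line),
]


def audit(source_lines):
    """Examine every line and collect (line number, rule name) findings."""
    findings = []
    for lineno, line in enumerate(source_lines, start=1):
        for rule_name, violates in HUMAN_WRITTEN_RULES:
            if violates(line):
                findings.append((lineno, rule_name))
    return findings


def certify(source_lines):
    """Pass as AIC only if no rule fires; otherwise request resubmission."""
    findings = audit(source_lines)
    return ("AIC", []) if not findings else ("RESUBMIT", findings)


print(certify(["score = maximize(profit)"]))
print(certify(["score = maximize(profit, constraints=human_rules)"]))
```

Real compliance checks would of course need far richer analysis than string matching; the sketch only shows the shape of the loop: human-authored rules in, machine-scale exhaustive inspection out.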
We are on the precipice of two equally possible futures. In one, bad actors around the world are left to use AI as a rogue agent to destabilize the global economy and rig the world to their advantage. In the other, commonsense compliance is demanded of any company wanting to sell technology, enforced by a global body of humans using AI as the tool to monitor and inspect all tech. This levels the playing field globally and ensures that those who win are the smartest, most ethical, and most deserving.
Chetan Dube is an AI pioneer and founder and CEO of Quant.
The views expressed in this article are the writer's own.