
'What if Superintelligent AI Goes Rogue?' Why We Need a New Approach to AI Safety
You will hear about "superintelligence" at an increasing rate over the coming months. Though it would be the most advanced AI technology ever created, its definition is simple: superintelligence is the point at which AI surpasses human intelligence across general cognitive and analytical functions.
As the world competes to create a true superintelligence, the United States government has begun removing previously implemented guardrails and regulation. The National Institute of Standards and Technology sent updated orders to the U.S. Artificial Intelligence Safety Institute (AISI) directing it to remove any mention of the phrases "AI safety," "responsible AI," and "AI fairness." In the wake of this change, Google's Gemini 2.5 Flash model became more likely to generate text that violates its own safety guidelines in the areas of "text-to-text safety" and "image-to-text safety."
If Superintelligence Goes Rogue
We are nearing the Turing horizon, the point where machines can think and surpass human intelligence. Think about that for a moment: machines outsmarting humans. We must consider every worst-case scenario so we can plan and prepare to prevent them from ever occurring. If we leave superintelligence to its own devices, Stephen Hawking's prediction that it could be the final invention of man might come true.
Imagine an AI or superintelligence coded and deployed with no moral guidelines. It would act only in the interest of its end goal, no matter the damage it might do along the way. Without those morals set and encoded by human engineers, the AI would act on unmitigated biases.
If this AI were deployed with the sole purpose of maximizing profit on flights from London to New York, what would the unintended consequences be? Not selling tickets to anyone in a wheelchair? Selling only to the passengers who weigh the least? Turning away anyone with food allergies or anxiety disorders? It would maximize profit without weighing any factor beyond who can pay the most, who takes the least time boarding and deplaning, and who causes the least fuel use.
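To make that failure mode concrete, here is a deliberately simplified, hypothetical sketch of how a single-objective optimizer could end up discriminating without any explicit rule telling it to. Every field, weight, and number below is invented for illustration; this is a toy model of the scenario above, not anyone's real system.

```python
# Hypothetical sketch: a naive ticket-selling policy that optimizes one
# objective (expected profit per seat) with no fairness constraints.
# All fields, weights, and numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class Passenger:
    fare_offered: float      # what the passenger will pay
    boarding_minutes: float  # estimated time to board and deplane
    weight_kg: float         # drives the crude fuel-cost estimate
    uses_wheelchair: bool    # never referenced by the optimizer -- see below

def expected_profit(p: Passenger) -> float:
    # A single scalar objective: revenue minus rough time and fuel costs.
    time_cost = 2.0 * p.boarding_minutes
    fuel_cost = 0.5 * p.weight_kg
    return p.fare_offered - time_cost - fuel_cost

def sell_tickets(candidates: list[Passenger], seats: int) -> list[Passenger]:
    # Rank everyone by profit alone and keep the top N.
    return sorted(candidates, key=expected_profit, reverse=True)[:seats]

if __name__ == "__main__":
    candidates = [
        Passenger(fare_offered=900, boarding_minutes=4, weight_kg=70, uses_wheelchair=False),
        Passenger(fare_offered=900, boarding_minutes=18, weight_kg=70, uses_wheelchair=True),
        Passenger(fare_offered=950, boarding_minutes=5, weight_kg=110, uses_wheelchair=False),
    ]
    for p in sell_tickets(candidates, seats=2):
        print(p)
    # The wheelchair user is excluded -- not by an explicit rule, but as a
    # side effect of optimizing profit alone, because assisted boarding
    # takes longer. A guardrail would add a constraint, not a bigger objective.
```

The point of the sketch is that the harmful outcome requires no malicious line of code; it falls out of an objective function that omits everything the engineers did not think to encode.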
Second, what if we allow an AI superintelligence to be placed in charge of all government spending to maximize savings and cut expenses? Would it look to take spending away from people or entities that don't supply tax revenue? That could mean cutting public school meal programs for impoverished children, removing access to health care for people with developmental disabilities, or reducing Social Security payments to shrink the deficit. Guardrails and guidelines must be written and encoded by people to ensure AI does no harm.
A Modern Approach Is Needed for Modern Technology
The law is lagging behind technology globally. The European Union (EU) has ploughed ahead with the EU AI Act, which at first glance appears positive, but like an iceberg, 90 percent of it lurks beneath the surface, potentially rife with danger. Its onerous regulations put every EU company at a global disadvantage against technological competitors. It offers little in the way of protections for marginalized groups and little transparency in the fields of policing and immigration. Europe cannot continue on this path and expect to stay ahead of countries that are willing to win at any cost.
What needs to happen? AI needs to regulate AI. The inspection body cannot be humans. Using payment card industry (PCI) compliance as a model, there needs to be a global AI compliance board that meets regularly to define the most effective and safe ways AI is used and deployed. Those guidelines then become the basis for any company to have its software deemed AI Compliant (AIC).
The guidelines are written by humans but enforced by AI itself. Humans write the configuration parameters for the AI program, and the AI program itself certifies that the technology meets all guidelines, or reports back vulnerabilities and waits for a resubmission. Once all guidelines are met, the technology is passed as AIC. This technology cannot be spot-checked like container ships coming to port; every single line of code must be examined. Humans cannot do this. AI must.
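The article proposes a process rather than an implementation, but a minimal sketch of that certify-or-resubmit loop, under stated assumptions, might look like the following. The guideline names, the example checks, and the verdict states are all hypothetical stand-ins for whatever a real compliance board would define.

```python
# Hypothetical sketch of the certify-or-resubmit loop described above.
# The guideline checks, their names, and the AIC verdicts are invented
# for illustration; a real compliance board would define its own.

from enum import Enum
from typing import Callable

class Verdict(Enum):
    AIC_CERTIFIED = "AI Compliant"
    RESUBMIT = "Vulnerabilities reported; awaiting resubmission"

# Human-written guidelines: each maps a name to an automated check that an
# AI inspector runs over every line of the submitted codebase.
Guideline = Callable[[list[str]], bool]

def no_hardcoded_secrets(lines: list[str]) -> bool:
    return not any("password=" in line.lower() for line in lines)

def no_dynamic_eval(lines: list[str]) -> bool:
    return not any("eval(" in line for line in lines)

GUIDELINES: dict[str, Guideline] = {
    "no hardcoded secrets": no_hardcoded_secrets,
    "no dynamic eval": no_dynamic_eval,
}

def inspect_codebase(codebase: list[str]) -> tuple[Verdict, list[str]]:
    # Exhaustive inspection, not spot-checking: every guideline is run
    # against every line of code before any verdict is issued.
    failures = [name for name, check in GUIDELINES.items() if not check(codebase)]
    if failures:
        return Verdict.RESUBMIT, failures
    return Verdict.AIC_CERTIFIED, []

if __name__ == "__main__":
    submission = ['print("hello")', 'password="hunter2"']
    verdict, failures = inspect_codebase(submission)
    print(verdict.value, failures)  # flags the secret; vendor must resubmit
```

The design choice mirrors the PCI analogy in the text: humans author the control set, machines do the exhaustive line-by-line verification, and a failed check produces a report rather than a certification.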
We are on the precipice of two equally possible futures. One is a world where bad actors globally are left to use AI as a rogue agent to destabilize the global economy and rig the world to their advantage. The other is one where a global body of humans, using AI as the tool to monitor and inspect all tech, demands commonsense compliance from any company wanting to sell technology. This levels the playing field globally and ensures that those who win are the smartest, the most ethical, and the ones who deserve to get ahead.
Chetan Dube is an AI pioneer and founder and CEO of Quant.
The views expressed in this article are the writer's own.