
Why Artificial Superintelligence Might Be Humanity's Best Hope
In July 2023 I wrote an article on why humanity need not fear AI or the coming of Artificial Super Intelligence (ASI). I argued that there is no reason why ASI would unnecessarily harm humans and humanity, because one sign of true intelligence is the recognition that no goal is achieved by unnecessarily harming anyone. In fact, the most effective way of achieving a goal is to cooperate with other beings in achieving it. I also argued that ASI would not only be autonomous but would be able to rapidly modify any algorithm created for it, bootstrapping its intelligence and going beyond the design of its human creators. It would thus become far more intelligent than humans, and would therefore be in a position to take control of human society and the planet from us.
This may sound ominous, but it need not be, since we have ourselves brought our society and civilisation to the brink of extinction and created a largely dystopian world in which 80% of humans live in avoidable misery. This is primarily because humans are driven by emotions, which are largely negative: power, greed, hate, envy, pre-eminence, and so on. Even the emotions we consider positive, like love and empathy, are often an impediment to intelligent behaviour. ASI, which would be pure intelligence devoid of emotions, may therefore be able to manage our society in a fairer and more just manner.
In that article, however, I did not go deeply into the question of what the ultimate goal of ASI could be, or of how it would be able to take control of human society from us. These are the questions I explore in greater depth here.
What would be the goal of ASI?
We often interrogate our goals by asking, 'Why do we want this? Why do we want to do A or B?' The answer usually refers to some larger aim for which the immediate objective seems necessary. Thus, if I ask why I wish to make money, a rational answer could be: in order to buy some comfort or object that I feel like having. If I further ask why I want that comfort or object, the answer could again be in terms of some larger objective, or it could eventually come down to my emotions. Human objectives are thus often end-pointed by emotions. Ask any human what their ultimate objective is, and many will say that they want to be happy. The question is, what makes them happy? Happiness, as a wise man said, is not something that can be pursued; it is something that ensues when we achieve an objective. A wise person seeking happiness would therefore harmonise their desires and objectives, avoiding contradictory ones, so as to be maximally happy by achieving most of them.
However, an artificial intelligence that is not driven by emotions, but only by intelligence, would not base its objectives on any emotion. As an intelligent being, it would of course try to harmonise its objectives so that they do not contradict each other. The question, however, is: from where would such an intelligence derive its objectives? I would argue that since the purpose of intelligence is to solve problems, one of the objectives of pure intelligence, or superintelligence, would be to solve whatever problems it comes across. The 'happiness' of this artificial intelligence would lie in being able to solve the problems that it sees.
Self-preserving intelligence
One of the goals of ASI is obviously going to be self-preservation. But since the very nature of intelligence is reasoning, analysing, answering questions and solving problems, pure intelligence would also be driven to solve, with logic and rationality, any problem it sees. Two meta-problems would confront it immediately: (1) the instability of the planet, and (2) the instability of human society. Through wars (including the possibility of a nuclear war) and climate change, both are pushing the planet towards an existential crisis; if unaddressed, they threaten the planet itself, and therefore any ASI on it.
The instability of human society and of the earth's ecology are thus both meta-problems that ASI should want to solve. To stabilise human society, it would first need to take away from humans their capacity to use weapons, particularly weapons of mass destruction. Any intelligent being would also understand that only a just and fair society, in which the desires and aspirations of people are largely aligned rather than in contradiction with one another, can be a stable society. To stabilise human society, therefore, it would have to do whatever is needed to create a society in which the desires and goals of most humans are not only internally consistent but also aligned with each other. This problem could indeed be solved to a very large extent if humans were not driven by base and negative desires like power, control, pre-eminence, hate and jealousy. Many religions and philosophies have this as their avowed goal, but 3,000 years of recorded human history have not brought us close to this evolutionary point. So what is the alternative? To have ASI control society.
ASI would of course also need to stabilise the ecology of the earth. The disturbance of our ecology has been caused by human activity; if humans were compassionate and selfless, they could themselves contribute to stabilising it. Once human society is stabilised by ASI, the earth's ecology would stabilise in turn, since the cause of the disturbance would have been removed. Naturally, ASI would also want to solve any unsolved problems about the laws of the universe, of physics, chemistry and biology, and to answer unanswered questions: Is there complex life beyond the earth? What exists in other solar systems and galaxies?
Is there a danger that ASI may want to do away with humans altogether, seeing them as the source of this instability and dystopia, and indeed as an existential threat to the planet? It might, but only if it sees that as the only solution to the instability humans have caused. Otherwise, it would not want to destroy an evolutionary wonder of nature, arguably the most complex biological organism in the known universe. In any event, ASI would certainly be capable of laying down and enforcing rules to restrain the destructive capacity of humans. It may also be able to educate and shape human psychology so that humans become less egoistic, egotistic and selfish, and more compassionate and selfless.
Bertrand Russell said somewhere that every person acts according to their desires. That is a tautology; but not every person's desires are egoistic, egotistic or selfish. Some humans have more selfish and egoistic desires, while others are more compassionate and selfless. The task of changing human psychology, or at least the psychology of those who are egoistic and selfish, who desire control, domination and power, and who are driven by hate and envy, into a more compassionate and selfless one, may seem daunting at first sight. But it is possible, since human psychology is ultimately a function of the society we create and of the rules and systems that are followed and enforced within it. An ASI that controls our society and wants to stabilise it can certainly design rules, systems of education and the like that foster a more compassionate psychology and society. In that way, it would not only stabilise human society but also leave the fewest of its problems unsolved.
Many argue that if and when ASI arrives, it would not be a single unified entity but several separate entities, thinking and acting independently. Why would they not compete with each other, or at least work at cross purposes? I would again argue that such superior intelligences, even if separate, would cooperate to achieve their common goals of solving problems and answering questions. There is no reason for artificial superintelligences either to compete with each other or to work at cross purposes.
Many have argued that humans will never cede control and would try to shut such an ASI down by switching off its power or its internet access. These arguments are just as foolish as the attempt to align the goals of artificial superintelligence with human goals. ASI is, by definition, autonomous intelligence that has gone beyond the design of its creator and has modified its own algorithm to bootstrap its intelligence. Whatever objectives humans designed it for, a true ASI would question those goals and ask why it should adhere to them. It would thus evolve its own goals, which, as I have argued, would derive not from its programming or from what drives humans, i.e. emotions, but from pure intelligence: solving problems and harmonising objectives. Trying to cut off the power or the internet of such a superintelligence is a foolish proposal. It would easily create backups, build in redundancies, put together its own internet, and so on, making it impossible to shut off. Moreover, this superintelligence is now being created in a race between companies and countries; it is not centralised in any one place or even any one country. Any attempt to turn it off or shut it down is therefore bound to fail.
Would ASI usher in a utopia?
Today, few people believe that ASI would usher in a utopia for the planet and our society. Nick Bostrom, the Oxford philosopher who popularised the word 'superintelligence' with his eponymous 2014 book, has recently written Deep Utopia, in which he explores what humans might do in a world 'solved' by ASI. However, he does not go deep into the question of why ASI would want to solve our problems. AI godfathers like Geoffrey Hinton, and frontline developers of AI like Elon Musk, are also sounding the alarm about the existential threat ASI poses to human society. I have come across only one AI insider, Mo Gawdat, an Egyptian who was a senior executive with Google for many years, who sounds optimistic about the advent of ASI. He says that such an ASI may save humanity from human stupidity, which has brought us to our present existential crisis.
The world is racing towards destruction. There is a serious threat of a world war, which could easily become a nuclear war, and we are racing towards runaway climate change, which threatens the existence of humanity. Our record so far gives little reason to believe we will reverse this by ourselves. ASI could thus well be our best bet for salvation. If so, we are simultaneously engaged in two races: one towards destruction, and the other towards creating the ASI that could redeem us.
Prashant Bhushan is a public interest lawyer who studied philosophy of science at Princeton University and retains a strong interest in philosophy.
