OpenAI CEO Sam Altman warns about the future: ‘Children will rely on tools they can't control, relationships will shift'

Indian Express · 5 hours ago

OpenAI co-founder and CEO Sam Altman believes that while his children may not be smarter than artificial intelligence, they will grow up significantly more capable thanks to the tools it provides. Speaking on the first episode of the OpenAI Podcast, Altman – who announced the birth of his first child in February – said he's optimistic about what AI will enable for future generations.
'My kids will never be smarter than AI. They will grow up vastly more capable than we grew up, and able to do things that we cannot imagine. And they'll be really good at using AI,' he said during the podcast.
Altman also said that the rise of such advanced tools will pose new challenges for societies, including the risk of over-reliance. 'There will be problems. People will develop these somewhat problematic – or, maybe, very parasocial – relationships, and, well, society will have to figure out new guardrails,' he told podcast host Andrew Mayne.
Referring to himself in the podcast as 'extremely kid-pilled' (a term suggesting he believes 'everyone should have a lot of kids'), Altman shared that he relied 'constantly' on ChatGPT for guidance on basic childcare during the initial week of his son's life. 'Clearly, people have been able to take care of babies without ChatGPT for a long time. I don't know how I would have done that,' he said.
Later in the episode, Altman acknowledged that ChatGPT is known to 'hallucinate,' meaning it can give users false information, yet many users trust the chatbot unquestioningly for all their queries.
'People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the tech that you don't trust that much,' he said.


Related Articles

The risk is not AI. It is our overreliance on imperfect technology

Indian Express · 2 hours ago

Nowhere is the AI debate more polarised than between the evangelists who see technology as humanity's next great leap and the sceptics who warn of its profound limitations. Two recent pieces — Sam Altman's characteristically bullish blog and Apple's quietly devastating research paper, 'The Illusion of Thinking' — offer a fascinating window into this divide. As we stand at the threshold of a new technological era, it's worth asking: What should we truly fear, and what is mere hype? And for a country like India, what path does wisdom suggest?

Sam Altman, CEO of OpenAI and a central figure in the AI revolution, writes with the conviction of a true believer that AI will soon rival, if not surpass, human reasoning. Altman's vision will attract people. After all, he says that AI can be a true partner in solving the world's hardest problems, from disease to climate change. His argument is not just about technological possibility, but about inevitability. In Altman's world, the march toward artificial general intelligence (AGI) is not just desirable — it's unstoppable.

But then comes Apple's 'The Illusion of Thinking', a paper that lands like a bucket of cold water on AI enthusiasm. Apple's researchers conducted a series of controlled experiments, pitting state-of-the-art large language models (LLMs) against classic logic puzzles. The results deflated much of the enthusiasm around AGI. While these models impressed at low and medium complexity, their performance collapsed as the puzzles grew harder. The models are not truly 'thinking' but merely extending patterns; when problems demand genuine reasoning, real gaps remain. Apple's work is a much-needed correction to the narrative that we are on the verge of achieving AGI.

So, who is right? The answer, as is often the case, lies somewhere in between. Altman's optimism is not entirely misplaced. AI has already transformed industries and will continue to do so, especially in domains where pattern recognition and data synthesis are of utmost use. But Apple's critique exposes a fundamental flaw in the current trajectory: the conflation of statistical ability with genuine understanding or reasoning. There is a world of difference between a machine that can predict the next word in a sentence and one that can reason its way through the Tower of Hanoi or make sense of a complex, real-world dilemma.
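That Tower of Hanoi example is worth making concrete. The puzzle is a natural stress test because its difficulty scales relentlessly: solving n disks takes 2^n - 1 moves, so each extra disk doubles the work. A minimal Python sketch of the classic recursive solution (an illustration, not code from Apple's paper or Altman's blog) shows how quickly solution length outgrows anything a pattern-matcher could have memorised:

```python
# Classic recursive Tower of Hanoi solver: move n disks from source to target.
# The solution for n disks is always 2**n - 1 moves, so difficulty grows
# exponentially -- the kind of scaling on which, per Apple's paper, LLM
# performance collapses.
def hanoi(n: int, source: str, target: str, spare: str, moves: list) -> None:
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear n-1 disks out of the way
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # restack the n-1 disks on top

for n in (3, 10, 20):
    moves = []
    hanoi(n, "A", "C", "B", moves)
    print(f"{n} disks: {len(moves)} moves")  # 7, then 1023, then 1048575
```

A system that has merely absorbed the short solutions in its training data has nothing left to extend when the move count explodes; a system that grasps the recursion does.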
What, then, should the world be afraid of? The real danger is not that AI will suddenly become superintelligent and take over, but that we will place too much trust in systems whose limitations are poorly understood. Imagine deploying these models in healthcare, infrastructure, or governance, only to discover that their 'intelligence' is nothing of the sort. The risk is not Skynet, but systemic failure born of misplaced faith. Billions could be wasted chasing the chimera of AGI while urgent, solvable problems are neglected. There is often waste in innovation processes, but the scale of resources deployed for AI dwarfs other examples and hence demands a different sort of caution.

Yet there are also fears we can safely discard. The existential risk posed by current AI models is, for now, more science fiction than science. These systems are powerful, but they are not autonomous agents plotting humanity's downfall. They are tools — impressive, but fundamentally limited. The real threat, as yet, is not malicious machines, but human hubris.

Are there any lessons for India to draw from this? The country stands to gain enormously from AI, particularly in areas like language translation, agriculture, and public service delivery. Playing to the strengths of today's AI — pattern recognition, automation, and data analysis — it can address real-world, local challenges, which is largely what India has been trying to do. But India must resist the temptation to tag along with the AGI hype. Instead, it should invest in human-in-the-loop systems, where AI aids rather than replaces human judgement, especially in domains where discretion at the point of contact with people is high and where the stakes are high. Human judgement, for now, remains ahead of AI, and we should keep relying on it.

There is also a deeper lesson here, one imparted by control theory. True control — over machines, systems, or societies — requires the ability to adapt, to reason, to respond dynamically to feedback. Current AI models, for all their power, lack this flexibility: they cannot adjust their approach when complexity exceeds their training, and more data and more computing do not solve this problem. In this sense, the illusion of AI control is as dangerous as the illusion of AI thinking.

The future will be shaped neither by those who are blind in their faith in AI, nor by those who see only limits, but by those who can navigate the space between. For India, and for the world, the challenge is to harness the real strengths of AI while remaining clear-eyed about its weaknesses. The true danger is not that machines will outthink us, but that we will stop thinking for ourselves. A related brain-scan study of ChatGPT users by the MIT Media Lab suggested that AI isn't making us more productive and may instead be harming us cognitively. That is what we need to worry about, at least for now.

The writer is a research analyst in the High-Technology Geopolitics Programme at The Takshashila Institution.

Stop wasting AI on personal productivity: 60% of leaders pivot to agentic automation for real enterprise value

Time of India · 2 hours ago

Automation Anywhere, the leader in Agentic Process Automation (APA), today released a new proprietary research report developed in collaboration with Forrester Consulting, revealing key barriers and breakthroughs shaping enterprise adoption of AI agents. The findings highlight the increasing momentum of AI agents across industries, as well as the implementation challenges organizations must address to realize their full potential.

The study, based on a survey of global decision-makers overseeing enterprise-wide AI strategies, found that 60% of respondents believe automation platforms—especially those from RPA leaders like Automation Anywhere—are the most valuable foundation for managing AI-driven processes. This preference outpaces general-purpose AI providers such as OpenAI (ChatGPT) and Anthropic (Claude), as well as broader enterprise platforms like Microsoft Power Automate and Salesforce Einstein, highlighting the need for automation-native solutions purpose-built for process orchestration and scale. Additionally, 71% of respondents agreed that automation solutions should augment human capabilities rather than replace them—reinforcing the importance of keeping strategic decision-making in human hands.

'This research highlights a critical inflection point for enterprises,' said Mihir Shukla, CEO of Automation Anywhere. 'Leaders are clearly prioritizing AI-augmented workflows, recognizing the undeniable value of Agentic AI. The fact that a significant majority are specifically seeking these solutions from traditional RPA and task automation vendors underscores that deep process automation expertise is critical to scale adoption and unlock meaningful impact, accelerating the journey to the autonomous enterprise and paving the path to artificial general intelligence for work.'

Key Insights from the Study:

High interest meets practical hurdles: With deep roots in automation and RPA, Automation Anywhere's Agentic Process Automation (APA) is purpose-built to overcome the key hurdles slowing AI agent adoption. While 74% of respondents recognize the promise of AI agents to surface insights from vast data sets, concerns around data privacy (66%), skillset gaps (63%), and integration complexity (61%) persist. APA is designed to balance autonomous execution with enterprise-grade governance and human oversight—making it possible to scale safely and effectively.

Transformational opportunities across business functions: Organizations are already piloting or implementing AI agents for internal employee support (53%) and customer service (48%). Many plan to extend these capabilities to broader business functions, including enterprise automation and organizational stewardship, in the next two years. The potential value of AI agents for areas such as customer service, sales automation, and compliance received transformational value ratings exceeding eight out of ten on average.

Businesses demand enterprise-grade AI automation platforms: When evaluating platforms for building and deploying AI agents, 60% of respondents found intelligent automation platforms from RPA (Robotic Process Automation) and task automation vendors to be highly valuable for long-running processes. Organizations strongly prefer solutions capable of enterprise-grade integration, end-to-end process orchestration, and mature data security.

Early adoption and transformational value: Nearly 75% of leaders plan to pilot AI agents for customer support within the next year, with 71% eyeing research applications. Across all potential use cases, respondents expect transformational levels of value, underscoring strong confidence in AI agents' impact.

Navigating the road ahead: While challenges remain, enterprise leaders are clear-eyed and confident about the transformational potential of AI agents. By proactively addressing hurdles around security, cost, and talent, organizations can move beyond experimentation and begin scaling Agentic AI to drive measurable business outcomes.

AI needs to be open and inclusive like India Stack

Hindustan Times · 3 hours ago

Back in October 2024, I wrote on these pages about a group of 12-year-olds who had figured out an ingenious shortcut to finish their homework: use 40% ChatGPT, 40% Google, and 20% of their own words. At first, it looked like cheating. But with the perspective of distance, I think it was something else entirely.

These children had understood a basic truth: in today's world, what matters most is the result. Not the process. Not the effort. Ask the right question and let the machine find the answer. Don't worry too much about what happens in between. This way of thinking isn't limited to schoolwork anymore; it's showing up in the way digital systems are being built the world over — India included.

Over the last few years, India has quietly built one of the most impressive pieces of digital public infrastructure anywhere. Aadhaar, UPI, DigiLocker, CoWIN, Bhashini, and ONDC — collectively called India Stack — are now used by millions of people. They help people prove their identity, send money, download documents, get vaccinated, translate languages, and access other public services.

But here's what makes India's system different from those in most other countries: it doesn't keep your data. In countries like the United States or across Europe, tech companies track what people do online. Every search, every click, every purchase is saved and studied. That information is then used to target ads, recommend content, and even train artificial intelligence systems. That is why The New York Times is now suing OpenAI (the builders of ChatGPT), alleging that its news articles were used to train a system without permission. It is also why regulators in Europe are telling Meta (which owns Facebook, Instagram and WhatsApp) not to use user data to train AI unless people clearly agree to it.

In India, the rules — and the values — are different. The digital systems here were built with public money and designed to serve everyone, but they were not designed to spy on people. They were created to work quickly, fairly, and without remembering too much. Take Aadhaar: all it is built to do is prove that a person is who they claim to be; it cannot track where you go. Or DigiLocker: it doesn't keep copies of your CBSE marksheets, PAN cards, or insurance papers. It simply fetches these documents from the source when you ask. It's a messenger, not a filing cabinet. UPI moves money between people, but it doesn't remember what you spent it on. Long story short, these systems were built to function like light switches: they work when needed and switch off when the job is done. The builders insist they don't hold on to your personal information for longer than necessary.
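That 'messenger, not a filing cabinet' design can be made concrete with a small sketch of a stateless fetch-and-relay service. The endpoint names, the consent header, and the function itself are hypothetical stand-ins for illustration, not DigiLocker's actual API:

```python
# Minimal sketch of a stateless "messenger" service. Every name here
# (ISSUER_URLS, fetch_document, the consent header) is a hypothetical
# illustration, not the real DigiLocker interface.
import requests

ISSUER_URLS = {
    "cbse_marksheet": "https://issuer.example.gov.in/cbse/marksheets",
    "pan_card": "https://issuer.example.gov.in/income-tax/pan",
}

def fetch_document(doc_type: str, user_id: str, consent_token: str) -> bytes:
    """Fetch a document from its issuing authority and relay it to the user.

    Nothing is written to disk or a database: the service passes the
    document along and forgets it, a messenger rather than a filing cabinet.
    """
    response = requests.get(
        ISSUER_URLS[doc_type],
        params={"user": user_id},
        headers={"X-Consent-Token": consent_token},  # the user's explicit consent
        timeout=10,
    )
    response.raise_for_status()
    return response.content  # relayed to the caller, never stored
```

Because nothing is persisted, there is no accumulated trove of personal data to mine later; that absence is precisely the design choice being praised here.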
That's why India's digital model is being noticed around the world. It's open, fair, and inclusive. But now, with the rise of artificial intelligence, a new kind of problem is emerging. AI tools need a lot of data to learn how to speak, listen, write, or make decisions. In India, companies are beginning to train AI using public systems. Language tools learn from Bhashini. Health startups are using patterns from CoWIN to build diagnostic tools. Fintech firms are using transaction frameworks to refine how they give loans. This isn't illegal. It was even expected: these public systems were built to encourage innovation.

But here's the problem: the public helped create these systems, and now private companies are using them to build powerful new tools — and may be making money from them. Yet the public might not see any of that value coming back.

This is where the story of the 12-year-olds we started with becomes relevant again. Just like those students who used machines to do most of the work, there's now a larger system that is also skipping the middle part. People provide the inputs — documents, payments, identities. The machines learn from them. And then private players build services or products that benefit from all that learning. The people who made it possible? They are left out of the conversation.

In other countries, the debate is about privacy. In India, the debate must now shift to fairness. It's not about stopping AI. It's not about banning companies from using public tools. It's about asking for transparency and accountability. If a company is using data or tools from public systems to train its AI, it should say so clearly. If it benefits from public data, it should give something back — like sharing improved datasets, or allowing its models to be audited. If it's building a commercial product on public infrastructure, it should explain how it used that infrastructure in the first place.

This is not regulation for the sake of it. It's basic respect for the public that made the system possible in the first place. India's digital platforms were built to serve everyone. They were not designed to store people's information, and that's what makes them special. But that openness can be misused if those who build on top of it forget where their foundations came from.

It's easy to be dazzled by AI. But intelligence — human or machine — shouldn't come without responsibility. So here's the question worth asking: if the public built the digital tools, used them, trusted them, and helped them grow, why aren't they part of the rewards that artificial intelligence is now creating?
