OpenAI CEO Sam Altman says he's 'more worried about the 62 year old than the 22 year old' when AI…


Time of India · 4 hours ago
OpenAI CEO Sam Altman revealed he's more concerned about older workers than younger ones as artificial intelligence reshapes the job market, citing age-related differences in adaptability to technological change. "I'm more worried about what it means, not for the 22-year-old, but for the 62-year-old that doesn't want to go retrain or reskill," Altman said on the "Huge If True" podcast. His comments came while discussing AI's potential to displace workers across industries.
The CEO acknowledged that "some classes of jobs will totally go away" and predicted "half of the entry-level white-collar workforce will be replaced by AI" within five years. However, he expressed confidence that younger workers would navigate these changes more successfully.
Young workers better positioned for AI job market shift
Altman called current college graduates the "luckiest kids in all of history," arguing that powerful AI tools like GPT-5 will enable unprecedented entrepreneurial opportunities. He believes individuals will soon build billion-dollar companies that previously required "teams of hundreds."
"This always happens, and young people are the best at adapting to this," Altman explained, referencing historical patterns of technological disruption and
workforce adaptation
.
The OpenAI chief anticipates entirely new career paths emerging, suggesting future graduates might pursue roles that seem unimaginable today, including space exploration missions.
Older workers face greater AI adaptation challenges
Altman's age-focused concern reflects broader workforce trends showing generational divides in technology adoption. His comments suggest that workers approaching retirement may find AI transitions more difficult than digital natives entering the job market.
Despite acknowledging the disruptive potential, Altman emphasized that society has proven "quite resilient" to technological shifts throughout history. Altman's tactical advice remains consistent across age groups: "Just using the tools really helps." He urges workers of all ages to integrate AI beyond basic searches, emphasizing that hands-on experience will be crucial for navigating the coming transformation.

Related Articles

OpenAI's latest step towards advanced artificial intelligence

Hindustan Times · a few seconds ago

ALMOST TWO decades after the birth of the iPhone, Steve Jobs remains the model for any tech founder seeking to wow the world with their latest product. The launch events he pioneered at Apple, with their mix of showmanship and glamour, seized the world's attention and gave prospective customers the feeling that the future had finally arrived. It was in these glittering footsteps that Sam Altman, the boss of OpenAI, attempted to follow on August 7th, when the artificial-intelligence (AI) firm launched GPT-5, its latest model.

The hour-long launch, with its tech specs and live demos, wore Apple's influence proudly. The firm, which is seeking a fresh round of funding at a valuation of $500bn, made much of AI as a consumer technology. Until now, users have had to contend with an alphabet soup of models. For tasks that prioritised speed, there was the dainty 4o; for elegant prose, 4.5; for heavy-duty coding work, the juggernaut that was o3-pro. All these are now incorporated into GPT-5, a so-called 'unified' model that can decide for itself how best to approach any question it is asked. As a consequence, some casual users could be exposed to frontier AI for the first time.

Once the curtain fell and the spotlight went out, though, experts who heard Mr Altman's presentation were asking the same question as those who used to tune in to Jobs: just how good is the technology? GPT-5 looks to be the best in the world across various domains, excelling in areas including software engineering and scientific reasoning. According to OpenAI it also comes closest yet to beating human experts on an internal benchmark measuring 'complex, economically valuable knowledge work'. But the model is world-beating only by a slim margin: it fares slightly better than OpenAI's o3, released in April, which was in itself just a modest improvement over last year's o1. In other words, GPT-5 is not the transformational leap that some were hoping for. But a few more years of steady progress like this could yield AI systems of transformative power.

The incremental improvement should not be a surprise. GPT-5 comes less than two months after OpenAI's last release, o3-pro, and the update represents about two months of progress in the fast-moving AI space. Moreover, according to METR, a research lab, GPT-5 is almost exactly where you might expect the frontier of AI capability to be in the summer of 2025. In 2019 GPT-2 could achieve 50% accuracy on the sorts of tasks that took software engineers two seconds to complete correctly. By 2020, GPT-3 could rival those engineers for tasks that took eight seconds; by 2023, GPT-4 could reliably tackle ones that took four minutes. The data, METR says, suggests a doubling every 200-odd days. More than 800 days later, GPT-5, right on trend, can handle tasks that would take a human a little over two hours.

What does this mean for the achievement of 'artificial general intelligence' (AGI)? Boosters have said that within a couple of years models could reach AGI, or the point at which they do so much of the labour currently performed by white-collar workers that they reshape the global economy. GPT-5 suggests the technology could still be on track towards such a goal. Within two years, METR's trend suggests a model will be able to complete an entire working day's worth of labour. Superintelligent models, as those with capabilities beyond AGI are known, may take only a few more years. As a consequence, GPT-5 has some safety experts worried.
Gaia Marcus, director of the Ada Lovelace Institute, a British think-tank which monitors AI progress, warned that the release of GPT-5 makes it 'even more urgent' to comprehensively regulate how models can be used. The Future of Life Institute, a safety group which once called for a six-month pause on all AI development, warns that GPT-5's software-development abilities show OpenAI is engaged in a reckless pursuit of 'recursive self-improvement': building AI systems that can improve themselves. The trends suggest that, if current progress continues, world-changing AI systems could emerge within a few years. GPT-5 does not dispel the idea.
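The doubling arithmetic METR describes can be sanity-checked with a short back-of-the-envelope projection. The sketch below is illustrative only: it assumes the figures quoted above (a roughly 200-day doubling time and a GPT-5 baseline of about two hours per task, pegged to the August 7th launch); the dates, constants, and function name are illustrative assumptions, not METR's methodology.

```python
# Back-of-the-envelope projection of the doubling trend described above.
# Assumptions (taken from the figures quoted in the article, not from METR's own data):
#   - the length of task an AI model can reliably handle doubles roughly every 200 days
#   - GPT-5, launched on August 7th 2025, handles tasks of about two hours
from datetime import date, timedelta

DOUBLING_DAYS = 200                # "a doubling every 200-odd days"
BASELINE_DATE = date(2025, 8, 7)   # GPT-5 launch date cited above
BASELINE_TASK_HOURS = 2.0          # "a little over two hours"

def projected_task_hours(on: date) -> float:
    """Length of human task (in hours) an AI could handle on a given date,
    assuming the exponential trend simply continues."""
    elapsed_days = (on - BASELINE_DATE).days
    return BASELINE_TASK_HOURS * 2 ** (elapsed_days / DOUBLING_DAYS)

for years_ahead in (1, 2, 3):
    when = BASELINE_DATE + timedelta(days=365 * years_ahead)
    print(f"{when}: ~{projected_task_hours(when):.0f} hours per task")
```

Run as written, the projection gives roughly seven hours per task one year out and about 25 hours two years out, consistent with the article's suggestion that a model could complete a full working day's worth of labour within two years.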

Why Trump's 'give us a cut and take license' chip move has experts worried

First Post · a few seconds ago

In a move that has stunned the tech industry, US President Donald Trump's administration has reached an unprecedented agreement with Nvidia and AMD, requiring the chipmakers to give 15% of profits from their China sales to the US government.

In a surprising step that is sure to cause dismay among American companies, a US official said on Sunday that Nvidia and AMD have agreed to give the US government 15% of the profits from sales of cutting-edge computer chips to China. Sales of H20 chips to China were suspended by US President Donald Trump's administration in April. However, Nvidia claimed last month that Washington had agreed to let the company resume sales and that it intended to begin delivery shortly.

The Financial Times said that the chipmakers consented to the agreement in order to receive export permits for their semiconductors, notably AMD's MI308 processors. According to the article, the Trump administration has not yet decided how to spend the funds. On Friday, another US official said that the Commerce Department has started to provide licenses for the sale of H20 AI processors to China. In Monday's pre-market trading, Nvidia and AMD's shares dropped 1.8% and 3.3%, respectively.

It is unprecedented for companies to agree to pay the US government a share of their sales in China, and it is Trump's most recent attempt to influence business decisions. Last week, he called incoming Intel CEO Lip-Bu Tan 'highly conflicted' because of his connections to Chinese companies and asked that he immediately quit. He has also pressed industry CEOs to invest in America to support local employment and manufacturing.

When asked if Nvidia had agreed to pay 15% of revenues to the United States, an Nvidia spokesperson said in a statement: 'We follow rules the US government sets for our participation in worldwide markets.' The spokesperson added: 'While we haven't shipped H20 to China for months, we hope export control rules will let America compete in China and worldwide.'

AMD did not respond to a request for comment on the news, which was first reported by the Financial Times earlier on Sunday. The US Department of Commerce did not immediately respond to a request for comment. China's foreign ministry, approached for comment on Monday, said that China had repeatedly expressed its position on the issue of US chip exports to China. The ministry in the past has accused the US of using technology and trade issues to 'maliciously contain and suppress China'.

US Commerce Secretary Howard Lutnick said last month the planned resumption of sales of the AI chips was part of US negotiations with China to get rare earths, and described the H20 as Nvidia's 'fourth-best chip' in an interview with CNBC. Lutnick said it was in US interests to have Chinese companies using American technology, even if the most advanced was prohibited from export, so they continued to use an American 'tech stack'.

The US official said the Trump administration did not feel the sale of H20 and equivalent chips was compromising US national security. The official did not know when or exactly how the agreement would be implemented, but said the administration would be following the law. Alasdair Phillips-Robins, who served as an adviser at the Commerce Department during former President Joe Biden's administration, criticized the move.
'If this reporting is accurate, it suggests the administration is trading away national security protections for revenue for the Treasury,' Phillips-Robins said. Nvidia generated $17 billion in revenue from China in the fiscal year ending January 26, representing 13% of total sales. AMD reported $6.2 billion in China revenue for 2024, accounting for 24% of total revenue.

Meanwhile, the industry watches anxiously as Intel's Lip-Bu Tan is due to meet Trump today amid mounting scrutiny of the tech sector's tightrope walk between compliance and autonomy.

Why India's IT layoffs expose a skills crisis—and how academia must reinvent itself for the AI era

The Hindu · a few seconds ago

Recent waves of job losses in India's IT sector, affecting fresh graduates and mid-level professionals alike, have put the spotlight on a systemic issue: the gap between the demands of rapidly changing AI technology and experience-based business models, and the current skill levels of professionals in the industry. While up-skilling spends by tech companies have spiked significantly as they rush to prepare a new pool of AI talent for the future, those without the requisite skills are being shown the door. This talent shortfall underscores the necessity for academia to evolve beyond traditional teaching and assessment frameworks and align tightly with workplace realities.

Sangeet Pal Choudhary, in his recent book 'Reshuffle', argues brilliantly that AI's best value lies in capturing entire workflows rather than just automating individual tasks. This presents a unique business-model advantage, enabling companies to charge for performance and outcomes rather than products. It is no wonder, then, that IT contracts are moving to an Experience Level Agreement (XLA), a framework that prioritizes the user's work-ready experience and the ability to drive outcomes over backend technical metrics. Too many graduates arrive in the workforce unprepared: not just on the latest programming language, but unable to collaborate, communicate, or solve open-ended problems. Universities still measure credit hours and exam scores, not presentation skills, GitHub portfolios, or shipped products.

First-principle skills, not legacy learning

Industry's shift from Service Level Agreements (SLAs) to XLAs is grounded in first-principles thinking: questioning what is truly essential. Elon Musk's SpaceX famously slashed launch costs by breaking rockets down to materials and manufacturing, disregarding legacy assumptions. Education must do the same. Instead of 'four years plus electives,' what matters is the cost of acquiring real skills and validating them under stress.

We are at a stage where pharmaceutical giants now ask for graduates already fluent in Julia and molecular modelling on day one, because XLAs demand that new hires actively contribute to R&D pipelines immediately, not after six months of onboarding. The traditional 'major in computer science, minor in bioinformatics' doesn't work unless the student knows the domain's tech stack and can reason through molecular behaviour with first-principle modelling. The same is true for other emerging technology areas such as Virtual Reality, where Rust and Unity hold the reins.

Real-world competitions offer the blueprint. In events like Micromouse, teens build robots that must navigate mazes, handle embedded code, and balance mechanical design, all judged on working solutions, not just theoretical diagrams. Here, failure is instructive, benchmarking is immediate, and systems thinking (problem decomposition, optimisation, interdisciplinary teamwork) becomes the backbone of learning.

The blueprint: Back-to-back XLAs with academia

Forward-thinking companies now demand 'back-to-back' XLAs from academic partners: if the hiring contract requires new employees to deploy machine learning models in production, the university must guarantee students graduate with those skills verified, through real project delivery, live portfolio reviews, and skills testing at ages 16, 17, and 18, not just at graduation. GitHub commits matter more than GPAs; hackathon wins count more than exam scores.
Universities that endure will become living talent accelerators, tied directly to evolving business needs via modular courses, competitive benchmarking, and industry rotations. Faculty will mix academics and practitioners. Assessment will track outcomes delivered in the workplace, echoed by XLA metrics: how fast a new hire becomes productive, how well teams collaborate, and how deftly they solve first-principle problems.

The alternative? Bootcamps, online academies, and corporate training centres will become the true pipeline for emerging fields. Academia will only remain relevant if it ditches legacy activities for outcome-focused, skill-validated, XLA-inspired learning, starting today.
