
OpenAI CEO Sam Altman says AI is like an intern today, but it will soon match experienced software engineers
OpenAI CEO Sam Altman says that AI today is akin to an intern, and predicted that AI agents could help humanity discover new knowledge as early as next year. The statement comes at a time of growing anxiety over job losses due to the increasing capabilities of AI models.
Speaking at the Snowflake Summit last week, Altman said, 'Today [AI] is like an intern that can work for a couple of hours, but at some point it'll be like an experienced software engineer that can work for a couple of days.'
'I would bet next year that in some limited cases, at least in some small ways, we start to see agents that can help us discover new knowledge, or can figure out solutions to business problems that are very non-trivial,' Altman added.
Meanwhile, speaking at the Milken Institute's Global Conference last month, the OpenAI CEO said, 'You're not going to lose your job to an AI, but you're going to lose your job to someone who uses AI.'
Notably, Anthropic CEO Dario Amodei recently claimed that AI could wipe out almost half of all entry-level white-collar jobs in the next five years as the technology improves over time.
Google CEO Sundar Pichai, however, seemed more optimistic while speaking on the Lex Fridman podcast last week, saying that the technology will serve as an 'accelerator' and will free up humans to do more creative tasks. The tech leader also stated that Google will be hiring software engineers in the short to medium term.
Disagreeing with the Anthropic CEO's statement, Pichai said, 'I respect that … I think it's important to voice those concerns and debate them.'
Notably, AI companies like Google and OpenAI launched software engineering agents earlier this year that are aimed at replacing software engineers.

Related Articles


Time of India
Sundar Pichai sees AI as a tool, not a threat: 8 ways tech professionals can maintain their relevance
In the ever-intensifying tug of war between artificial intelligence and human ingenuity, Google CEO Sundar Pichai didn't pick a side: he reinforced a partnership. At Bloomberg's Tech Conference in San Francisco, Pichai offered more than just a glimpse into Google's future. He dropped a subtle yet seismic remark: 'Whoever is running it [Google] will have an extraordinary AI companion.' It wasn't a throwaway line. It was a manifesto.

As tech companies double down on AI, automating everything from emails to engineering, fear of redundancy looms large. But Pichai's vision breaks from the doomsday narrative. He doesn't see AI as a replacement for people, but as a relentless amplifier of human capability. 'I view this as making engineers dramatically more productive,' he said. He isn't talking about a handover; he's talking about a hand-in-hand future. His own experiments, 'vibe coding' with AI-powered tools like Cursor and Replit, speak volumes. It's not just about leading from the front; it's about co-creating with code. And in doing so, Pichai set a powerful precedent: in a world where machines are accelerating, humans must learn how to steer.

For tech professionals standing at the edge of this transformation, the takeaway is clear: if you want to remain relevant in the age of AI, you must evolve into something machines can't replicate. Here's how to stay irreplaceable in a world where your next colleague might be code.

Build, don't just operate: Become the architect of automation
The future will not be kind to those who merely operate systems built by others. It will reward those who design them. Dive into AI model architecture, algorithm training, and prompt engineering. Whether you're a data analyst or backend developer, re-skill to be a creator of AI tools, not just a consumer.
Learn foundational AI principles (vector embeddings, transformers, and tokenization), not just applications like ChatGPT.

Anchor your career in the human-only zone
AI is fast, but it's not empathetic. It's logical, but not ethical. It can mimic reasoning, but not real-world judgment. Your ability to lead, negotiate, mentor, and listen: those are unautomatable traits. Build a career portfolio rooted in these human-exclusive capabilities. Develop skills in stakeholder communication, ethical reasoning, and emotional intelligence. These are your firewalls.

Speak the language of machines, fluently
If AI is going to be your co-pilot, learn how to talk to it. Prompt engineering is becoming the new coding. Whether you're training an AI or delegating a task to it, knowing how to communicate effectively with LLMs will be a make-or-break skill. Practise designing layered prompts with context, constraints, and roleplay to get consistent, reliable AI outputs.

Think like a product, not just an employee
Your resume is no longer a list of qualifications; it's a roadmap of your adaptability. AI systems will commodify many skill sets. The only way to stay valuable is to evolve like a product: iterate constantly, gather feedback, and reinvent as needed. Build a personal learning system. Set quarterly skill goals, track learning KPIs, and always be in beta.

Be the ethicist in the room
Tech is no longer neutral. AI decisions affect hiring, health care, policing, and global equity. Professionals who understand algorithmic bias, explainability, and fairness will be invaluable. The more powerful the AI, the more vital it becomes to have humans who can ask 'Should we?' instead of just 'Can we?' Study real-world AI failures (like COMPAS or Amazon's biased hiring tool) to prepare for conversations that matter.

Learn to lead across human + machine teams
Leadership now requires a hybrid mindset. You must be able to manage human talent while integrating machine output.
That means understanding workflows where AI handles execution and humans handle escalation. Use AI for sprint planning, bug triage, or documentation, but keep strategic decision-making firmly in human hands.

Stay loud in the public conversation
Silence won't protect your career. In an AI-driven world, your voice, whether through writing, speaking, or teaching, becomes a differentiator. Those who shape the narrative are harder to replace. Publish your learnings. Write AI guides. Lead meetups. The more visible your thinking, the more defensible your role.

Don't compete with AI, collaborate intelligently
Sundar Pichai didn't romanticize a machine-driven world. He recognized that the future belongs to those who can co-create with AI, not fear it. It's not man versus machine. It's man with machine, if you're prepared to grow. In the tug of war between technology and humanity, the rope isn't slipping from your hands. But you must grip harder, with skills, ethics, adaptability, and vision. Because relevance isn't a title, it's a habit.


Hindustan Times
'Today, AI is like an intern that can work for a couple of hours…,' says OpenAI CEO Sam Altman
The world is steadily transitioning towards embracing Artificial Intelligence (AI), slowly adopting tools and automation in day-to-day life. While the technology is simplifying business processes and tasks, people now fear that AI could replace jobs in the future. However, many industry experts also assure that AI will work alongside humans. Now, at the Snowflake Summit 2025, OpenAI CEO Sam Altman has shared greater insight into how people will start to embrace AI in real time. Altman reportedly said that AI could replace entry-level jobs or interns, but that Gen Z could actually benefit from the technology. This claim also aligns with a recent Oxford Economics study, which found that companies have been hiring fewer college graduates in recent times. Here is more of what the OpenAI CEO said about AI taking human jobs.

Sam Altman chaired a panel with Snowflake CEO Sridhar Ramaswamy at the Snowflake Summit 2025, during which he said that AI could perform tasks similar to those of junior-level employees, eventually replacing the hours of work done by interns. Altman stated, 'Today AI is like an intern that can work for a couple of hours, but at some point it'll be like an experienced software engineer that can work for a couple of days.' He further added that AI could resolve business problems and that 'we start to see agents that can help us discover new knowledge.'

While it seems like a very practical prediction, it is not the first time we have heard something like this. As businesses heavily invest in AI tools, the technology is not only saving them money on hiring but is also fast-tracking tasks that used to take hours of human effort. But how is Gen Z embracing AI?
At Sequoia Capital's AI Ascent event, Altman highlighted how different generations are using AI in the real world. He said many are using AI as a replacement for Google, while Gen Z is using AI as an advisor and younger generations are using the technology as an operating system. People in their twenties are heavily relying on AI tools like ChatGPT to perform the majority of their tasks. This showcases how AI will work alongside humans, but it could also create an imbalance in the job market, especially for people who are just starting out.


Economic Times
Meta set to throw billions at startup that leads AI data market
Three months after the Chinese artificial intelligence developer DeepSeek upended the tech world with a model that rivaled America's best, a 28-year-old AI executive named Alexandr Wang came to Capitol Hill to tell policymakers what they needed to do to maintain the US lead in AI. The US, Wang said at the April hearing, needs to establish a 'national AI data reserve,' supply enough power for data centers, and avoid an onerous patchwork of state-level rules. Lawmakers welcomed his feedback. 'It's good to see you again here in Washington,' Republican Representative Neal Dunn of Florida said. 'You're becoming a regular up here.'

Wang, the chief executive officer of Scale AI, may not be a household name in the same way OpenAI's Sam Altman has become. But he and his company have gained significant influence in tech and policy circles in recent years. Scale uses an army of contractors to label the data that tech firms such as Meta Platforms Inc. and OpenAI use to train and improve their AI models, and helps companies make custom AI applications. Increasingly, it's enlisting PhDs, nurses, and other experts with advanced degrees to help develop more sophisticated models, according to a person familiar with the matter.

Put simply: the three pillars of AI are chips, talent, and data. And Scale is a dominant player in the last. Now the startup's stature is set to grow even more. Meta is in talks to make a multibillion-dollar investment in Scale, Bloomberg News reported over the weekend. The financing may exceed $10 billion in value, making it one of the largest private company funding events of all time. The startup was valued at about $14 billion in 2024 as part of an earlier funding round.

In many ways, Scale's rise mirrors that of OpenAI. Both companies were founded roughly a decade ago and bet that the industry was then on the cusp of what Wang called an 'inflection point of AI.'
Their CEOs, who are friends and briefly lived together, are both adept networkers and have served as faces of the AI sector before Congress. And OpenAI, too, has been on the receiving end of an 11-figure investment from a large tech firm.

Scale's trajectory has shaped, and been shaped by, the AI boom that OpenAI unleashed. In its early years, Scale focused more on labeling images of cars, traffic lights, and street signs to help train the models used to build self-driving cars. But it has since helped to annotate and curate the massive amounts of text data needed to build the so-called large language models that power chatbots like ChatGPT. These models learn by drawing patterns from the data and their respective labels.

At times, that work has made Scale a lightning rod for criticisms about the unseen workforce in places such as Kenya and the Philippines that supports AI development. Scale has faced scrutiny for relying on thousands of contractors overseas who were paid relatively little to weed through reams of online data, with some saying they have suffered psychological trauma from the content they're asked to review. In a 2019 interview with Bloomberg, Wang said the company's contract workers earn 'good' pay, 'in the 60th to 70th percentile of wages in their geography.' Scale AI spokesperson Joe Osborne noted that the U.S. Department of Labor recently dropped an investigation into the company's compliance with fair labor laws.

The business has evolved. More tech firms have begun to experiment with using synthetic, AI-generated data to train AI systems, potentially reducing the need for some of the services Scale historically provided. However, the leading AI labs are also struggling to get enough high-quality training data to build more advanced AI systems that are capable of fielding complex tasks as well as, or better than, humans. To meet that need, Scale has increasingly turned to better-paid contractors with graduate degrees to improve AI systems.
These experts participate in a process known as reinforcement learning, which rewards a system for correct answers and punishes it for incorrect ones. The experts who work with Scale are tasked with constructing tricky problems, tests, essentially, for the models to solve, according to a person familiar with the matter who asked not to be named because the information is private. As of early 2025, 12% of the company's pool of contributors who work on the process of improving these models had a PhD in fields such as molecular biology, and more than 40% had a master's degree, law degree, or MBA in their field, the person said.

Much of this process is aimed at companies that want to use AI for medical and legal applications, the person said. One area of focus, for example, is getting AI models to better answer questions regarding tax law, which can differ greatly from country to country and even state to state.

Projects like those are driving significant growth for the company. Scale generated about $870 million in revenue in 2024 and expects $2 billion in revenue this year, Bloomberg News reported in April. Scale has seen demand for its network of experts increase in the wake of DeepSeek, the person familiar with the matter said, as more companies invest in models that mimic human reasoning and carry out more complicated tasks.

Scale has also deepened its relationship with the US government through defense deals. Wang, a China hawk, has cozied up to lawmakers on the Hill who are concerned about China's ascendance in AI. And Michael Kratsios, a former executive at Scale, is now one of President Donald Trump's top tech aides, helping to steer US policy on AI. For Meta, partnering more deeply with Scale may simultaneously help it keep pace with AI rivals like Google and OpenAI, and also help it build deeper ties with the US government at a time when it's pushing more into defense tech. For Scale, a tie-up with Meta offers a powerful and deep-pocketed ally. It would also be a fitting full-circle moment for Wang.
Shortly after launching Scale, Wang said he was asked by one venture capitalist when he knew he wanted to build a startup. In response, Wang said he 'rattled off some silly answer about being inspired by The Social Network,' the film about the founding of Facebook.