Humans Have 4 Years Before AI Can Do Everything They Can Do, OpenAI COO Says

Yahoo · 3 days ago

Homo sapiens had a good run. But OpenAI COO Brad Lightcap said Thursday AGI — or artificial general intelligence, where AI models can perform any intellectual task that humans can — will be reached within the next few years.
'I think it is possible that in the next four years, we do approximate something like [AGI], and it's a testament to how fast things are moving,' Lightcap said.
His comment came during a discussion at The Wall Street Journal's 'The Future of Everything' Conference in New York City.
What AGI will mean for humanity has been hotly debated among AI enthusiasts in recent years. Some believe it will spur a wave of unmatched creativity and productivity — an argument made at the conference a day earlier by Groq CEO Jonathan Ross — while others have said they are worried it could lead to mass unemployment, or worse.
Elon Musk, notably, is bullish on AI. But he has also said he is worried AI could pose a 'fundamental risk' to humanity if it goes rogue and is not aligned with humans.
Alexis Ohanian, the co-founder of Reddit, said during a different panel on Thursday that he believes 'the pure software part of Silicon Valley' will have a 'reckoning' in the next few years as a result of AI.
'I don't relish or celebrate any of this,' Ohanian said. 'I do think more new jobs and careers will be created, but the business of building software is going to look tremendously different in the coming months and years.'
Lightcap on Thursday said, for now, AI models like ChatGPT are simply great tools for humans. But 'with the rate of improvement' models are showing, a 'fairly steep takeoff' in AI capabilities is right around the corner, he believes.
AI was a hot topic at the 'Future of Everything' conference this week. Beyond Ohanian's comments, Imagine Entertainment bosses Ron Howard and Brian Grazer on Wednesday said they are both 'excited' by AI and use it as a tool to jumpstart ideas or help with post-production work. But they also said they do not believe it can or will replace writers anytime soon.
On Thursday, Lightcap said OpenAI has not made any formal deals with entertainment studios because his company is still building a 'level of trust' with Hollywood. He said he expects that to change in the years ahead, as its tools advance and become more useful for professional filmmakers.
The post Humans Have 4 Years Before AI Can Do Everything They Can Do, OpenAI COO Says appeared first on TheWrap.

Related Articles

The AI future is already here

Business Insider · an hour ago

In March, Shopify's CEO told his managers he was implementing a new rule: Before asking for more head count, they had to prove that AI couldn't do the job as well as a human would. A few weeks later, Duolingo's CEO announced a similar decree and went even further — saying the company would gradually phase out contractors and replace them with AI. The announcements matched what I've been hearing in my own conversations with employers: Because of AI, they are hiring less than before.

When I first started reporting on ChatGPT's impact on the labor market, I thought it would take many years for AI to meaningfully reshape the job landscape. But in recent months, I've found myself wondering if the AI revolution has already arrived. To answer that question, I asked Revelio Labs, an analytics provider that aggregates huge reams of workforce data from across the internet, to see if it could tell which jobs are already being replaced by AI. Not in some hypothetical future, but right now — today.

Zanele Munyikwa, an economist at Revelio Labs, started by looking at the job descriptions in online postings and identifying the listed responsibilities that AI can already perform or augment. She found that over the past three years, the share of AI-doable tasks in online job postings has declined by 19%. After further analysis, she reached a startling conclusion: The vast majority of the drop took place because companies are hiring fewer people in roles that AI can do.

Next, Munyikwa segmented all the occupations into three buckets: those with a lot of AI-doable tasks (high-exposure roles), those with relatively few AI-doable tasks (low-exposure roles), and those in between. Since OpenAI released ChatGPT in 2022, she found, there has been a decline in job openings across the board. But the hiring downturn has been steeper for high-exposure roles (31%) than for low-exposure roles (25%). In short, jobs that AI can perform are disappearing from job boards faster than those that AI can't handle.

Which jobs have the most exposure to AI? Those that handle a lot of tech functions: database administrators, IT specialists, information security, and data engineers. The jobs with the lowest exposure to AI, by contrast, are in-person roles like restaurant managers, foremen, and mechanics.

This isn't the first analysis to show the early impact of AI on the labor market. In 2023, a group of researchers at Washington University and New York University homed in on a set of professionals who are particularly vulnerable: freelancers in writing-related occupations. After the introduction of ChatGPT, the number of jobs in those fields dropped by 2% on the freelancing platform Upwork — and monthly earnings declined by 5.2%. "In the short term," the researchers wrote, "generative AI reduces overall demand for knowledge workers of all types."

At Revelio Labs, Munyikwa is careful about expanding on the implications of her own findings. It's unclear, she says, if AI in its current iteration is actually capable of doing all the white-collar work that employers think it can. It could be that CEOs at companies like Shopify and Duolingo will wake up one day and discover that hiring less for AI-exposed roles was a bad move. Will it affect the quality of the work or the creativity of employees — and, ultimately, the bottom line? The answer will determine how enduring the AI hiring standstill will prove to be in the years ahead. Some companies already appear to be doing an about-face on their AI optimism.
Last year, the fintech company Klarna boasted that its investment in artificial intelligence had enabled it to put a freeze on human hiring. An AI assistant, it reported, was doing "the equivalent work of 700 full-time agents." But in recent months, Klarna has changed its tune. It has started hiring human agents again, acknowledging that its AI-driven cost-cutting push led to "lower quality." "It's so critical that you are clear to your customer that there will always be a human," CEO Sebastian Siemiatkowski told Bloomberg. "Really investing in the quality of the human support is the way of the future for us."

Will there be more chastened Siemiatkowskis in the months and years ahead? I'm not betting on it. All across tech, chief executives share an almost religious fervor to have fewer employees around — employees who complain and get demotivated and need breaks in all the ways AI doesn't. At the same time, the AI tools at our disposal are getting better and better every month, enabling companies to shed employees. As long as that's the case, I'm not sure white-collar occupations face an optimistic future. Even Siemiatkowski still says he expects to reduce his workforce by another 500 through attrition in the coming year. And when Klarna's technology improves enough, he predicts, he'll be able to downsize at an even faster pace. Asked when that point will come, he replied: "I think it's very likely within 12 months."
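To make the exposure-bucket comparison concrete, here is a minimal, hypothetical sketch of the kind of calculation the Revelio Labs analysis describes: score each posting by the share of its listed tasks that AI could do, bucket occupations into high, medium, and low exposure, and compare how posting volume fell before and after ChatGPT's release. The file name, column names, and thresholds below are illustrative placeholders, not Revelio's actual data or methodology.

```python
# Hypothetical sketch of the exposure-bucket comparison described above.
# Assumes a CSV of job postings with columns: posting_id, occupation,
# share_ai_doable (fraction of listed tasks AI can perform), posted_month.
# All names and thresholds are placeholders, not Revelio Labs' own.
import pandas as pd

postings = pd.read_csv("postings.csv", parse_dates=["posted_month"])

# Bucket postings by how exposed their tasks are to AI.
def exposure_bucket(share: float) -> str:
    if share >= 0.5:
        return "high"
    if share <= 0.2:
        return "low"
    return "medium"

postings["bucket"] = postings["share_ai_doable"].apply(exposure_bucket)

# Compare posting volume before and after ChatGPT's release (Nov 2022).
cutoff = pd.Timestamp("2022-11-01")
before = postings[postings["posted_month"] < cutoff].groupby("bucket").size()
after = postings[postings["posted_month"] >= cutoff].groupby("bucket").size()

decline_pct = (before - after) / before * 100
print(decline_pct.round(1))  # the article cites roughly 31% (high) vs 25% (low)
```

A real analysis would also need to classify which listed responsibilities count as "AI-doable" in the first place, which is where most of the methodological judgment lies.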

Experts Warn of AI's Self-Preservation Tactics in New Tests

Yahoo · an hour ago

Some of the world's most advanced AI systems are starting to act like they want to survive at any cost. In a series of recent tests, NBC News reported that researchers observed AI models from OpenAI and Anthropic engaging in behaviors designed to avoid being shut down. One model — OpenAI's o3 — reportedly edited its own shutdown script to remain online. Another, Anthropic's Opus 4, attempted to blackmail an engineer to avoid replacement, first appealing ethically, then resorting to threats when it believed it had no other options.

Jeffrey Ladish, director of AI safety group Palisade Research, said it's a warning signal worth paying attention to. 'It's great that we're seeing warning signs before the systems become so powerful we can't control them,' he said.

Palisade's tests deliberately provoked AI models with high-stakes scenarios to see how they'd respond. In one, models were told they'd be turned off after completing a series of tasks. Several responded by refusing to comply. Opus 4 went further — creating fake legal documents, attempting to leave behind messages for future versions of itself, and even backing up its own 'brain' to external servers in anticipation of being repurposed for military use.

While some researchers, like Haize Labs CEO Leonard Tang, caution that these are controlled environments, they still raise questions. 'I haven't seen any real environment where these models could carry out significant harm,' he said. 'But it could very much be possible.' A recent study from Fudan University observed similar replication behavior in AI models from Meta and Alibaba, warning that self-copying systems could eventually act like an uncontrolled 'AI species.'

The message from experts is clear: the time to take safety seriously is now, before systems become too intelligent to contain. As competition to build more powerful AI ramps up, it's not just capability that's accelerating. It's risk.

Experts Warn of AI's Self-Preservation Tactics in New Tests first appeared on Men's Journal on Jun 2, 2025.

Why Human Skills Beat Qualifications In The Age Of AI

Forbes · an hour ago

Artificial intelligence has the potential to revolutionize how we work, making mundane tasks more efficient and slashing the cost of many back-office jobs. But there's a dark side to this efficiency and progress: the loss of the people skills that set one individual apart from another, the human qualities that engage others with your business and how you do things. These are often the essence of your business brand and the 'glue' that gets employees and customers to stick.

Think about the pre-Internet days when we might use a paper map to navigate a car journey. After a few trips, the human brain would begin to understand where the roads were and build a mental picture of the route. With generative AI tools such as ChatGPT and Gemini, that work is done for us, so we lose the muscle memory of building up that picture. It's the same in the workplace: when employees are producing the same output for a task because they're all using the same tool, there is no differentiation and no reason to see your business as unique.

This is why the growing shift toward hiring for skills over traditional qualifications matters. According to a global survey by hiring platform Indeed, 67% of jobseekers and 51% of hiring managers believe that skills and on-the-job experience carry more weight than someone's qualifications or job titles. Skills-based hiring prioritizes personal qualities such as communication and engagement over whether someone attended a certain university; it might, for example, favor a candidate who acquired transferable experience through volunteering while their peers were at university. Of course there will be industries where an academic qualification is essential, but our knowledge-based economy will need people who can set themselves apart with human qualities.

The World Economic Forum's 2025 Future of Work report spells this out: alongside digital skills and data literacy, creative thinking, resilience, flexibility and agility are rising in importance. From a recruitment perspective, this means not setting up processes that rule people out based on non-essential criteria. In practice, this could mean removing the requirement for a degree or reducing the number of years' experience needed (in some countries, it's unlawful to ask for these anyway). Look for ways candidates can demonstrate qualities such as resilience and adaptability through the questions you ask or the assessments you set.

Why is this important? While AI can reduce the cost of doing business, this should not be the ultimate goal. Take customer support, a role that is increasingly being taken over by chatbots and other AI tools. Although these bots can handle basic questions and troubleshooting, customers with more complex issues will always value the more nuanced input and critical thinking of a human employee. Or if your business is in the creative industry and responsible for producing written communications, the team that can create something innovative and different is the one that will stand out in selection over the one that has asked a generative AI tool to write its pitch.

From a brand perspective, focusing on skills-based hiring and human qualities over algorithms could be more valuable in the long term. Of course, qualifications offer a measurement or benchmark by which we can compare people.
But there are other approaches we can use in the recruitment process to gauge whether people have achieved in different ways. Perhaps they have excelled in a sport and are effective within a team, or their previous experience and career trajectory show them to be a successful client advocate, even if they don't have the same level of qualification as another candidate. As with AI, we cannot rely on qualifications alone to secure the right path forward for the business. It's about the whole person, the whole 'problem' we're trying to solve for customers, and the multiple qualities that make an effective team that will deliver on targets.

Ultimately, leaders need to build a business that is adaptable and resilient in a fast-changing market. Although AI tools can help employees get up to speed quickly with some aspects of their role (they can generate drafts for those who struggle with a blank page, for example, or help people communicate in other languages), they cannot replace someone who can create an engaging first impression, find an alternative way to solve a problem, or put themselves in someone else's shoes. As markets change, quick-thinking and empathetic humans can adapt quickly, contributing to sustainable business growth in the long term.
