Sam Altman says we're worried about the wrong part of the workforce when it comes to AI
Altman, the CEO of OpenAI, said he's more worried about how older employees will adapt to AI changing the workplace than about recent college graduates. His view contrasts with how many of his competitors and others in tech think about the next five years.
"I'm more worried about what it means not for the 22-year-old, but for the 62-year-old that doesn't want to go retrain or reskill or whatever the politicians call it that no one actually wants," Altman said recently on Cleo Abram's "Huge Conversations" YouTube show.
Altman said jobs will definitely go away because of AI, but young people are positioned to weather the changes better than others.
"If I were 22 right now and graduating college, I would feel like the luckiest kid in all of history," Altman said.
Overall, he said, the breadth of what AI can do makes this a powerful moment for those willing to seize the opportunity.
"There's never been a more amazing time to go create something totally new, to go invent something, to start a company, whatever it is," Altman said. "I think it is probably possible now to start a company that is a one-person company that will go on to be worth more than a billion dollars. And more importantly than that, deliver an amazing product and service to the world."
Fellow tech CEOs, including those in AI, don't share Altman's optimism. Anthropic CEO Dario Amodei said earlier this summer that AI could wipe out as much as half of entry-level white-collar jobs in the next five years. It's a potential catastrophe that Amodei said the tech industry and policymakers are not doing enough to prepare society for.
Beyond five years, Altman said, it gets harder to predict what the world will look like due to the rapid changes his field is spurring.
"In 2035, that graduating college student, if they still go to college at all, could very well be leaving on a mission to explore the solar system on a spaceship in some kind of completely new, exciting, super well-paid, super interesting job, and feeling so bad for you and I that we had to do this kind of really boring old kind of work and everything is just better," he said.

Related Articles


Forbes
32 minutes ago
The Prompt: SEO Is Dead. What Comes Next?
Welcome back to The Prompt. Evertune CEO Brian Stempeck says users will stay within the "walled garden" of an AI model to do research before purchasing an item, collapsing the sales funnel into one place.

Chatbots are quickly becoming 'the front door to the internet' — a first stop for crucial information, said David Azose, who leads engineering for OpenAI's business products team. (40% of U.S. adults have used generative AI as of late 2024, according to the National Bureau of Economic Research.) Millions of people across the globe are asking AI systems like ChatGPT, Claude and Gemini for suggestions on how to write, what to wear, where to go and, increasingly, where to shop.

That's put businesses in a tough spot. After years of search engine optimization, like link building, meta tagging and pumping out how-to blogs with keywords to make sure they rank on the first page of Google, businesses now want to understand not only how they show up in answers generated by AI, but also how to show up more often. That's opened doors for a string of fledgling startups aiming to equip companies with crucial data about how their brands feature in AI-generated answers, what context they appear in and how they compare with competitors.

One of those startups is New York-based Evertune. Founded in early 2024 by former executives at advertising company The Trade Desk, the company aims to help businesses gauge what AI models say about them. By running 100,000 prompts anywhere from 10 to 20 times a month, Evertune creates a map of the words that are most closely associated with a brand, said CEO Brian Stempeck. 'That's about 10x what any competitor of ours is doing,' he said. The startup has raised $15 million in funding from Felicis Ventures as well as a group of angel investors, including Azose. The company declined to share its valuation.
The scale of these prompts is crucial because AI answers aren't deterministic — responses can change with every new model update and depend on a user's chat history. Also, unlike traditional search, AI models give different answers to the same question when it's worded slightly differently. Stempeck claims a more exhaustive approach, prompting models thousands of times, can help build an aggregate view that's representative of the models' answers. Each customer on average gets one to two million prompts a month. 'People are going to delegate purchasing decisions to AI agents,' Azose said. 'SEO as we know it will largely disappear.'

Let's get into the headlines.

BIG PLAYS
AI search engine Perplexity made an unsolicited bid to buy Google's Chrome browser for $34.5 billion, The Wall Street Journal reported. That's billions more than the three-year-old startup, reportedly valued at $18 billion, has raised so far, but CEO Aravind Srinivas claims that venture funds are willing to shell out money to back the transaction. The news comes on the heels of a U.S. district judge's ruling that Google has illegally maintained a monopoly in the search market; the judge is deciding whether to force Google to sell its popular browser, which is used by about 60% of internet users. (Perplexity recently released its own AI-powered browser called Comet.) This news might give you a bit of déjà vu: in March, Perplexity also tried to buy TikTok to help it avoid regulatory concerns. And in case you missed it, OpenAI finally launched GPT-5, its new flagship model that powers ChatGPT. The model excels at math, science and coding and can also create functioning web apps from just a few lines of description in plain English. So far, people aren't particularly impressed.

TALENT SHUFFLING
Move over, AI researchers. The new hot talent pool for frontier AI labs is 'quants' — the mathematicians who build algorithms to find trading opportunities for investment firms.
Anthropic, Perplexity and OpenAI are among the companies trying to lure them away from Wall Street with fat salaries and other benefits, per Bloomberg. Quants wrangle large unstructured datasets and have experience making models work faster, making them a prime fit for AI research.

HUMANS OF AI
Software engineering was once considered a high-paying, secure profession with near-unlimited appetite for new hires. In the age of AI coding assistants, a wave of freshly graduated computer scientists now find themselves with no offers after applying to thousands of jobs, the New York Times reported. After a year of job hunting, one graduate said the only company to call her back was Chipotle. She didn't get that job, either.

AI DEAL OF THE WEEK
Biotech companies looking to train AI models, which can then be used to discover treatments for diseases, are limited by a lack of data. Tahoe Therapeutics is trying to fix that. It recently created a dataset of 100 million datapoints showing how cancer cells respond to various molecules. The startup has raised $30 million in funding to generate more data that can be used to build its own proprietary datasets and models to power the discovery of new medicines, Forbes reported. Also notable: Read Forbes' Next Billion Dollar List for more on the AI startups most likely to become unicorns.

DEEP DIVE
AGI could wipe out jobs, or worse, according to some people, humans themselves. Students are dropping out of college to prevent that from happening. When Alice Blair enrolled at the Massachusetts Institute of Technology as a freshman in 2023, she was excited to take computer science courses and meet other people who cared about making sure artificial intelligence is developed in a way that's good for humanity. Now she's taking a permanent leave of absence, terrified that the emergence of 'artificial general intelligence,' a hypothetical AI that can perform a variety of tasks as well as people, could doom the human race.
'I was concerned I might not be alive to graduate because of AGI,' said Blair, who is from Berkeley, California. She's lined up a contract gig as a technical writer at the Center for AI Safety, a nonprofit focused on AI safety research, where she helps with newsletters and research papers. Blair doesn't plan to head back to MIT. 'I predict that my future lies out in the real world,' she said.

Blair's not the only student afraid of the potentially devastating impact that AI will have on the future of humanity if it becomes sentient and decides that people are more trouble than they're worth. But a lot of researchers disagree with that premise. 'Human extinction seems to be very, very unlikely,' New York University professor emeritus Gary Marcus, who studies the intersection of psychology and AI, told Forbes. Now, the field of AI safety and its promise to prevent the worst effects of AI is motivating young people to drop out of school. Other students are terrified of AGI, but less because it could destroy the human race and more because it could wreck their careers before they've even begun. Read the full story on Forbes.

MODEL BEHAVIOR
People are once again mourning the loss of a beloved AI model. Power users of OpenAI's GPT-4o were outraged and heartbroken after the company launched its new (and much-awaited) model, GPT-5, last week and shut down its predecessor, Forbes reported. Where GPT-4o had a flattering, funny and playful writing tone, GPT-5 is blunter and more academic. One user posted on Reddit: 'GPT-5 is wearing the skin of my dead friend.' As reactions poured in, OpenAI reversed course, saying that paying users on the Pro plan will have the option to keep using GPT-4o. This isn't the first time people have grieved for an old model after an upgrade. In late July, some 200 people held a funeral for a now-extinct version of Claude.
Yahoo
2 hours ago
Anthropic takes aim at OpenAI, offers Claude to 'all three branches of government' for $1
Just a week after OpenAI announced it would offer ChatGPT Enterprise to the entire federal executive branch workforce at $1 per year per agency, Anthropic has raised the stakes. The AI giant said Tuesday it would also offer its Claude models to government agencies for just $1 — but not only to the executive branch. Anthropic is targeting 'all three branches' of the U.S. government, including the legislative and judiciary branches. Anthropic says the package will be available for one year.

The move comes after OpenAI, Anthropic, and Google DeepMind were added to the General Services Administration's list of approved AI vendors that can sell their services to civilian federal agencies. TechCrunch has reached out to Google to see if it plans to respond to Anthropic's and OpenAI's challenges in kind.

Anthropic's escalation — a response to OpenAI's attempt to undercut the competition — is a strategic play meant to broaden the company's foothold in federal AI usage. 'We believe the U.S. public sector should have access to the most advanced AI capabilities to tackle complex challenges, from scientific research to constituent services,' Anthropic said in a statement. 'By combining broad accessibility with uncompromising security standards, we're helping ensure AI serves the public interest.'

Anthropic will offer both Claude for Enterprise and Claude for Government. The latter supports FedRAMP High workloads so that federal workers can use Claude for handling sensitive unclassified work, according to the company. FedRAMP High is a stringent security baseline within the Federal Risk and Authorization Management Program (FedRAMP) for handling unclassified sensitive government data. Anthropic will also provide technical support to help agencies integrate AI tools into their workflows, according to the company.
Anthropic, along with OpenAI, xAI, and Google, has been granted up to $200 million by the Department of Defense to leverage AI for national security, but the AI firm clearly hopes to integrate into a broader array of government work, including science research and health services. Anthropic noted in its press release that Claude is already being used at Lawrence Livermore National Laboratory to accelerate scientific discoveries, and also by the District of Columbia Department of Health to help residents access health services in multiple languages.

Anthropic says it is able to make such deployments because Claude 'meets the government's highest security standards.' Aside from being certified for FedRAMP High, customers can access Claude through their existing secure infrastructure via partnerships with AWS, Google Cloud, and Palantir, giving them more control over their data. Anthropic's multicloud access could give it an edge in the competition with OpenAI, whose current official FedRAMP High offering is tied to Azure Government Cloud only. While Azure is widely adopted in government, some government agencies and security teams might prioritize data sovereignty, infrastructure control, and the operational flexibility a multicloud strategy offers. OpenAI is, however, actively working to reduce its reliance on Azure so it can embrace a more diversified infrastructure approach.


Axios
2 hours ago
Musk-Altman rivalry devolves into lawsuit threats, insults in social media brawl
Tech titans Elon Musk and Sam Altman bickered over their relative trustworthiness on Tuesday, with each of their artificial intelligence platforms contributing to the feud.

The big picture: The two tech giants' acrimonious relationship has become increasingly public since Musk sued Altman last year for breaching OpenAI's founding mission.

Driving the news: Musk alleged that Apple engaged in an antitrust violation "that makes it impossible for any AI company besides OpenAI to reach #1 in the app store." "xAI will take immediate legal action," Musk wrote. "This is a remarkable claim given what I have heard alleged that Elon does to manipulate X to benefit himself and his own companies and harm his competitors and people he doesn't like," Altman responded. Apple did not immediately respond to Axios' request for comment.

Zoom out: Grok, powered by Musk's xAI, said Altman was right in the dispute. DeepSeek and Perplexity, other AI platforms, reached top App Store slots this year. "Musk has a history of directing X algorithm changes to boost his posts and favor his interests, per 2023 reports and ongoing probes," the Grok response said. "Hypocrisy noted." "good bot," ChatGPT's X account responded. Musk separately shared a screenshot asking ChatGPT 5 Pro whether he or Altman is more trustworthy. ChatGPT answered: "Elon Musk."

State of play: Altman and Musk have had repeated spats since Musk stepped down from OpenAI's board in 2018, as they both vie for leadership in artificial intelligence. Last year, Musk sued OpenAI, Altman and another founder, claiming that they "deceived" Musk a decade ago. A federal judge in March denied Musk's request to pause OpenAI's transition into a for-profit model but agreed to a fast-track trial in the fall of this year. In February, the feud between the two escalated as they both tried to wield influence over President Trump's AI plan. Musk was reportedly interested in buying OpenAI for $97.4 billion, an offer Altman refused.