
Nvidia CEO: You won't lose your job to AI—you'll 'lose your job to somebody who uses AI'
You should, however, expect your job to be threatened by people who understand artificial intelligence better than you do, Jensen Huang said at the Milken Institute's Global Conference 2025 on May 6.
"Every job will be affected, and immediately. It is unquestionable," said Huang, 62, whose $3.3 trillion company designs some of the computer chips that power popular AI tools. "You're not going to lose your job to an AI, but you're going to lose your job to someone who uses AI."
There aren't any job postings on Indeed that AI can do completely on its own, Indeed CEO Chris Hyams told CNBC Make It on March 31. But two-thirds of roles on the platform include tasks that AI can perform reasonably well, Hyams said.
Workers who can get AI systems to handle those tasks are becoming more desirable to employers, Huang said: "There are about 30 million people in the world who know how to program and use this technology to its extreme. The instrument we invented, we know how to use, but the other 7-and-a-half billion people don't."

Not every CEO in the AI industry fully agrees with Huang. The technology could eliminate half of all entry-level white-collar jobs within the next one to five years, Anthropic CEO Dario Amodei told Axios on Wednesday.
"Cancer is cured, the economy grows at 10% a year, the budget is balanced — and 20% of people don't have jobs," Amodei predicted, adding that he sees AI evolving from assisting many entry-level jobs to automating their responsibilities entirely.
One potential end result, he said: CEOs will simply stop posting as many new job openings.
Companies like Shopify, Duolingo and Fiverr are already encouraging — or mandating — that some, or all, of their employees incorporate AI into their work. At Shopify, managers are encouraged to exhaust those tools before asking for more headcount, according to a company-wide memo from CEO Tobi Lutke.
Huang, for his part, has said that AI will lead to at least some job creation, particularly in fields like software engineering and computer programming.
"What used to be human-coded softwares running on CPUs are now machine learning generated softwares running on GPUs," he said at The Hill and Valley Forum in April. "Every single layer of the tooling of it ... is being invented right now and it creates tons of jobs at the next layer ... A whole bunch of new trade jobs have to be created."
Huang has frequently touted AI's current ability to help workers do their jobs more efficiently. He personally uses chatbots like Google's Gemini and OpenAI's ChatGPT to write his first drafts, he said on a January 7 episode of Wharton organizational psychologist Adam Grant's "ReThinking" podcast.
You can also use these tools for more complex projects, he noted at the conference.
"If you don't know how to program a computer, you just tell the AI, 'I don't know how to program [computers]. How do I program them?' And the AI will tell you exactly how to [do so]," he said. "You could draw a schematic and show it to it. You could draw a picture and ask it what to do."
His recommendation: Get comfortable with AI, especially if you're a student. Billionaire entrepreneur and investor Mark Cuban similarly advises students to learn how to use AI tools.
"When I talk to kids today and they ask me what I would do if I were 12 today, my answer is always the same, read books and learn how to use [artificial intelligence] in every way, shape and form you can," Cuban, 66, wrote in a February 17 post on social media platform BlueSky.
Since 2019, Cuban has committed millions of dollars to hosting free AI bootcamps for high school students in low-income U.S. areas. His programs aim to help develop "under-appreciated" talent who can ultimately help boost the country's global competitiveness, he told the Wall Street Journal in October 2020.
At the conference, Huang expressed a complementary viewpoint. "You could argue that artificial intelligence is probably our best way to increase the GDP," he said. "Don't be that person who ignores this technology ... Take advantage of AI."
Related Articles


New York Times
A.I. Start-Up Perplexity Offers to Buy Google's Chrome Browser for $34.5 Billion
In an unlikely bid that shows the growing brashness of young artificial intelligence companies, the A.I. start-up Perplexity has made an unsolicited offer to buy Google's Chrome web browser for $34.5 billion.

The tiny company made its offer against the backdrop of an upcoming antitrust decision against the tech giant. In a ruling due as early as this week, U.S. District Judge Amit Mehta could force Google to sell its web browser as a way of reducing the company's dominance in the internet search market.

The Perplexity chief executive, Aravind Srinivas, said in a letter to Sundar Pichai, chief executive of Google's parent company, Alphabet, that its offer to buy the Chrome browser was 'designed to satisfy an antitrust remedy in highest public interest by placing Chrome with a capable, independent operator.'

Google did not immediately respond to a request for comment. Perplexity's offer was previously reported by The Wall Street Journal.

Perplexity is among the many companies that want to challenge Google's search engine through online chatbots and similar technologies that respond to queries with short sentences rather than just a list of links. The Chrome browser could give it an edge among Google's many challengers, including Microsoft, OpenAI and other Silicon Valley start-ups.

But the unsolicited bid is a long shot, since Perplexity itself is valued at an estimated $18 billion. Jesse Dwyer, a spokesman for the company, told The New York Times that outside investors had agreed to back a potential deal.

Judge Mehta ruled last year that Google had violated antitrust rules to maintain its dominance in the search market. The Justice Department has pushed for a federal court to force Google to sell its Chrome browser in a series of aggressive remedy proposals after prevailing in its antitrust case against the search giant. The department has argued that forcing Google to divest Chrome and share search results and ads with rivals would create more competition.

The government told Judge Mehta that Google's monopoly — it controls about 90 percent of the search market — cannot be remedied without forceful structural changes to the company. And without a remedy like the sale of Chrome, Google was poised to dominate A.I., the government argued.

'This court's remedy should be forward looking and not ignore what's on the horizon,' said David Dahlquist, the government's lead litigator. 'Google is using the same strategy that they did for search and now applying it to Gemini,' he added, referring to Google's A.I. technology.

Google has argued against the sale of Chrome, proposing smaller tweaks to its business model. Perplexity and other A.I. companies testified in April during Judge Mehta's antitrust remedies hearing that they were interested in buying Chrome.

Perplexity was founded in 2022 by a group of A.I. researchers including Mr. Srinivas, who previously worked at OpenAI. In an effort to boost usage of its A.I.-powered search engine, the company has started to offer a web browser of its own, called Comet. The New York Times sent Perplexity a cease and desist letter last year demanding that the company stop using its content to help power its A.I. technologies.
Yahoo
China Urges Firms to Avoid Nvidia H20 Chips After Trump Resumes Sales
(Bloomberg) -- Beijing has urged local companies to avoid using Nvidia Corp.'s H20 processors, particularly for government-related purposes, complicating the chipmaker's return to China after the Trump administration reversed an effective US ban on such sales.

Over the past few weeks, Chinese authorities have sent notices to a range of firms discouraging use of the less-advanced semiconductors, people familiar with the matter said. The guidance was particularly strong against the use of H20s for any government or national security-related work by state enterprises or private companies, said the people, who asked not to be identified because the information is sensitive. The letters didn't, however, constitute an outright ban on H20 use, according to the people.

Industry analysts broadly agree that Chinese companies still covet those chips, which perform quite well in certain crucial AI applications. President Donald Trump said Monday that the processor 'still has a market' in the Asian country despite also calling it 'obsolete.'

Nvidia and Advanced Micro Devices Inc. both recently secured Washington's approval to resume lower-end AI chip sales to China, on the controversial and legally questionable condition that they give the US government a 15% cut of the related revenue. But even with Trump's team on board, the two companies face the challenge that their Chinese customers are under Beijing's pressure to purchase domestic chips instead.

Beijing's overall push affects AI accelerators from AMD in addition to Nvidia, one of the people said, though it's unclear whether any letters specifically mentioned AMD's MI308 chip. Shares of Chinese AI chip designer Cambricon Technologies Corp. surged to their daily limit of 20% on the news of China's guidance, leading a rally in peers such as Semiconductor Manufacturing International Corp.

Beijing's stance could limit Trump's ability to turn his export control about-face into a windfall for government coffers, a deal that highlighted his administration's transactional approach to national security policies long treated as nonnegotiable. Still, Chinese companies may not be ready to jump ship to local semiconductors.

'Chips from domestic manufacturers are improving dramatically in quality, but they might not be as versatile for specific workloads that China's domestic AI industry hopes to focus on,' said Homin Lee, a senior macro strategist at Lombard Odier in Singapore. Lee added that he anticipates 'strong' demand for the chips the Trump administration is allowing Nvidia and AMD to sell.

Rosenblatt Securities analyst Kevin Cassidy said he doesn't anticipate that Nvidia's processor sales to China will be affected because 'Chinese companies are going to want to use the best chips available.' Nvidia and AMD's chips are superior to local alternatives, he said.

Beijing asked companies about that issue in some of its letters, according to one of the people, posing questions such as why they buy Nvidia H20 chips over local versions, whether that's a necessary choice given domestic options, and whether they've found any security concerns in the Nvidia hardware.

The notices coincide with state media reports that cast doubt on the security and reliability of H20 processors. Chinese regulators have raised those concerns directly with Nvidia, which has repeatedly denied that its chips contain such vulnerabilities. The Financial Times reported that some Chinese companies are planning to decrease orders of Nvidia chips in response to the letters.

Right now, the people said, China's most stringent chip guidance is limited to sensitive applications, a situation that bears similarities to the way Beijing restricted Tesla Inc. vehicles and Apple Inc. iPhones in certain institutions and locations over security concerns. China's government also at one point barred the use of Micron Technology Inc. chips in critical infrastructure. It's possible that Beijing may extend its heavier-handed Nvidia and AMD guidance to a wider range of settings, according to one person with direct knowledge of the deliberations, who said that those conversations are in early stages.

AMD declined to comment on Beijing's notices, while Nvidia said in a statement that 'the H20 is not a military product or for government infrastructure.' China has ample supplies of domestic chips, Nvidia said, and 'won't and never has relied on American chips for government operations.'

China's Ministry of Industry and Information Technology and the Cyberspace Administration of China didn't respond to faxed requests for comment on this story, which is based on interviews with more than a half-dozen people familiar with Beijing's policy discussions. The White House didn't respond to a request for comment.

The Chinese government's posture raises questions about the Trump administration's explanation for why the US is allowing those exports mere months after effectively banning such sales. Multiple senior US officials have said their policy reversal was the result of trade talks with China, but Beijing has publicly indicated that the resumed H20 shipments weren't part of any bilateral deal. China's recent notices to companies suggest that the Asian country may not have sought such a concession from Washington in the first place.

Beijing's concerns are twofold. For starters, Chinese officials are worried that Nvidia chips could have location-tracking and remote-shutdown capabilities — a suggestion that Nvidia has vehemently denied. Trump officials are actively exploring whether location tracking could be used to help curtail suspected smuggling of restricted components into China, and lawmakers have introduced a bill that would require location verification for advanced AI chips. Second, Beijing is intensely focused on developing its domestic chip capabilities, and wants Chinese companies to shift away from Western chips in favor of local offerings. Officials have previously urged Chinese firms to choose domestic semiconductors over Nvidia H20 processors, Bloomberg reported last September, and have introduced energy efficiency standards that the H20 chip doesn't meet.

Nvidia designed the H20 chip specifically for Chinese customers to abide by years of US restrictions on sales of its more advanced hardware, curbs designed to limit Beijing's access to AI that could benefit the Chinese military. The H20 chip has less computational power than Nvidia's top offerings, but its strong memory bandwidth is quite well suited to the inference stage of AI development, when models recognize patterns and draw conclusions. That's made it a desirable product to companies like Alibaba Group Holding Ltd. and Tencent Holdings Ltd. in China, where domestic chip champion Huawei Technologies Co. is struggling to produce enough advanced components to meet market demand.

By one estimate from Biden officials — who considered but did not implement controls on H20 sales — losing access to that Nvidia chip would make it three to six times more expensive for Chinese companies to run inference on advanced AI models.

'Beijing appears to be using regulatory uncertainty to create a captive market sufficiently sized to absorb Huawei's supply, while still allowing purchases of H20s to meet actual demands,' said Lennart Heim, an AI-focused researcher at RAND, of China's push for companies to avoid American AI chips. 'This signals that domestic alternatives remain inadequate even as China pressures foreign suppliers.'

In his remarks Monday, Trump said China's Huawei already offers chips comparable to the Nvidia H20, echoing previous remarks by officials in his administration who've defended the decision to resume H20 exports partly on those grounds. The US should keep the Chinese AI ecosystem reliant on less-advanced American technology for as long as possible, these officials say, in order to deprive Huawei of the revenue and know-how that would come from a broader customer base. Other administration officials have strongly objected to that logic, Bloomberg has reported, arguing that resuming H20 exports will only embolden China's tech champions and bolster the country's overall computing power.

Commerce Secretary Howard Lutnick and other Trump officials have also claimed that the H20 move was part of a deal to improve American access to Chinese rare-earth minerals — despite the Trump team's previous assertions that such an arrangement wasn't on the table. 'As the Chinese deliver their magnets, then the H20s will come off,' Lutnick said last month. Treasury Secretary Scott Bessent said in late July that the magnet issue had been 'solved.' The first Nvidia H20 and AMD MI308 licenses arrived a bit over a week after Bessent's declaration — after Nvidia Chief Executive Officer Jensen Huang met with the president and both companies agreed to share their China revenue with the US government.

--With assistance from Yanping Li, Sangmi Cha and Emily Forgash.

©2025 Bloomberg L.P.
Yahoo
Anthropic's Claude AI model can now handle longer prompts
Anthropic is increasing the amount of information that enterprise customers can send to Claude in a single prompt, part of an effort to attract more developers to the company's popular AI coding models.

For Anthropic's API customers, the company's Claude Sonnet 4 AI model now has a one million token context window — meaning the AI can handle requests as long as 750,000 words, more than the entire Lord of the Rings trilogy, or 75,000 lines of code. That's roughly five times Claude's previous limit (200,000 tokens), and more than double the 400,000 token context window offered by OpenAI's GPT-5. Long context will also be available for Claude Sonnet 4 through Anthropic's cloud partners, including on Amazon Bedrock and Google Cloud's Vertex AI.

Anthropic has built one of the largest enterprise businesses among AI model developers, largely by selling Claude to AI coding platforms such as Microsoft's GitHub Copilot, Windsurf, and Anysphere's Cursor. While Claude has become the model of choice among developers, GPT-5 may threaten Anthropic's dominance with its competitive pricing and strong coding performance. Anysphere CEO Michael Truell even helped OpenAI announce the launch of GPT-5, which is now the default AI model for new users in Cursor.

Anthropic's product lead for the Claude platform, Brad Abrams, told TechCrunch in an interview that he expects AI coding platforms to get a 'lot of benefit' from this update. When asked if GPT-5 put a dent in Claude's API usage, Abrams downplayed the concern, saying he's 'really happy with the API business and the way it's been growing.'

Whereas OpenAI generates most of its revenue from consumer subscriptions to ChatGPT, Anthropic's business centers around selling AI models to enterprises through an API. That's made AI coding platforms a key customer for Anthropic, and could be why the company is throwing in some new perks to attract users in the face of GPT-5. Last week, Anthropic unveiled an updated version of its largest AI model, Claude Opus 4.1, which pushed the company's AI coding capabilities a bit further.

Generally speaking, AI models tend to perform better when they have more context, especially on software engineering problems. For example, if you ask an AI model to spin up a new feature for your app, it's likely to do a better job if it can see the entire project, rather than just a small section.

Abrams also told TechCrunch that Claude's large context window helps it perform better at long agentic coding tasks, in which the AI model works autonomously on a problem for minutes or hours. With a large context window, Claude can remember all its previous steps in long-horizon tasks.

But some companies have taken large context windows to an extreme, claiming their AI models can process massive prompts. Google offers a 2 million token context window for Gemini 2.5 Pro, and Meta offers a 10 million token context window for Llama 4 Scout. Some studies suggest there's a limit to how large context windows can be, and that AI models are not great at processing massive prompts. Abrams said Anthropic's research team focused on increasing not just the context window for Claude, but the 'effective context window,' suggesting that its AI can understand most of the information it's given. However, he declined to reveal Anthropic's exact techniques.
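For developers wondering what a long-context request looks like in practice, the minimal sketch below uses Anthropic's Python SDK. The model ID, the beta header that opts a request into the larger window, and the project directory are assumptions for illustration rather than details from the article, so check Anthropic's documentation for the exact values.

# Minimal sketch: sending a large codebase to Claude Sonnet 4 in one prompt.
# Assumptions (not from the article): the model ID "claude-sonnet-4-20250514"
# and the beta header value "context-1m-2025-08-07"; the "my_project" path is hypothetical.
from pathlib import Path

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Concatenate an entire project so the model can see all of it at once.
project_text = "\n\n".join(
    f"# file: {p}\n{p.read_text(errors='ignore')}"
    for p in Path("my_project").rglob("*.py")
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",                            # assumed model ID
    max_tokens=4096,
    extra_headers={"anthropic-beta": "context-1m-2025-08-07"},   # assumed long-context opt-in
    messages=[
        {
            "role": "user",
            "content": f"Here is my whole codebase:\n{project_text}\n\n"
                       "Add a feature that exports reports to CSV.",
        }
    ],
)

print(response.content[0].text)

The point of the larger window is simply that project_text no longer has to be trimmed down to a 200,000-token slice before it is sent.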
For prompts to Claude Sonnet 4 over 200,000 tokens, Anthropic charges API users higher rates: $6 per million input tokens and $22.50 per million output tokens, up from $3 per million input tokens and $15 per million output tokens.
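To make the tiered pricing concrete, here is a small calculation using only the per-million-token rates quoted above. The token counts are hypothetical examples, and the sketch applies the higher rates to the whole request once the prompt crosses 200,000 tokens, as the article describes.

# Cost comparison using the per-million-token rates quoted above.
# Token counts are hypothetical examples, not figures from the article.

def claude_sonnet4_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one API request at the quoted rates."""
    if input_tokens > 200_000:
        in_rate, out_rate = 6.00, 22.50   # long-context pricing (prompt over 200K tokens)
    else:
        in_rate, out_rate = 3.00, 15.00   # standard pricing
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

# A 150K-token prompt stays on the standard tier; an 800K-token prompt does not.
print(claude_sonnet4_cost(150_000, 4_000))   # 0.45 + 0.06  = ~$0.51
print(claude_sonnet4_cost(800_000, 4_000))   # 4.80 + 0.09  = ~$4.89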