Jensen Huang says the US needs to win a key people battle with China

Yahoo · 15 July 2025
If the US wants to lead in AI, it needs to win over the world's developers, Jensen Huang said.
"50% of the world's AI developers are in China," the Nvidia CEO said.
The US should stop restricting access and focus on expanding its tech influence, Huang added.
Nvidia's CEO, Jensen Huang, has a message for the US: To lead in AI, you need to win over the world's developers, starting with the ones in China.
The tech titan said on an episode of "Memos to the President" published Monday that leadership in AI isn't just about hardware or regulation — it's about people. And right now, many of them are outside America's reach.
"50% of the world's AI developers are in China," he said. "The first job of any platform is to win all developers."
Huang said that developers now come from everywhere — Africa, Latin America, Southeast Asia, and the Middle East — as demand for AI spreads across every country, industry, and company.
Huang said the US must ensure developers around the world are building on the "American tech stack," from chips to infrastructure to cloud platforms.
"The American tech stack should be the global standard," he said. "Just as the American dollar is the global standard."
Huang said that Washington needs to stop restricting access and start focusing on expanding influence.
"The more your technology is everywhere, the more developers you're going to have," he added.
Huang's comments came just before Nvidia announced that it will resume selling its H20 chips to China. The company said in a statement on Tuesday that the US government has "assured Nvidia that licences will be granted," with deliveries expected to begin soon.
The move marks a reversal from the Trump administration's earlier crackdown on advanced chip exports to China. In April, Nvidia warned that the restrictions could cost it billions in lost revenue.
An Nvidia spokesperson declined to comment.
Huang has been outspoken about the strength of China's AI industry.
In an interview earlier this year with Ben Thompson, the author of Stratechery, Huang said that China is doing "fantastic" in the AI market, with homegrown models like DeepSeek and Manus emerging as credible challengers to US-built systems.
He also said China's AI researchers are some of the very best in the world, and it's no surprise that US companies like OpenAI and Anthropic are hiring them.
"Our competition in China is really intense," Huang said in May at the Computex Taipei tech conference in Taiwan.
While China races ahead, Huang has been critical of Washington's response. He said in Taiwan that US chip export controls — aimed at slowing China's AI progress — have backfired.
"The export control gave them the spirit, the energy, and the government support to accelerate their development. So I think, all in all, the export control is a failure," he said.
Read the original article on Business Insider

Related Articles

D.Law Named to Inc.'s 2025 Best Workplaces List

Los Angeles Times · 13 minutes ago

The annual list recognizes businesses that set the standard for workplace success and awards excellence in company culture. D.Law is proud to announce it has been named to Inc.'s 2025 Best Workplaces list, honoring companies that have built exceptional workplaces and vibrant cultures that support their teams and businesses.
This year's list is the result of comprehensive measurement and evaluation of American companies that have excelled in creating exceptional workplaces and company cultures, whether in person or remote. The award process involved a detailed employee survey conducted by Quantum Workplace, covering critical elements such as management effectiveness, perks, professional development, and overall company culture. Each company's benefits were also audited to determine its overall score and ranking. D.Law is honored to be included among the 514 companies recognized this year.
'At D.Law, we believe our people are the heartbeat of everything we do. From professional growth opportunities to team celebrations and community involvement, we're committed to cultivating a workplace where everyone feels empowered, supported, and inspired to do their best work,' said Emil Davtyan, the firm's founder and managing attorney. 'Being named one of Inc.'s Best Workplaces confirms that our focus on people-first culture makes a meaningful difference. We're proud to be part of a new generation of firms that are redefining what it means to be in this profession.'
Founded in 2015, D.Law is a purpose-driven law firm focused on providing compassionate, effective legal services to clients across California. With a team-oriented approach and an emphasis on continuous learning, the firm has quickly built a reputation for both its client service and its inclusive internal culture. The firm is part of a growing movement to redefine what it means to be an employment law firm by putting people, empathy, and impact at the center of everything it does.
'Inc.'s Best Workplaces program celebrates the exceptional organizations whose workplace cultures address their employees' welfare and needs in meaningful ways,' says Bonny Ghosh, editorial director at Inc. 'As companies expand and adapt to changing economic forces, maintaining such a culture is no small feat. Yet these honorees have not only achieved it; they continue to elevate the employee experience through thoughtful benefits, engagement, and a deep commitment to their teams.'
D.Law is an employment law firm dedicated to defending the rights of workers under California employment law. Based in Los Angeles with offices throughout the state, the firm represents workers in every industry, whether they work for large corporations or small companies. D.Law specializes in the full range of employment law, including wrongful termination, pay and overtime issues, discrimination and harassment, workplace retaliation, leaves of absence, and more. Its attorneys have over 350 years of collective experience in employment law and have helped over 150,000 workers recover more than $1.5 billion in settlements.

Psychologists And Mental Health Experts Spurred To Use Custom Instructions And Make AI Into A Therapist Adjunct

Forbes · 44 minutes ago

In today's column, I first examine the new ChatGPT Study Mode that has made big-time headlines, and then delve into whether the crafting of this generative AI capability could be similarly undertaken in the mental health realm.
The idea is this. The ChatGPT Study Mode was put together by crafting custom instructions for ChatGPT. It isn't an overhaul of the AI or a brand-new feature. It seems to be nothing new per se, other than specifying a set of detailed instructions, dreamed up by various educational specialists, telling the AI what it is to undertake in an educational context. That's considered 'new' in the sense that it is an inspiring use of custom instructions and a commendable accomplishment that will benefit students and eager learners. Perhaps by gathering psychologists and mental health specialists, an AI-based Therapy Mode could similarly be ingeniously developed. Mindful readers asked me about this. Let's talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health Therapy
As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes; see the link here.
If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis of the field, which also recounts a highly innovative initiative at the Stanford University Department of Psychiatry and Behavioral Sciences called AI4MH; see the link here.
ChatGPT Study Mode Introduced
A recent announcement by OpenAI went relatively far and wide. The company cheerfully introduced ChatGPT Study Mode, as articulated in its blog posting 'Introducing Study Mode' on July 29, 2025, and laid out the salient points of the feature.
As far as can be discerned from the outside, this capability didn't involve revising the underpinnings of the AI, nor did it seem to require bolting on additional functionality. It seems that the bulk of the work was done using custom instructions (note that if OpenAI did make any special core upgrades, it has remained quiet on the matter, since none are touted in its announcements).
Custom Instructions Are Powerful
Assuming that OpenAI only or mainly used custom instructions to bring forth this useful result, it gives great hope and draws avid attention to the power of custom instructions. You can do a lot with them. But I would wager that few people know about custom instructions, and even fewer have done anything substantive with them. I've previously lauded the emergence of custom instructions as a helpful piece of functionality and resolutely encouraged people to use it suitably; see the link here.
Many of the major generative AI apps and large language models (LLMs) allow custom instructions, though some limit their usage and others basically don't provide them or go out of their way to keep them off-limits. Allow me a brief moment to bring everyone up to speed on the topic.
Suppose you want to tell the AI to act a certain way, and you want it to do so across all subsequent conversations. This usually applies only to your instance. I'll explain in a moment how to do so across instances and allow other people to tap into your custom instructions.
Say I want my AI to always give me its responses in a poetic manner; perhaps I relish poems. I go to the place in my AI that allows entering a custom instruction and tell it to always respond poetically. After saving this, I will find that every conversation is answered with poetic replies.
In this case, my custom instruction was short and sweet: I merely told the AI to compose answers poetically. If I had something more complex in mind, I could devise a quite lengthy custom instruction. It could go on and on, telling the AI to write poetically when it is daytime but not at nighttime, and to make sure the poems are lighthearted and enjoyable. I might further indicate that I want poems that rhyme and must somehow include references to cats and dogs. And so on. I'm being a bit facetious, simply to give you a sense of how detailed a custom instruction can be and how many directives it can carry.
Custom Instructions Mixed Bag
The beauty of custom instructions is that they serve as an overarching form of guidance to the generative AI. They have a global scope for your instance: every conversation you have will be subject to whatever the custom instruction says should take place.
With such power come some downsides. Imagine that I am using the AI and have a serious question that should not be framed in a poem. Lo and behold, I ask the solemn question and get a poetic answer. The AI is following what the custom instruction indicated. Period, end of story.
The good news is that you can tell the AI to disregard the custom instructions. When I enter a question, I can mention in the prompt that the AI is not to abide by them. Voila, the AI will provide a straightforward answer. Afterward, the custom instructions will continue to apply.
The malleability is usually extensive. For example, I might tell the AI not to abide by the custom instructions for the next three prompts. Or I could tell the AI that the custom instructions are never to be obeyed unless I say in a prompt that they should be. I think you can see that this is a generally malleable mechanism.
Goofed Up Custom Instructions
The most disconcerting downside of custom instructions is that you might inadvertently say something in the instructions that works to your detriment, and you might not even realize what you've done.
Consider my poetry-demanding custom instruction. I could include a line that insists that, no matter what any of my prompts say, the AI should never allow me to override the custom instruction. Perhaps I thought that was a smart move. The problem is that later on I might forget I had included that line. When I try to turn off the custom instruction via a prompt, the AI might refuse. Usually, the AI will inform you of such a conflict, but there's no guarantee that it will.
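For readers who reach these models through an API rather than the ChatGPT app, here is a minimal sketch of the pattern just described, before we turn to how instructions can be misinterpreted. It assumes the official OpenAI Python SDK; a standing system message stands in for the product's custom-instructions setting, and a simple flag plays the role of the "disregard the custom instructions" override. This is an illustration of the idea, not OpenAI's implementation of Study Mode or of the custom-instructions feature, and the model name and wording are placeholders.
# Minimal sketch (assumed setup): the OpenAI Python SDK (v1.x) with an
# OPENAI_API_KEY set in the environment. A reusable system message
# approximates a "custom instruction" applied to every conversation.
from openai import OpenAI

client = OpenAI()

CUSTOM_INSTRUCTION = (
    "Always answer with a short, lighthearted rhyming poem, "
    "unless the user explicitly asks for a plain answer."
)

def ask(question: str, ignore_instruction: bool = False) -> str:
    """Ask one question; set ignore_instruction=True to override the standing rule."""
    messages = []
    if not ignore_instruction:
        # The standing "custom instruction" is prepended to every request.
        messages.append({"role": "system", "content": CUSTOM_INSTRUCTION})
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("What is the capital of France?"))                           # poetic reply
    print(ask("What is the capital of France?", ignore_instruction=True))  # plain reply
The same pattern generalizes: swap the instruction text for a Socratic study-style directive or any other standing rule, and the override flag shows why having a prompt-level escape hatch matters.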
Worse still is the potential misinterpretation of something in your custom instructions. I might have said that the AI should never mention ugly animals in any of its responses. What in the world is an ugly animal? The sky is the limit. The AI might opt not to mention all kinds of animals that are nothing like what I had in mind. Would I realize what is happening? Possibly not. The AI's responses would mention some animals and not others, and it might not be obvious which animals aren't being described. My custom instruction is haunting me because the AI interprets what I said, and the interpretation differs from what I meant.
AI Mental Health Advice
Shifting gears, let's aim to use custom instructions for the betterment of humanity rather than simply for producing poetic responses.
The ChatGPT Study Mode pushes the AI to hold Socratic dialogues with the user and to give guidance rather than spit out answers. The custom instructions make this happen. Likewise, the AI attempts to assess the user's proficiency and adjusts to their skill level. Personalized feedback is given. The AI tracks your progress. It's nifty. All due to custom instructions.
What other contexts might custom instructions tackle? I'll focus on mental health.
Here's the deal. We get together a bunch of psychologists, psychiatrists, therapists, mental health professionals, and the like. They work fervently on composing a set of custom instructions telling the AI how to perform therapy. This includes diagnosing mental health conditions. It includes generating personal recommendations on aiding your mental health. We could take the generic generative AI that saunters around in the mental health context and turn it into something more bona fide and admirable. Boom, drop the mic.
The World Is Never Easy
If you are excited about the prospects of these kinds of focused custom instructions, such as for therapy, I am going to ask you to sit down and pour yourself a glass of fine wine. The reason I say this is that there have indeed been such efforts in the mental health realm, and, by and large, the results are not as standout as you might have hoped.
First, the topic of mental health is immense and involves risks to people when inappropriate therapy is employed. Devising a set of custom instructions that can fully and sufficiently provide bona fide therapy is not only unlikely to succeed but also inevitably misleading. I say this because some have tried this route and made outlandish claims about what the AI can do as a result of the loaded custom instructions. Watch out for unfulfilled claims. See my extensive coverage at the link here.
Second, any large set of custom instructions on performing therapy is bound to be incomplete, contain misinterpretable indications, and otherwise be subject to the downsides I've noted above. Using custom instructions as an all-in-one solution in this arena is like using a hammer on everything, even when you ought to be using a screwdriver on screws.
Third, some argue that using custom instructions for therapy is better than not having any custom instructions at all. The notion is that if you are using a generic generative AI without mental health custom instructions, you are certainly better off using one that at least has them. The answer is that it depends on the nature of the custom instructions.
There is a solid chance that the custom instructions will worsen what the AI is going to say. You can just as easily undercut the AI as boost it. Don't fall into the trap of assuming that custom instructions necessarily change things for the better.
Accessing Custom GPTs
I alluded earlier to the fact that there is a means of letting other users employ your set of custom instructions. Many of the popular LLMs allow you to generate an AI applet of sorts, containing tailored custom instructions, that can be used by others. Sometimes the AI maker establishes a library in which these applets reside and are publicly available. OpenAI provides this via GPTs, which are akin to ChatGPT applets; you can learn how to use them in my detailed discussions at the link here and the link here.
Unfortunately, as with all new toys, some have undermined these types of AI applets. There are AI applets containing custom instructions written by licensed therapists who genuinely did their best to craft therapy-related custom instructions. That seems encouraging. But I hope you now realize that even the best of intentions might not turn out suitably. Good intentions don't guarantee good results. Those custom instructions could have trouble brewing within them.
There are also AI applets that brashly claim to be for mental health yet are utterly shallow and devised by someone with zero expertise in mental health. Don't let flashy claims lower your guard.
The more egregious ones are AI applets marketed as though they are about mental health when the reality is that they are scams. The custom instructions have nothing to do with therapy. Instead, they attempt to take over your AI, grab your personal information, and generally be a pest and make life miserable for you. Wolves in sheep's clothing.
The Full Meal Deal
Where do we go from here? Using custom instructions to bring forth an AI-based Therapy Mode in a generic generative AI is not generally a good move. Even if you assemble a worthy collection of the best psychologists and mental health experts, you are trying to put fifty pounds into a five-pound bag. It just isn't a proper fit.
A better path is already being pursued. I am a big advocate of, and am doing research on, generative AI and LLMs that are built from the ground up for mental health advisement; see my framework layout at the link here. The approach consists of devising an LLM from the start to be a suitable therapy-oriented mechanism. This stands in stark contrast to taking an already completed generic generative AI and reshaping it for a mental health context. I believe it is wiser to start fresh instead.
Bottom Line Answered
For readers who contacted me and asked whether ChatGPT Study Mode foretells that the same impressive results of education-oriented custom instructions can be had in other domains: yes, for sure, there are other domains where this can readily apply. Is mental health one of those suitable domains? I vote no. Mental health advisement deserves more.
A final thought for now. Voltaire astutely observed: 'No problem can withstand the assault of sustained thinking.' We need to put on our thinking caps and aim for the right solution rather than quick-fix options that might seem viable but contain unsavory gotchas and injurious hiccups. Sustained thinking is worth its weight in gold.

Trump's $1 Trillion Defense Budget Meets $1 AI Access: Sam Altman And ChatGPT Land U.S. Government Partnership

Yahoo · an hour ago

OpenAI CEO Sam Altman has secured a new partnership with the U.S. General Services Administration that will give federal agencies access to the company's leading frontier models through ChatGPT Enterprise for $1 per agency for the next year, according to an OpenAI announcement.
The initiative, described as a core pillar of President Donald Trump's AI Action Plan, will provide federal employees with secure access to ChatGPT Enterprise and new training resources, OpenAI says, as well as additional advanced features during a 60-day introductory period.
Historic Federal AI Partnership Focused on Productivity and Training
Under the agreement, participating executive branch agencies can use OpenAI's most capable models through ChatGPT Enterprise for $1 a year. According to OpenAI, the program is designed to help government workers allocate more time to public service priorities and less time to administrative tasks. OpenAI will also work with partners Slalom and Boston Consulting Group to support secure deployment and provide agency-specific training.
Security safeguards are a key component of the rollout. OpenAI says that ChatGPT Enterprise does not use business data, including inputs or outputs, to train its models, and these same protections will apply to federal use.
OpenAI for Government Broadens AI Access Beyond the GSA Deal
The agreement is the first major initiative for the company under OpenAI for Government, a program designed to deliver advanced AI tools to public servants nationwide. The umbrella program consolidates OpenAI's existing federal, state, and local partnerships, including collaborations with the U.S. National Labs, the Air Force Research Laboratory, NASA, the National Institutes of Health, and the Treasury.
Through OpenAI for Government, the company will offer secure and compliant access to its most capable models, limited custom models for national security applications, and hands-on support for integration into agency workflows. The first pilot under this program will be with the Department of Defense's Chief Digital and Artificial Intelligence Office under a contract with a $200 million ceiling, OpenAI says. The work will explore how frontier AI can improve administrative operations, healthcare access for service members, program data analysis, and proactive cyber defense, all within the company's usage policies.
Results Show Significant Time Savings for Public Servants
OpenAI cited results from state-level pilot programs to demonstrate the technology's impact on productivity. Pennsylvania state employees saved an average of 95 minutes per day on routine tasks when using ChatGPT. In North Carolina, 85% of participants in a Department of State Treasurer pilot reported a positive experience with ChatGPT. At the federal level, OpenAI models are already in use at Los Alamos, Lawrence Livermore, and Sandia national laboratories to accelerate scientific research, strengthen national security readiness, and drive public sector innovation.
AI Integration Expands Across Federal Agencies
The Trump administration's interest in AI predates the OpenAI-GSA deal announcement. Earlier this year, Altman joined Trump at the White House to announce Stargate, a massive data center initiative designed to strengthen U.S. AI infrastructure. In May, Altman and other AI executives accompanied the president to the Middle East to promote deals aligned with U.S. foreign policy goals.
While agencies hold vast datasets that could enhance AI systems, OpenAI has confirmed that interactions with federal employees will not be used for model training, addressing potential privacy concerns.
© 2025 Benzinga. Benzinga does not provide investment advice. All rights reserved.
