Latest news with #Stratechery


Time of India
20-05-2025
- Business
- Time of India
China's progress in AI cannot be limited and should not be underestimated, says Nvidia CEO Jensen Huang
Amid escalating technological rivalry between the United States and China, Nvidia CEO Jensen Huang underscored China's growing influence in artificial intelligence (AI), describing its progress as undeniable. In an interview with web portal Stratechery, Huang said the rapid rise of Chinese AI companies such as DeepSeek is impressive. "China's doing fantastic; 50% of the world's AI researchers are Chinese and you're not going to hold them back, you're not going to stop them from advancing AI. Let's face it, DeepSeek is deeply excellent work," he said. His remarks were in reference to export controls the US has implemented on advanced chips (such as Nvidia's A100/H100) to prevent uncontrolled AI diffusion to China and other countries. In simple terms, AI diffusion refers to efforts to slow or control the spread of advanced AI technologies (especially foundational models and compute infrastructure) to geopolitical rivals. Huang said the idea of not having America compete in the Chinese market, where 50% of the developers are, makes no sense from a computing infrastructure and computing architectural perspective. "We ought to go and give American companies the opportunity to compete in China," he said. He warned that if US companies don't compete in China, it will in turn allow the Chinese to build a rich ecosystem and new platforms, which would not be in America's interest. Earlier this month, Nvidia announced partnerships in the Gulf region, notably with Saudi Arabia and Qatar, to advance AI infrastructure and capabilities. Huang said those countries have an "extraordinary opportunity". "They have an abundance of energy and a shortage of labour, and the potential of their countries is limited by the amount of labour that they have, the amount of people that they have," he said.

Yahoo
19-05-2025
- Business
- Yahoo
NVIDIA's Huang blasts U.S. export rules, points to $15 billion in missed sales
NVIDIA Corporation (NASDAQ:NVDA) CEO Jensen Huang sharply criticized U.S. export restrictions on AI chips, warning that the rules are diminishing America's technology leadership while inflicting heavy financial damage on the company. 'If the goal of the diffusion rule is to ensure that America has to lead, the diffusion rule as it was written will exactly cause us to lose our lead,' Huang said in a recent Stratechery interview. The company has been barred from selling advanced chips like the H20 to China, a key market for its AI infrastructure. As a result, Huang revealed that NVIDIA wrote off $5.5 billion in inventory and forfeited approximately $15 billion in sales. 'No company in history has ever written off that much inventory,' Huang said. 'We walked away from $15 billion of sales and probably... $3 billion worth of taxes.' Huang argued that cutting off China risks unintentionally strengthening rival ecosystems by forcing them to innovate domestically beyond U.S. influence. 'If we don't compete in China, and we allow the Chinese ecosystem to build a rich ecosystem because we're not there to compete for it… their leadership and their technology will diffuse all around the world,' he warned. He emphasized that AI is a 'full stack' technology and cannot be regulated effectively by restricting just one layer, such as semiconductors. 'You can't just say, 'Let's go write a diffusion rule, protect one layer at the expense of everything else,' it's nonsensical,' he added. Instead of limiting exports, Huang urged policymakers to focus on expanding the global reach of American platforms and technologies. 'The idea of AI diffusion limiting other countries' access to American technology is a mission expressed exactly wrong, it should be about accelerating the adoption of American technology everywhere before it's too late,' he said.

Business Insider
19-05-2025
- Business
- Business Insider
Nvidia CEO says Chinese AI researchers are 'world class' — and US companies are hiring 'a whole bunch' of them
Jensen Huang, the CEO of Nvidia, thinks that China's AI researchers are some of the very best in the world — so it's no surprise that US companies are bringing them on board. "The researchers, the AI scientists in China, they're world-class. These are not Chinese AI researchers, they're world-class AI researchers," Huang said in an interview with Ben Thompson, author of "Stratechery." "You walk up and down the aisles of Anthropic or OpenAI or DeepMind, there's a whole bunch of AI researchers there, and they're from China. Of course, it's sensible, and they're extraordinary, and so the fact that they do extraordinary work is not surprising to me." As a whole, the country is doing "fantastic" in the AI market, Huang said, with models from China-based companies — such as DeepSeek and Manus — emerging as powerful challengers to systems designed in the US. "Let's face it, DeepSeek is deeply excellent work," Huang said. "To give them anything short of that is a lack of confidence so deep that I just can't even tolerate it." The challenges presented by international rivals, Huang added, are absolutely essential to the continued improvement of AI companies in the US. "Everybody loves competition. Companies need competition to inspire themselves, nations need that, and there's no question we spur them," he said. "However, I fully expected China to be there every step of the way. Huawei is a formidable company. They're a world-class technology company." Intense competition could become a problem, though, Huang said, if US-based companies don't have all the tools they need. Building out the "ecosystem" required to support AI, he said, is already difficult — and Huang believes operations could be kneecapped by restrictive regulations. He was particularly critical of the "Diffusion rule," a Biden-era policy that would've established limits on exports of US-made AI chips come May 15. "You can't just say, 'Let's go write a diffusion rule, protect one layer at the expense of everything else.' It's nonsensical," he said. "The idea that we would limit American AI technology right at the time when international competitors have caught up, and we pretty much predicted it." On May 12, the White House announced it would rescind the rule, mere days before it was set to go into effect. "The idea of AI diffusion limiting other countries' access to American technology is a mission expressed exactly wrong. It should be about accelerating the adoption of American technology everywhere before it's too late," Huang said. "If the goal is for America to lead, then AI diffusion did exactly the opposite of that." Competition flows both ways, he said — and American companies should be battling it out in the Chinese market. "The idea that we would have America not compete in the Chinese market, where 50% of the developers are, makes absolutely no sense from a computing infrastructure, computing architectural perspective," Huang said. "We ought to go and give American companies the opportunity to compete in China, offset the trade deficit, generate tax income for the American people, build, hire jobs, create more jobs."


The Verge
13-05-2025
- Politics
- The Verge
AI therapy is a surveillance machine in a police state
Mark Zuckerberg wants you to be understood by the machine. The Meta CEO has recently been pitching a future where his AI tools give people something that 'knows them well,' not just as pals, but as professional help. 'For people who don't have a person who's a therapist,' he told Stratechery 's Ben Thompson, 'I think everyone will have an AI.' The jury is out on whether AI systems can make good therapists, but this future is already legible. A lot of people are anecdotally pouring their secrets out to chatbots, sometimes in dedicated therapy apps, but often to big general-purpose platforms like Meta AI, OpenAI's ChatGPT, or xAI's Grok. And unfortunately, this is starting to seem extraordinarily dangerous — for reasons that have little to do with what a chatbot is telling you, and everything to do with who else is peeking in. This might sound paranoid, and it's still hypothetical. It's a truism someone is always watching on the internet, but the worst thing that comes of it for many people is some unwanted targeted ads. Right now in the US, though, we're watching the impending collision of two alarming trends. In one, tech executives are encouraging people to reveal ever more intimate details to AI tools, soliciting things users wouldn't put on social media and may not even tell their closest friends. In the other, the government is obsessed with obtaining a nearly unprecedented level of surveillance and control over residents' minds: their gender identities, their possible neurodivergence, their opinions on racism and genocide. And it's pursuing this war by seeking and weaponizing ever-increasing amounts of information with little regard for legal or ethical restraints. A few data points: Federal law enforcement has indiscriminately arrested and revoked the residency of legal immigrants on the basis of legally protected speech and activism, including a student who was imprisoned for weeks over a newspaper op-ed. President Donald Trump's administration has demanded control of academic programs at top universities and opened investigations into media companies it accuses of prohibited diversity initiatives. Secretary of Health Robert F. Kennedy, Jr. (who has suggested replacing people's antidepressant prescriptions with rehabilitative work camps) has announced plans to build a federal database collecting records of people with autism, drawing on medical files and wearable device data. A recent Health and Human Services report has also implied autism is to blame for gender dysphoria, part of a larger war on transgender people. The Department of Government Efficiency (DOGE) is reportedly working to centralize data about Americans that's currently stored across different agencies, with the intent of using it for surveillance, in ways that could severely violate privacy laws. DOGE head Elon Musk spent the agency's early weeks digging up records of little-known government employees and government-funded organizations with the intent of directing harassment toward them on social media. As this is happening, US residents are being urged to discuss their mental health conditions and personal beliefs with chatbots, and their simplest and best-known options are platforms whose owners are cozy with the Trump administration. xAI and Grok are owned by Musk, who is literally a government employee. 
Zuckerberg and OpenAI CEO Sam Altman, meanwhile, have been working hard to get in Trump's good graces — Zuckerberg to avoid regulation of his social networks, Altman to win support for ever-expanding energy infrastructure and for avoiding state AI regulation. (Gemini AI operator Google is also carefully sycophantic. It's just a little quieter about it.) These companies aren't simply doing standard lobbying; they're sometimes throwing their weight behind Trump in exceptionally high-profile ways, including changing their policies to fit his ideological preferences and attending his inauguration as prominent guests. The internet has been a surveillance nightmare for decades. But this is the setup for a stupidly on-the-nose dystopia whose pieces are disquietingly slotting into place. It's (hopefully) common knowledge that things like web searches and AI chat logs can be requested by law enforcement with a valid warrant for use in specific investigations. We also know the government has extensive, long-standing mass surveillance capabilities — including the National Security Agency programs revealed by Edward Snowden, as well as smaller-scale strategies like social media searches and cell tower dumps. The past few months have seen a sharp escalation in the risks and scope of this. The Trump administration's surveillance crusade is vast and almost unbelievably petty. It's aimed at a much broader range of targets than even the typical US national security and policing apparatus. And it has seemingly little interest in keeping that surveillance secret or even low-profile. Chatbots, likewise, escalate the risks of typical online secret-sharing. Their conversational design can draw out private information in a format that can be more vivid and revealing — and, if exposed, embarrassing — than even something like a Google search. There's no simple equivalent to a private iMessage or WhatsApp chat with a friend, which can be encrypted to make snooping harder. (Chatbot logs can use encryption, but especially on major platforms, this typically doesn't hide what you're doing from the company itself.) They're built, for safety purposes, to sense when a user is discussing sensitive topics like suicide and sex. During the Bush and Obama administrations, the NSA demanded unfettered access to American telephone providers' call records. The Trump administration is singularly fascinated by AI, and it's easy to imagine one of its agencies demanding a system for easily grabbing chat logs without a warrant or having certain topics of discussion flagged. They could get access by invoking the government's broad national security powers or by simply threatening the CEO. For users whose chats veer toward the wrong topics, this surveillance could lead to any number of things: a visit from child protective services or immigration agents, a lengthy investigation into their company's 'illegal DEI' rules or their nonprofit's tax-exempt status, or embarrassing conversations leaked to a right-wing activist for public shaming. Like the NSA's anti-terrorism programs, the data-sharing could be framed in wholesome, prosocial ways. A 14-year-old wonders if they might be transgender, or a woman seeks support for an abortion? Of course OpenAI would help flag that — they're just protecting children. A foreign student who's emotionally overwhelmed by the war in Gaza — what kind of monster would shield a supporter of Hamas?
An Instagram user asking for advice about their autism — doesn't Meta want to help find a cure? There are special risks for people who already have a target on their backs — not just those who have sought the political spotlight, but medical professionals who work with reproductive health and gender-affirming care, employees of universities, or anyone who could be associated with something 'woke.' The government is already scouring publicly available information for ways to discredit enemies, and a therapy chatbot with minimal privacy protections would be an almost irresistible target. Even if you're one of the few American citizens with truly nothing to hide in your public or private life, we're not talking about an administration known for laser-guided accuracy here. Trump officials are notorious for governing through bizarrely blunt keyword searches that appear to confuse 'transgenic' with 'transgender' and assume someone named Green must do green energy. They reflexively double down on admitted mistakes. You're one fly in a typewriter away from everybody else. In an ideal world, companies would resist indiscriminate data-sharing because it's bad business. But they might suspect that many people will have no idea it's happening, will believe facile claims about fighting terrorism and protecting children, or will have so much learned helplessness around privacy that they don't care. The companies could assume people will conclude there's no alternative, since competitors are likely doing the same thing. If AI companies are genuinely dedicated to building trustworthy services for therapy, they could commit to raising the privacy and security bar for bots that people use to discuss sensitive topics. They could focus on meeting compliance standards for the Health Insurance Portability and Accountability Act (HIPAA) or on designing systems whose logs are encrypted in a way that they can't access, so there's nothing to turn over. But whatever they do right now, it's undercut by their ongoing support for an administration that holds contempt for the civil liberties people rely on to freely share their thoughts, including with a chatbot. Contacted for comment on its policy for responding to government data requests and whether it was considering heightened protection for therapy bots, Meta instead emphasized its services' good intentions. 'Meta's AIs are intended to be entertaining and useful for users … Our AIs aren't licensed professionals and our models are trained to direct users to seek qualified medical or safety professionals when appropriate,' said Meta spokesperson Ryan Daniels. OpenAI spokesperson Lindsey Held told The Verge that 'in response to a law enforcement request, OpenAI will only disclose user data when required to do so [through] a valid legal process, or if we believe there is an emergency involving a danger of death or serious injury to a person.' (xAI didn't respond to a request for comment, and Google didn't relay a statement by press time.) Fortunately, there's no evidence mass chatbot surveillance has happened at this point. But things that would have sounded like paranoid delusions a year ago — imprisoning a student for writing an op-ed, letting an inexperienced Elon Musk fanboy modify US treasury payment systems, accidentally inviting a magazine editor to a secret groupchat for planning military airstrikes — are part of a standard news day now. 
The private and personal nature of chatbots makes them a massive, emerging privacy threat that should be identified as soon and as loudly as possible. At a certain point, it's delusional not to be paranoid. The obvious takeaway from this is 'don't get therapy from a chatbot, especially not from a high-profile platform, especially if you're in the US, especially not right now.' The more important takeaway is that if chatbot makers are going to ask users to divulge their greatest vulnerabilities, they should do so with the kinds of privacy protections medical professionals are required to adhere to, in a world where the government seems likely to respect that privacy. Instead, while claiming they're trying to help their users, CEOs like Zuckerberg are throwing their power behind a group of people often trying to harm them — and building new tools to make it easier.

Yahoo
14-02-2025
- Automotive
- Yahoo
Uber's CEO says he wants to find a way to work with Tesla because 'no one wants to compete against Tesla or Elon, if you can help it'
Dara Khosrowshahi, the CEO of Uber, said he prefers not to compete with Elon Musk and Tesla. Musk said in January that Tesla will launch a paid robotaxi service in Austin this summer. Tesla will benefit from having its vehicles offer rides on Uber as well, Khosrowshahi said. Dara Khosrowshahi, the CEO of Uber, says he's optimistic his company will be able to work with Tesla and Elon Musk when it rolls out robotaxis. Khosrowshahi was asked about Tesla during an interview with technology and media analyst Ben Thompson for his newsletter, Stratechery. The interview was published on Thursday. "Yeah listen, no one wants to compete against Tesla or Elon, if you can help it," Khosrowshahi told Thompson. "Their capabilities are pretty extraordinary, but I think the same economic laws apply to them." Musk announced Tesla's robotaxi, also known as the Cybercab, during a launch event in October. Musk told investors during an earnings call in January that a paid robotaxi service will be launching in Austin, Texas, this June. Khosrowshahi said in his interview with Thompson that Tesla will benefit from having its vehicles offer rides on Uber. "Then, that Tesla that is both on Uber, and by the way, they could be both on Uber and the network, that is going to create much, much more revenue," Khosrowshahi said. "So it all comes back to revenue generation, you have a box with wheels, you want to maximize the revenue of that box in order to reduce your cost of capital, and economic laws apply to Tesla, just as they do to any other car company," he added. Khosrowshahi told Thompson that the same rule applies to the food business, with restaurants using several delivery services to reach more customers. "McDonald's has its own app, and has an incredible brand, has a lot of capital, has terrific reach. They still work with Uber Eats and DoorDash, because they want to drive utilization of the box called the restaurant, that same economic value is going to be true going forward," Khosrowshahi said. "Ultimately, we're hoping that my charm and the economic argument gets Tesla to work with us as well," he continued. "If they want a direct channel, no problem." Uber and Tesla did not respond to requests for comment from Business Insider. This isn't the first time Khosrowshahi has expressed interest in partnering with Musk and Tesla. Uber has partnerships with self-driving vehicle makers like Waymo and Wayve. "Obviously, competing with Elon Musk is no easy matter, and we take nothing for granted," Khosrowshahi told the Financial Times in October. "I don't think this is going to be a winner-take-all marketplace. We believe in the spirit of partnership. We'll see what Tesla does, and either we'll compete with them, or we'll work with them, or a combination thereof," he added. It remains unclear when exactly Tesla's ambitions for the robotaxi can be realized given the regulatory challenges faced by self-driving vehicles. Phil Koopman, a professor at Carnegie Mellon University, told BI in October that it will not be easy for Tesla to get the federal government's approval to sell a vehicle without a steering wheel. Tesla would also have to get approvals from individual states if it wants to expand the robotaxi network's coverage, Koopman added. Musk told investors in January that while he hopes that Tesla will have unsupervised full self-driving vehicles in most countries by the end of 2026, that could also be "limited simply by regulatory issues."
"That's my prediction with the best data that I have right now," Musk said. Read the original article on Business Insider