Nvidia, under pressure from U.S. and China, says its chips have no 'back doors'

Washington Post · 2 hours ago
Nvidia declared that its highly sought-after AI chips have no 'back doors' or 'kill switches' and that it opposes such remote-control options, in an effort to insulate itself from the intensifying technological rivalry between the United States and China.

"There is no such thing as a 'good' secret backdoor," Nvidia Chief Security Officer David Reber Jr. wrote on Tuesday, adding that "until recently" the idea of deliberately building in such vulnerabilities was "beyond question."

Related Articles

Trump says he will impose 100% tariffs on semiconductors and computer chips

USA Today · 14 minutes ago

President Donald Trump said he plans to impose 100% tariffs on computer chips and semiconductors, with an exception for companies that have committed to building in the United States. "We'll be putting a tariff, approximately 100%, on chips and semiconductors," Trump said at an Oval Office event on Aug. 6 with Apple CEO Tim Cook, who was visiting the White House to announce plans for an additional $100 billion in investment in the U.S.

Trump said companies, such as Apple, that have committed to building in the United States would be spared from the tariffs. He did not provide details on when the tariffs would go into effect. "If you're building in the United States of America, there's no charge. Even though you're building and you're not producing yet in terms of the big numbers of jobs… there will be no charge."

The tariffs on computer chips and semiconductors could raise prices on goods such as personal computers, smartphones, electric vehicles, and video game consoles.

In April, Nvidia announced plans to manufacture its AI supercomputers entirely in the United States for the first time, committing to building its advanced chips in Arizona and its AI supercomputers in Texas. The White House hailed Nvidia's decision as "the Trump Effect in action." "Onshoring these industries is good for the American worker, good for the American economy, and good for American national security," the White House said in a statement announcing Nvidia's plans.

Trump has already placed tariffs on various sectors, including 50% on steel, aluminum, and copper, and 25% on imported automobiles. Dozens of countries are also facing country-specific tariffs first announced on April 2, which Trump dubbed "Liberation Day."

AI Sucks at Sudoku. Much More Troubling Is That It Can't Explain Why

CNET · 14 minutes ago

Chatbots can be genuinely impressive when you watch them do things they're good at, like writing realistic-sounding text or creating weird, futuristic-looking images. But try to ask generative AI to solve one of those puzzles you find in the back of a newspaper, and things can quickly go off the rails.

That's what researchers at the University of Colorado Boulder found when they challenged different large language models to solve Sudoku. And not even the standard 9x9 puzzles. An easier 6x6 puzzle was often beyond the capabilities of an LLM without outside help (in this case, specific puzzle-solving tools).

The more important finding came when the models were asked to show their work. For the most part, they couldn't. Sometimes they lied. Sometimes they explained things in ways that made no sense. Sometimes they hallucinated and started talking about the weather.

If gen AI tools can't explain their decisions accurately or transparently, that should cause us to be cautious as we give these things more and more control over our lives and decisions, said Ashutosh Trivedi, a computer science professor at the University of Colorado Boulder and one of the authors of the paper published in July in the Findings of the Association for Computational Linguistics.

"We would really like those explanations to be transparent and be reflective of why AI made that decision, and not AI trying to manipulate the human by providing an explanation that a human might like," Trivedi said.

When you make a decision, you can at least try to justify it or explain how you arrived at it. That's a foundational component of society. We are held accountable for the decisions we make. An AI model may not be able to accurately or transparently explain itself. Would you trust it?

Why LLMs struggle with Sudoku

We've seen AI models fail at basic games and puzzles before. OpenAI's ChatGPT (among others) has been totally crushed at chess by the computer opponent in a 1979 Atari game. A recent research paper from Apple found that models can struggle with other puzzles, like the Tower of Hanoi.

It has to do with the way LLMs work and fill in gaps in information. These models try to complete those gaps based on what happens in similar cases in their training data or other things they've seen in the past. With a Sudoku, the question is one of logic. The AI might try to fill each gap in order, based on what seems like a reasonable answer, but to solve it properly, it instead has to look at the entire picture and find a logical order that changes from puzzle to puzzle.

Read more: AI Essentials: 29 Ways You Can Make Gen AI Work for You, According to Our Experts

Chatbots are bad at chess for a similar reason. They find logical next moves but don't necessarily think three, four or five moves ahead. That's the fundamental skill needed to play chess well. Chatbots also sometimes tend to move chess pieces in ways that don't really follow the rules or put pieces in meaningless jeopardy.

You might expect LLMs to be able to solve Sudoku because they're computers and the puzzle consists of numbers, but the puzzles themselves are not really mathematical; they're symbolic. "Sudoku is famous for being a puzzle with numbers that could be done with anything that is not numbers," said Fabio Somenzi, a professor at CU and one of the research paper's authors.
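For contrast with the pattern-matching approach the article describes, here's a minimal sketch (mine, not from the researchers' paper) of how a classical backtracking solver handles a standard 9x9 grid: every tentative placement is checked against the row, column, and 3x3-box constraints of the whole board, and any choice that leads to a dead end is undone. The grid representation, a list of lists with 0 marking an empty cell, is just an illustrative assumption.

```python
# Minimal backtracking Sudoku solver (illustrative sketch, not the paper's code).
# Assumes `grid` is a 9x9 list of lists with 0 marking empty cells.

def find_empty(grid):
    """Return (row, col) of the first empty cell, or None if the grid is full."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                return r, c
    return None

def is_valid(grid, r, c, v):
    """Check the row, column, and 3x3 box constraints for placing v at (r, c)."""
    if v in grid[r]:
        return False
    if any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(grid):
    """Fill the grid in place via depth-first search; return True if solvable."""
    cell = find_empty(grid)
    if cell is None:
        return True                # no empty cells left: the puzzle is solved
    r, c = cell
    for v in range(1, 10):
        if is_valid(grid, r, c, v):
            grid[r][c] = v         # tentative placement
            if solve(grid):
                return True
            grid[r][c] = 0         # dead end: erase and try the next value
    return False                   # no value fits here, so backtrack further up
```

Unlike a chatbot's cell-by-cell guessing, this search never commits to a number it can't defend: every value left on the board is there because the alternatives were exhausted, which is also what makes its reasoning easy to trace.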
I used a sample prompt from the researchers' paper and gave it to ChatGPT. The tool showed its work, and repeatedly told me it had the answer before showing a puzzle that didn't work, then going back and correcting it. It was like the bot was turning in a presentation that kept getting last-second edits: This is the final answer. No, actually, never mind, this is the final answer. It got the answer eventually, through trial and error. But trial and error isn't a practical way for a person to solve a Sudoku in the newspaper. That's way too much erasing and ruins the fun.

AI and robots can be good at games if they're built to play them, but general-purpose tools like large language models can struggle with logic puzzles. (Ore Huiying/Bloomberg via Getty Images)

AI struggles to show its work

The Colorado researchers didn't just want to see if the bots could solve puzzles. They asked for explanations of how the bots worked through them. Things did not go well. Testing OpenAI's o1-preview reasoning model, the researchers saw that the explanations -- even for correctly solved puzzles -- didn't accurately explain or justify their moves and got basic terms wrong.

"One thing they're good at is providing explanations that seem reasonable," said Maria Pacheco, an assistant professor of computer science at CU. "They align to humans, so they learn to speak like we like it, but whether they're faithful to what the actual steps need to be to solve the thing is where we're struggling a little bit."

Sometimes, the explanations were completely irrelevant. Since the paper's work was finished, the researchers have continued to test newly released models. Somenzi said that when he and Trivedi were running OpenAI's o4 reasoning model through the same tests, at one point it seemed to give up entirely. "The next question that we asked, the answer was the weather forecast for Denver," he said.

(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Explaining yourself is an important skill

When you solve a puzzle, you're almost certainly able to walk someone else through your thinking. The fact that these LLMs failed so spectacularly at that basic job isn't a trivial problem. With AI companies constantly talking about "AI agents" that can take actions on your behalf, being able to explain yourself is essential.

Consider the types of jobs being given to AI now, or planned for the near future: driving, doing taxes, deciding business strategies and translating important documents. Imagine what would happen if you, a person, did one of those things and something went wrong. "When humans have to put their face in front of their decisions, they better be able to explain what led to that decision," Somenzi said.

It isn't just a matter of getting a reasonable-sounding answer. It needs to be accurate. One day, an AI's explanation of itself might have to hold up in court, but how can its testimony be taken seriously if it's known to lie? You wouldn't trust a person who failed to explain themselves, and you also wouldn't trust someone you found was saying what you wanted to hear instead of the truth.

"Having an explanation is very close to manipulation if it is done for the wrong reason," Trivedi said. "We have to be very careful with respect to the transparency of these explanations."

Oil Holds Drop as Traders Dismiss Impact of US Moves Over Russia

Bloomberg · 14 minutes ago

Oil held a five-day decline — the longest losing run since January — as investors looked beyond US efforts to punish buyers of Russian crude, while tracking a diplomatic push by President Donald Trump to halt the war in Ukraine. West Texas Intermediate was little changed below $65 a barrel after slumping to the lowest close since early June, while Brent settled just shy of $67. On Wednesday, Trump doubled tariffs on Indian goods to 50% due to the nation's purchases of Russian energy, with implementation set to begin in three weeks. Still, there's been no similar US move against China, another major importer of Moscow's oil.
