Cramer's Lightning Round: Papa John's is a 'wait and see situation'


CNBC, 10-06-2025
Joby Aviation: "...Joby is good, and I'm going with it."
Arm: "You're in good shape. That's Rene Haas...He is a partner of Nvidia."
Papa John's: "I think it's a wait and see situation with Penegor...at Papa John's. So, I'm not going there yet. I'm not saying yes."
Click here to download Jim Cramer's Guide to Investing at no cost to help you build long-term wealth and invest smarter.

Disclaimer: The CNBC Investing Club holds shares of Nvidia.


Related Articles

Inside Nvidia And AMD's $3 Billion Export 'Tax' Deal With Trump

Forbes

11 minutes ago



Two of the largest companies in the world, Nvidia and AMD, have agreed to pay the U.S. 15% of their revenues from sales of chips and semiconductors to China. The agreement can be viewed as a win-win for technology conglomerates and the U.S. government: it enables Nvidia and AMD to expand the market for their products while allowing the U.S. to collect upwards of $3 billion in incremental revenue from export activities. This novel type of revenue collection is not a tax as commonly defined. However, it does appear to be a form of export tax, and its framework presents a potential new opportunity for the Trump administration to raise revenues it would not otherwise have collected. This article discusses the basics of this novel export trade deal between Nvidia, AMD, and Trump, and the direct and indirect consequences of the deal for the two companies and beyond.

Nvidia And AMD's Market Dominance

Nvidia and AMD are currently two of the largest companies in the world, ranking 31st and 167th on the Fortune 500 list, respectively. Importantly, they are among the world's largest designers of computer chips, which have become increasingly valuable as these products now appear in numerous devices, from obvious items like computers and smartphones to less obvious ones like vehicles and appliances. Proprietary technology has become an even more strategic asset as the world has entered the artificial intelligence boom, which relies heavily on advanced processing chips. Nvidia and AMD are two of the most significant makers of these chips, which help enable advanced computing. They achieved this position by securing a significant share of the Graphics Processing Unit market and playing a pivotal role in enhancing processing power to advance and adapt to artificial intelligence, according to TechSpot.
While this technology is valuable, the U.S. limits companies' ability to sell these products outside the U.S., most notably to China. Although there are chip manufacturers outside the U.S., the quality of their products may not be the same, creating a natural demand for companies like Nvidia and AMD to expand their sales into these other countries.

Trump's Landmark Export Trade Deal With Nvidia And AMD

As first reported by the Financial Times, Nvidia and AMD have agreed to pay 15% of their revenues from semiconductor and chip sales to China in exchange for specific export licenses that allow the companies to move their products overseas. The deal pertains specifically to Nvidia's H20 chips and AMD's MI308 chips. The announcement comes after Nvidia was hit in April with a ban on sending these same H20 chips to China. Yahoo! Finance reports that the prior ban cost the company billions in lost revenue, and that the new deal will result in as much as $3 billion in incremental collections for the U.S. government. Billionaire Mark Cuban reacted to the agreement in a post on X. The post comes as high praise to many, as Cuban has been a known and public skeptic of Trump, and he called out many of his strongest adversaries in congratulating the President on this unique trade agreement. While companies often face import and export fees, taxes, and tariffs when moving products across borders, this type of collection is unique in that technology companies have often been well equipped to use their intangible assets to engage in profit shifting and cost-sharing agreements. As Cuban noted, the U.S. government has long sought to tax many of these companies based on their income. This new export sales levy, however, might revolutionize the nature and extent to which these companies pay their fair share to the U.S.
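The reported figures can be sanity-checked with simple arithmetic. The sketch below uses only the numbers reported above (the 15% rate and the up-to-$3 billion collection estimate); the implied China revenue is a derived, illustrative figure, not something stated in the deal itself.

```python
# Back-of-envelope check of the reported deal figures.
LEVY_RATE = 0.15                 # 15% of China chip revenue, per the reported deal
MAX_COLLECTION = 3_000_000_000   # "as much as $3 billion" in incremental U.S. collections

# Combined revenue Nvidia and AMD would have to book in China
# for the levy to reach the reported $3 billion ceiling.
implied_china_revenue = MAX_COLLECTION / LEVY_RATE

print(f"Implied China chip revenue: ${implied_china_revenue / 1e9:.0f} billion")
```

In other words, a $3 billion collection at a 15% rate implies roughly $20 billion in combined China chip sales for the two companies.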
The Consequences Of Nvidia And AMD's Export Trade Deal

While the agreement appears to be great news for many key participants, the stock prices of Nvidia and AMD both struggled following the news, according to Forbes. Part of the reason for the mixed reaction is that collecting 15% of revenues on these chips is steep, and the tax incidence may fall on Chinese customers, meaning they will pay substantially more for the same chips they bought earlier this year. Furthermore, as reported by CNN, it is unclear whether China will even want these chips, as they are not as advanced as the technology Nvidia and AMD currently produce. The broader consequence of the deal, however, is that it creates a framework for the Trump administration (and potentially future administrations) to tax specific types of goods produced by U.S. companies. Other electronics manufacturers, such as Apple and Microsoft, as well as non-electronics companies like DowDuPont and Amgen, export billions of dollars' worth of goods each year. Even arms manufacturers like Lockheed Martin, Boeing, Raytheon Technologies, and Northrop Grumman export significant quantities of weapons to allied countries. As with chips and semiconductors, the U.S. government closely monitors and potentially restricts the types and amounts of weapons sold to each country. Thus, given the new framework established by Nvidia, AMD, and the Trump administration, these defense manufacturers can now explore whether and to what extent they can expand their foreign sales while providing some incremental financial benefit to the U.S. government. Before more companies can negotiate such a deal, however, the Trump administration will need to defend the legality of the Nvidia and AMD arrangement, as lawmakers from both sides of the aisle question whether this export tax can be allowed, according to Politico.
Despite some of the pessimism and the possibility of legal challenges, the deal reflects ingenuity and outside-the-box thinking, and, should it prove an actual win-win, many other companies may line up to strike a similar deal with Trump.

Skipping Nvidia Left Amazon, Apple And Tesla Behind In AI

Forbes

an hour ago



Everyone thinks they are a comic. And everyone in big-cap high tech thinks they can design better and/or cheaper AI chip alternatives to the industry leader, Nvidia. It turns out it's simply not that easy. Apple and AWS have recently run aground in AI growth, and Tesla has just abandoned its own Dojo supercomputer chip development, saying it is switching to Nvidia and AMD for training AI models. (Like many semiconductor developers, Nvidia is a client of Cambrian-AI Research.) Oh, and today, The Information reported that 'Microsoft's AI Chip Effort Falls Behind.' There is definitely an important trend here. A few companies have eschewed getting locked in to Nvidia and paying the high prices state-of-the-art AI technology commands. This penny-wise but pound-foolish approach left the world's largest consumer electronics company (Apple) and the undisputed cloud leader (AWS) far behind, just when generative AI created massive end-user opportunities they could not adequately address. Nvidia's CUDA platform is the de facto standard for training and deploying large language models for generative AI. CUDA offers unmatched performance, developer tooling and ecosystem support. Companies that build on CUDA, like Microsoft (with OpenAI) and Meta (with LLaMA), have been able to scale quickly and deliver cutting-edge AI products. By contrast, Amazon and Apple chose to go their own way, and Tesla took the Nvidia off-ramp in 2019. Let's take a look, as each took a different approach, and each mostly failed.

Apple Maintains Its ABN Strategy (Anybody But Nvidia)

Apple's generative AI journey has been even more problematic. After unveiling 'Apple Intelligence' in 2024, the company's most anticipated upgrade, a fully LLM-powered Siri, has been delayed until 2026. Apple has serious semiconductor bona fides, with its M-class Arm-based chips for desktops and its A-class chips for mobile. The company is justifiably proud of these efforts.
But Apple tried its hand at AI acceleration early on using its own chips, then shifted to Google TPU-based AI development. Not a bad choice, mind you, but the TPU has neither the performance nor the AI development tool set that Nvidia's Blackwell enjoys. The result? Well, how's that AI-enhanced Siri and Apple Intelligence working out for you? Yeah, not at all. To be sure, Apple faces significant technical challenges that come with a large installed base and a privacy-above-all focus, but not using Nvidia from the start probably cost it more in extra work and time to market than the 'expensive' Nvidia infrastructure would have. Siri's architecture, built over a decade ago, wasn't designed for generative AI, and retrofitting it has proven more difficult and slower than Apple expected. To make matters worse, Apple's AI teams have faced internal fragmentation, with some pushing for in-house AI models and others advocating partnerships with OpenAI, Perplexity or Google. The company has also lost key talent to competitors. Ruoming Pang, who led Apple's foundation models team, left for Meta. Other researchers followed, citing slow progress and a lack of clarity in Apple's AI strategy.

Amazon AWS Does Offer Nvidia GPUs, but Prefers Its Own Silicon

AWS recently paid the price on Wall Street for its slow generative AI sales, a result of Amazon's hubris and NIH (not-invented-here) mindset. The share of new generative AI use cases landing on AWS is reportedly lower than its overall cloud share, with Microsoft taking the lead. According to IoT Analytics, Microsoft has about a 16% share of new genAI case studies, as does AWS, well below AWS's 2023 overall cloud leadership share of 37%. AWS is not losing its first-place share in the overall cloud market, at least not yet, but for genAI-specific apps and new enterprise AI workloads, Azure and Google are increasingly competitive, and in some cases are outpacing AWS in genAI-related tools and adoption.
Reducing reliance on Nvidia and lowering costs sounded like a good strategy. So Amazon's AWS division, like Google and Microsoft, invested heavily in custom silicon for training and inference, named, of course, Trainium and Inferentia. The latest release, Trainium2, launched in 2024 and appears to offer impressive specs: up to 83.2 petaflops of FP8 compute and 6 TB of HBM3 memory. Amazon even created a 40,000-chip Trainium UltraCluster to support generative AI workloads. But accelerator performance alone doesn't create AI. You need software, great chip-to-chip networking and a thriving developer ecosystem. AWS developers found Trainium software harder to work with than CUDA, and they reportedly pushed back to management against Trainium's limitations. Management essentially said shut up and get to work. So Trainium adoption lagged. Amazon realized it needed to invest even more to create a developer ecosystem, and it launched the Build on Trainium initiative, a $110 million investment in university research. While appealing, this effort came years after Nvidia had firmly cemented its dominance in AI research and development. That is $110 million that could have been better spent on Nvidia hardware and better AI. And that $110 million is on top of the money AWS spent developing the Trainium and Inferentia chips, probably well over $1 billion. So Amazon decided to invest another $4 billion in Anthropic, the company behind Claude. Anthropic agreed to use Trainium chips for training its models in return. But behind the scenes, tensions emerged. Engineers at Anthropic reportedly also pushed back against Trainium; many preferred Nvidia's stack for its maturity and tooling. Anthropic teams had to rework their CUDA-based pipelines to run on Trainium, leading to delays and performance issues. While Amazon touted the partnership as a breakthrough, it was a compromise: Anthropic needed funding, and Amazon needed a flagship AI partner.
Amazon appears of late to be changing course, deepening its partnership with Anthropic and expanding support for Nvidia GPUs. AWS is building a massive Nvidia cloud infrastructure, Project Ceiba, with over 20,000 Nvidia GPUs. But it is available only to Nvidia engineers for use in developing AI and chips, not for public cloud access.

Now Tesla Has Seen the Light

In 2019, Tesla shifted from Nvidia to its custom FSD chip for vehicle Autopilot hardware and neural network inference, replacing Nvidia's Drive PX2 system. It also began a major effort to build its own AI supercomputer, DOJO, with in-house chips. Since 2019, Tesla has reportedly spent over $1 billion developing DOJO, along with another $500 million on a DOJO supercomputer in Buffalo, New York. Last week, Elon Musk announced on X that he was ending the program and would instead deploy Nvidia and AMD GPUs. I suspect Tesla will mostly deploy Nvidia this year and see how AMD's MI400 looks in 2026.

Should Cloud Service Providers Even Build Their Own AI Chips?

Well, first, let's look at a company that did not. OpenAI has recently reached $12 billion in annualized revenue and broken the 700 million ChatGPT weekly active user barrier. And guess what it uses? Yep, Nvidia. Sam Altman does have the gleam of OpenAI chips in his eye, to be sure, but he also realizes that speed, ease of use and development time matter more to OpenAI than the savings proprietary chips could provide. At least for now. Meta has its own MTIA chip, but it is used for internal workloads, like recommendation engines for Facebook and its other properties. Microsoft has its own Maia chips, starting with the Maia 100, announced in 2023 and used primarily for internal testing and select workloads. The planned successor, Maia 200, is now expected in 2026 due to delays. Maia 200 is designed for data center AI acceleration and inference workloads. We will see if Microsoft learns from Tesla's and Apple's mistakes.
I suspect Google is perhaps alone in realizing a decent return on its TPU investments, but it has generally failed to attract large outside customers, aside from Apple. Still, it gets a lot of bang for the buck for internal workloads and training. My advice to CSPs is this: if you can get Nvidia GPUs, use them. If you have a workload for which you believe they are not ideal and can model a decent ROI, then go for it. Otherwise, save your capital.

The Consequences of Skipping Nvidia Can Be Dire

A year in the world of generative AI can mean the difference between heaven and hell, or at least between multi-billion-dollar success and failure. The hubris of some high-tech companies has cost them billions of dollars, spent needlessly. Amazon ceded early leadership in cloud AI to Microsoft Azure, which now hosts many of the world's top models. Apple missed the 'AI supercycle' for iPhone upgrades, as consumers saw little reason to buy new devices without meaningful Siri improvements. Tesla has seen the light and is moving fast. All three of these companies now face pressure to catch up, not just in model performance but in developer mindshare and ecosystem momentum. Yeah, you can build your own AI chip. But you might regret it.
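Those "billions spent needlessly" can be tallied from the figures cited in this piece. The sketch below sums only the article's own reported estimates; since several carry "over" or "probably well over" qualifiers, the total is a lower bound, not an audited number.

```python
# Rough tally of the custom-silicon spending figures cited in the article.
# All values are the article's reported estimates, in U.S. dollars.
reported_spend = {
    "AWS Build on Trainium initiative": 110_000_000,
    "AWS Trainium/Inferentia development (est.)": 1_000_000_000,  # "probably well over $1 billion"
    "Tesla DOJO chip development (est.)": 1_000_000_000,          # "over $1 billion" since 2019
    "Tesla DOJO Buffalo supercomputer": 500_000_000,
}

total = sum(reported_spend.values())
# Lower bound, given the "over" qualifiers on the estimates above.
print(f"Cited custom-silicon spending: at least ${total / 1e9:.2f} billion")
```

Even this conservative tally lands above $2.6 billion before counting Amazon's separate $4 billion Anthropic investment.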

Chip Deal Shows US is for Sale: Rep. Krishnamoorthi

Bloomberg

an hour ago



Rep. Raja Krishnamoorthi (D-IL) shares his thoughts on President Trump extending the China tariff truce for 90 days and the impact the extension will have on the American workforce. He also states that "it's a dark day for our national security" when discussing President Trump's deal with Nvidia and AMD over China AI chips. Rep. Krishnamoorthi speaks with Joe Mathieu on the late edition of Bloomberg's "Balance of Power." (Source: Bloomberg)
