
Latest news with #ComputeUnifiedDeviceArchitecture

A guide to Nvidia's competitors: AMD, Qualcomm, Broadcom, startups, and more are vying to compete in the AI chip market

Business Insider

11-05-2025


A guide to Nvidia's competitors: AMD, Qualcomm, Broadcom, startups, and more are vying to compete in the AI chip market

Nvidia is undoubtedly dominant in the AI semiconductor space. Estimates fluctuate, but by some measures the company holds more than 80% of the market for the chips that reside inside data centers and make products like ChatGPT and Claude possible. That enviable dominance goes back almost two decades, to when researchers began to realize that the same kind of intensive computing that made complex, visually stunning video games and graphics possible could enable other types of computing too. The company started building its famous software stack, named Compute Unified Device Architecture or CUDA, 16 years before the launch of ChatGPT. For much of that time, it lost money. But CEO Jensen Huang and a team of true believers saw the potential for graphics processing units to enable artificial intelligence, and today Nvidia and its products are responsible for most of the artificial intelligence at work in the world. Thanks to the prescience of Nvidia's leadership, the company had a big head start in AI computing, but challengers are running fast to catch up. Some were competitors in the gaming or traditional semiconductor spaces; others have started from scratch.

AMD

AMD is Nvidia's top competitor in the market for AI computing in the data center. Helmed by its formidable CEO Lisa Su, AMD launched its own data center GPU, the MI300, in 2024, more than a full year after Nvidia's second generation of data center GPUs started shipping. Though experts and analysts have touted the chip's specifications and potential based on its design and architecture, the company's software still trails Nvidia's, making these chips harder to program and use to their full potential. Analysts estimate that the company has under 15% market share. But AMD executives insist that the company is committed to bringing its software up to par, and that the expected evolution of the accelerated computing market will benefit it, specifically the spread of AI into so-called edge devices like phones and laptops.

Qualcomm, Broadcom, and custom chips

Also challenging Nvidia are application-specific integrated circuits, or ASICs. These custom-designed chips are less versatile than GPUs, but they can be built for specific AI computing workloads at a much lower cost, which has made them a popular option for hyperscalers. Though multipurpose chips like Nvidia's and AMD's graphics processing units are likely to retain the largest share of the AI-chip market in the long term, custom chips are growing fast: Morgan Stanley analysts expected the market for ASICs to double in size in 2025. Companies that specialize in ASICs include Broadcom and Marvell, along with the Asia-based players Alchip Technologies and MediaTek. Marvell is in part responsible for Amazon's Trainium chips, while Broadcom builds Google's tensor processing units, among others. OpenAI, Apple, Microsoft, Meta, and TikTok parent company ByteDance have all entered the race for a competitive ASIC as well.

Amazon and Google

While also being prominent Nvidia customers, the major cloud providers like Amazon Web Services and Google Cloud Platform, often called hyperscalers, have made efforts to design their own chips, often with the help of semiconductor companies. Amazon's Trainium chips and Google's TPUs are the most scaled of these efforts and offer a cheaper alternative to Nvidia chips, mostly for the companies' internal AI workloads. However, the companies have shown some progress in getting customers and partners to use their chips as well: Anthropic has committed to running some workloads on Amazon's chips, and Apple has done the same with Google's.

Intel

Once the great American name in chip-making, Intel has fallen far behind its competitors in the age of AI. But the firm does have a line of AI chips, called Gaudi, that some reports say can stand up to Nvidia's in some respects. Intel installed a new CEO, semiconductor veteran Lip-Bu Tan, in the first quarter of 2025, and one of his first actions was to flatten the organization so that the AI chip operations report directly to him.

Huawei

Though Nvidia's hopeful American challengers are many, China's Huawei is perhaps the most concerning competitor of all for Nvidia and for those concerned with continued US supremacy in AI. Huang himself has called Huawei the "single most formidable" tech company in China. Reports that Huawei's AI chip innovation is catching up are increasing in frequency. New restrictions from the Biden and Trump administrations on shipping even lower-power GPUs to China have further incentivized the company to catch up and serve the Chinese markets for AI. Analysts say further restrictions being considered by the Trump administration are now unlikely to hamper China's AI progress.

Startups

Also challenging Nvidia are a host of startups offering new chip designs and business models to the AI computing market. These firms start at a disadvantage: they lack the full-sized sales and distribution machines that decades of chip sales in other corners of tech bring. But several are holding their own by finding use cases, customers, and distribution methods built around faster processing speeds or lower cost. These new AI players include Cerebras, Etched, Groq, Positron AI, SambaNova Systems, and Tenstorrent, among others.

Nvidia Builds An AI Superhighway To Practical Quantum Computing

Forbes

05-05-2025


Nvidia Builds An AI Superhighway To Practical Quantum Computing

At the GTC 2025 conference, Nvidia announced its plans for a new, Boston-based Nvidia Accelerated Quantum Research Center, or NVAQC, designed to integrate quantum hardware with AI supercomputers. Expected to begin operations later this year, it will focus on accelerating the transition from experimental to practical quantum computing. 'We view this as a long-term opportunity,' says Tim Costa, Senior Director of Computer-Aided Engineering, Quantum and CUDA-X at Nvidia. 'Our vision is that there will come a time when adding a quantum computing element into the complex heterogeneous supercomputers that we already have would allow those systems to solve important problems that can't be solved today.' Quantum computing, like AI (i.e., deep learning) a decade ago, is yet another emerging technology with an exceptional affinity for Nvidia's core product, the GPU. It is another milestone in Nvidia's successful ride on top of the technological shift re-engineering the computer industry: the massive move from serial data processing (executing instructions one at a time, in a specific order) to parallel data processing (executing multiple operations simultaneously). Over the last twenty years, says Costa, there were several applications where 'the world was sure it was serial and not parallel, and it didn't fit GPUs. And then, a few years later, rethinking the algorithms has allowed it to move on to GPUs.' Nvidia's ability to 'diversify' from its early focus on graphics processing (initially to speed up the rendering of three-dimensional video games) is due to the mid-2000s development of its software, the Compute Unified Device Architecture, or CUDA. This parallel computing platform and programming model allows developers to leverage the power of GPUs for general-purpose computing.
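The serial-versus-parallel distinction Costa describes can be sketched in a few lines of Python. This is an illustration, not Nvidia code: on a GPU the parallel form would be a CUDA kernel launched over thousands of threads, but NumPy's vectorized operations convey the same shift from "one instruction at a time" to "one operation over all the data at once":

```python
import numpy as np

def scale_serial(xs, a):
    # Serial model: visit one element at a time, in order
    out = []
    for x in xs:
        out.append(a * x)
    return out

def scale_vectorized(xs, a):
    # Parallel model: express the whole computation as one array operation;
    # the runtime is free to apply it across elements simultaneously
    return a * np.asarray(xs)

print(scale_serial([1, 2, 3], 10))
print(scale_vectorized([1, 2, 3], 10).tolist())
```

Both produce the same result; the point is that the second form contains no ordering constraint, which is what lets hardware like a GPU execute it in parallel.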
The key to CUDA's rapid adoption by developers and users of a wide variety of scientific and commercial applications was a decision by CEO Jensen Huang to apply CUDA to the entire range of Nvidia's GPUs, not just the high-end ones, thus ensuring its popularity. This decision—and the required investment—caused Nvidia's gross margin to fall from 45.6% in the 2008 fiscal year to 35.4% in the 2010 fiscal year. 'We were convinced that accelerated computing would solve problems that normal computers couldn't. We had to make that sacrifice. I had a deep belief in [CUDA's] potential,' Huang told Tae Kim, author of the recently published The Nvidia Way. This belief continues to drive Nvidia's search for opportunities where 'we can do lots of work at once,' says Costa. 'Accelerated computing is synonymous with massively parallel computing. We think accelerated computing will ultimately become the default mode of computing and accelerate all industries. That is the CUDA-X strategy.' Costa has been working on this strategy for the last six years, introducing the CUDA software to new areas of science and engineering. This has included quantum computing, helping developers of quantum computers and their users simulate quantum algorithms. Now, Nvidia is investing further in applying its AI mastery to quantum computing. Nvidia became one of the world's most valuable companies because the performance of the artificial neural networks at the heart of today's AI depends on the parallelism of the hardware they are running on, specifically the GPU's ability to process many linear algebra multiplications simultaneously. Similarly, the basic units of information in quantum computing, qubits, interact with other qubits, allowing for many different calculations to run simultaneously. Combining quantum computing and AI promises to improve AI processes and practices and, at the same time, escalate the development of practical applications of quantum computing. 
The focus of the new Boston research center is on 'using AI to make quantum computers more useful and more capable,' says Costa. 'Today's quantum computers are fifty to a hundred qubits. It's generally accepted now that truly useful quantum computing will come with a million qubits or more that are error corrected down to tens to hundreds of thousands of error-free or logical qubits. That process of error correction is a big compute problem that has to be done in real time. We believe that the methods that will make that successful at scale will be AI methods.' Quantum computing is a delicate process, subject to interference from 'noise' in its environment, resulting in at least one failure in every thousand operations. Increasing the number of qubits introduces more opportunities for errors. When Google announced Willow last December, it called it 'the first quantum processor where error-corrected qubits get exponentially better as they get bigger.' Its error correction software includes AI methods such as machine learning, reinforcement learning, and graph-based algorithms, helping identify and correct errors accurately, 'the key element to unlocking large-scale quantum applications,' according to Google. 'Everyone in the quantum industry realizes that the name of the game in the next five years will be quantum error correction,' says Doug Finke, Chief Content Officer at Global Quantum Intelligence. 'The hottest job in quantum these days is probably a quantum error correction scientist, because it's a very complicated thing.' The fleeting nature of qubits—they 'stay alive' for about 300 microseconds—requires speedy decisions and very complex math. A ratio of 1,000 physical qubits to one logical qubit would result in many possible errors. AI could help find out 'what are the more common errors and what are the most common ways of reacting to it,' says Finke. 
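The physical-to-logical overhead Finke describes can be illustrated with a toy classical analogy: a repetition code, where one logical bit is stored as many physical copies and decoded by majority vote. This is a deliberate simplification (real quantum error correction uses codes like the surface code and cannot simply copy qubits), and the error rate below is hypothetical, but it shows why more physical units per logical unit suppress logical errors:

```python
import random

def encode(bit, n):
    # One logical bit stored as n physical copies (classical repetition code)
    return [bit] * n

def add_noise(bits, p, rng):
    # Flip each physical bit independently with probability p
    return [b ^ (rng.random() < p) for b in bits]

def decode(bits):
    # Majority vote: correct as long as fewer than half the copies flipped
    return int(sum(bits) > len(bits) / 2)

rng = random.Random(7)
trials, p = 5_000, 0.05  # hypothetical 5% physical error rate, chosen for visibility
for n in (1, 5, 25):
    failures = sum(decode(add_noise(encode(1, n), p, rng)) != 1 for _ in range(trials))
    print(f"{n:>2} physical bits per logical bit: {failures} logical failures in {trials} trials")
```

Per the article, Google's Willow decoder replaces this simple vote with machine learning over measured error syndromes, but the scaling intuition is the same: more redundancy per logical unit drives the logical failure rate down, at the cost of a large real-time compute problem.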
Researchers from the Harvard Quantum Initiative in Science and Engineering and the Engineering Quantum Systems group at MIT will test and refine these error correction AI models at the NVAQC. Other collaborators include quantum startups Quantinuum, Quantum Machines, and QuEra Computing. They will be joined by Nvidia's quantum error correction research team and Nvidia's most advanced supercomputer. 'Later this year, we will have the center ready, and we'll be training AI models and testing them on integrated devices,' says Costa.

1 Artificial Intelligence (AI) Stock That Could Go Parabolic

Yahoo

01-05-2025


1 Artificial Intelligence (AI) Stock That Could Go Parabolic

Nvidia's stock is down 20% this year. But the company's biggest days of growth are still ahead. It's been a difficult year so far for Nvidia (NASDAQ: NVDA). Shares were down by more than 30% at one point, wiping more than $1 trillion off the company's valuation. After a brief rebound, shares are now down by just 20% year to date. The stock isn't as cheap as it was a few weeks ago, but this is still an incredible chance for patient investors to lock in a great price for a business that should grow exponentially in the years to come. There's one reason in particular that should get investors very excited. Nvidia is one of the most valuable companies in the world for a reason. Its graphics processing units (GPUs) are considered the best in the world, particularly for AI applications. They are crucial components that make the AI revolution possible, allowing companies to train and execute large models that require huge data sets to run properly. Its next-gen Blackwell chips have performance benchmarks that few companies can match. But it's not just about raw performance. The hardware is supported by a software suite called Compute Unified Device Architecture (CUDA). CUDA allows developers to tailor Nvidia's GPUs to their specific uses, unlocking performance upgrades that make the company's GPUs even more attractive. And once a customer builds on CUDA, it is essentially locked into Nvidia's hardware and software, giving the chipmaker control over both ends of the value chain. In summary, Nvidia has some of the best chips on the market, especially for AI applications, and its CUDA suite creates a durable competitive advantage by embedding itself in customers' stacks from both a hardware and a software perspective.
That's an incredibly valuable position considering the AI industry as a whole is expected to surpass $4 trillion by 2033, up from just $189 billion in 2023. The company's future is bright on many levels. And some recent comments from Morgan Stanley analyst Joseph Moore should get investors even more excited about its long-term prospects. The recent pullback in Nvidia's stock price stemmed from many causes. The market overall took a dive earlier this year, dragging many of the biggest names down with it. But given Nvidia's meteoric rise, many investors are also worried we're in the middle of an AI bubble, pushing valuations far beyond what is reasonable. These investors fear that even more downside is to come, but recent comments from Morgan Stanley analyst Joseph Moore should provide some relief. In a note to clients last week, Moore wrote: "The idea that we are in a digestion phase for AI is laughable given the obvious need for more inference chips which is driving a wave of very strong demand. Those who want to see this as a bubble are manifesting that through the various conversations about longer-term data center leases, but it's hard to have that view when you talk to actual customers about actual demand which remains strong." Those comments certainly line up with Nvidia's backlog figures. Many of its chips have 12-month wait lists, and there's been little sign from data center operators -- a key customer category for Nvidia's GPUs -- that spending won't grow tremendously in the years to come, even if there is some short-term noise along the way. "We are hearing about demand levels that are tens of billions above current run rates, limited by supply," Moore said in his note. The AI revolution is far from over. And even with a premium valuation, Nvidia still trades at just 25 times forward earnings, hardly unreasonable for a profitable company growing this quickly. 
With such huge demand projected for AI GPUs through this decade and beyond, don't be surprised to see Nvidia's stock continue to soar well beyond today's $2.7 trillion valuation. Ryan Vanzo has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Nvidia. The Motley Fool has a disclosure policy. 1 Artificial Intelligence (AI) Stock That Could Go Parabolic was originally published by The Motley Fool

Prediction: 2 Artificial Intelligence (AI) Stocks That Could Be Worth More Than Nvidia by 2030

Yahoo

25-04-2025


Prediction: 2 Artificial Intelligence (AI) Stocks That Could Be Worth More Than Nvidia by 2030

Despite the fanfare, the artificial intelligence (AI) revolution has just begun. With the AI market valued at $189 billion in 2023, the United Nations believes it will become a $4.8 trillion market by 2033. Companies like Nvidia have already taken advantage of this growth, soaring to multitrillion-dollar market caps. But the two AI businesses below trade at just fractions of that value. Over time, however, we could see one of these stocks surpass Nvidia's market cap, leading to huge gains for patient shareholders. Right now, most estimates suggest that Nvidia commands somewhere between 70% and 95% of the AI graphics processing unit (GPU) market. GPUs are critical components for training and executing AI models, as well as for many other machine learning tasks. Without them, the AI revolution would not be taking off at nearly the same size or scale. And right now, Nvidia dominates AI-specific GPU sales. What makes Nvidia's GPUs so special? Two things: early investment and vendor lock-in through its developer suite, CUDA. Way back in 2006, Nvidia's leadership recognized the importance of programmable infrastructure. That is, they understood that developers would want to customize their chips to optimize for certain parameters, allowing them to process data or run calculations faster and more efficiently than a stock GPU. To address this, Nvidia released Compute Unified Device Architecture (CUDA). This unlocked the power of parallel computing, making its chips more attractive than the competition when it came to performance optimization. Today, many customers use Nvidia products because of CUDA: they have built their software setups around Nvidia's hardware, creating what analysts call "vendor lock-in."
This lock-in has granted Nvidia an 80% to 95% market share for AI-related GPUs, a competitive advantage that will be hard to challenge. But eventually, another chipmaker will break through, and the companies below are my top bets when it comes to both risk and upside potential. The road to toppling Nvidia will be a long one. But over the coming years, I suspect either Intel (NASDAQ: INTC) or Advanced Micro Devices (NASDAQ: AMD) could break through. AMD is arguably in the best position to match Nvidia's AI dominance over the next five years. The company's latest GPUs have performed well against Nvidia's Blackwell chips on benchmark tests. Plus, Nvidia is having difficulty manufacturing enough chips to meet demand, leading to multi-month shipment delays, which gives AMD an opening to meet rising demand more quickly despite arguably inferior products and less vendor lock-in. Right now, Intel is far behind AMD in terms of catching up with Nvidia. But its market cap and valuation more than reflect that reality. Intel is valued at just $80 billion versus a $140 billion valuation for AMD. Meanwhile, Intel shares trade at just 1.5 times sales versus 5.6 times sales for AMD. Betting on Intel reaching Nvidia's valuation by 2030 is clearly a long shot. But the company is investing heavily to improve its chips' competitiveness, as well as its overall manufacturing capacity. And late last year it received a multibillion-dollar contract from Amazon for AI chips and another multibillion-dollar contract from the U.S. military. Which company am I betting on today to catch up with Nvidia? I'm going with AMD. Its chip performance and manufacturing capabilities heftily outpace Intel's for AI GPUs. And with 52% of revenue coming from data centers versus just 25% for Intel, AMD is much more leveraged to the AI economy.
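As a quick check on those multiples: price-to-sales is market cap divided by revenue, so the figures quoted above imply a rough revenue base for each company. The Python sketch below is back-of-the-envelope arithmetic using only the article's numbers, and it ignores trailing-versus-forward distinctions:

```python
def implied_revenue_b(market_cap_b: float, ps_ratio: float) -> float:
    # P/S = market cap / revenue, so revenue = market cap / P/S
    return market_cap_b / ps_ratio

# Figures quoted in the article, in billions of USD
print(f"Intel: ${implied_revenue_b(80, 1.5):.1f}B implied revenue")
print(f"AMD:   ${implied_revenue_b(140, 5.6):.1f}B implied revenue")
```

In other words, Intel's lower multiple partly reflects a larger existing revenue base, not only a steeper market discount.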
Nvidia's CUDA architecture will remain a strong barrier to competition for years to come. But both AMD and Intel have such cheap relative valuations that both are worth a small, speculative investment, even if the odds of overtaking Nvidia by 2030 remain slim. Ryan Vanzo has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Advanced Micro Devices, Intel, and Nvidia. The Motley Fool recommends the following options: short May 2025 $30 calls on Intel. The Motley Fool has a disclosure policy. Prediction: 2 Artificial Intelligence (AI) Stocks That Could Be Worth More Than Nvidia by 2030 was originally published by The Motley Fool

Nvidia thinks it has a better way of building AI agents

Mint

23-04-2025


Nvidia thinks it has a better way of building AI agents

Nvidia is getting into the artificial intelligence agents game with the release of a software platform that helps businesses build their own autonomous bots. The platform, called NeMo microservices, is available for all customers to use to build their "AI teammates," the chip maker said Wednesday. The Santa Clara, Calif.-based company said it has developed a better way of building AI agents that relies on open-source AI models like those provided by Meta Platforms and the startup Mistral AI. Nvidia is betting on open-source, or open-weight, AI technologies because they tend to offer businesses more flexibility and control than proprietary models, such as those offered by vendors like OpenAI and Anthropic. AI models that are open-weight share the numerical parameters, or "weights," that underlie them. That flexibility is critical for enterprises that hold a lot of confidential data but still want to use agents, said Joey Conway, a senior director of generative AI software for enterprise at Nvidia. "We wanted to focus on places where enterprises need the full control of open-weight models," Conway said. "And we don't see much of the market moving there." Nvidia joins the ranks of companies like OpenAI, Microsoft, Amazon and Google that aim to help businesses build their own AI agents, which are technologies that can independently perform tasks across various functions. Nvidia pegs the size of the AI agent market at $1 trillion, roughly the same as the enterprise software market that it predicts agents will replace. But over the past year, the technology has struggled to gain widespread adoption among enterprises. It has been difficult for developers to train models while readying their corporate data, Conway said. That's one gap Nvidia's NeMo microservices aims to fill, he said, by making agent-building easier with a system for incorporating private business data.

By selling software for agents, Nvidia continues a tactic it started with its popular Compute Unified Device Architecture, or CUDA, a programming platform that lets developers write applications for graphics processing units. That strategy of selling software alongside hardware keeps Nvidia's GPU business strong, said Dave McCarthy, a research vice president at research firm International Data Corp. Another reason businesses might want to use Nvidia's software to build agents is that it isn't tied to any particular cloud platform, McCarthy added. "In many cases, it's an easy choice for an enterprise to say, 'If you don't want to lock yourself into a particular cloud provider or hardware company, Nvidia is a good choice,'" he said. Write to Belle Lin at
