Watch CNBC's full interview with Nvidia CEO Jensen Huang

CNBC

a day ago

Nvidia CEO Jensen Huang discusses Europe's role in the AI race, the robotics and AV industries, and how U.S.-China relations could impact the future of technology with CNBC's Arjun Kharpal at the VivaTech conference in Paris.


Related Articles

Ian Buck built Nvidia's secret weapon. He may spend the rest of his career defending it.

Business Insider

an hour ago


Ian Buck, Nvidia's vice president of hyperscale and high-performance computing, felt a twinge of nostalgia during CEO Jensen Huang's keynote presentation at the company's GTC conference in March. Huang spent nearly eight minutes on a single slide listing software products. "This slide is genuinely my favorite," said Huang onstage in front of 17,000 people. "A long time ago — 20 years ago — this slide was all we had," the CEO continued. Buck said he was instantly transported back to 2004, when he started building the company's breakthrough software, called Compute Unified Device Architecture. Back then, the team had two people and two libraries. Today, CUDA supports more than 900 libraries and artificial intelligence models, each corresponding to an industry using Nvidia technology. "It is a passionate and very personal slide for me," Buck told reporters the next day.

The 48-year-old's contribution to Nvidia is hard-coded into the company's history, but his influence is just beginning. CUDA is the platform from which Nvidia jumped to, at one point, a 90% market share in AI computing. CUDA is how the company defends its moat.

One architecture to rule them all

Dion Harris, Nvidia's senior director of high-performance computing and AI factory solutions, sometimes forgets that he's in the room with the Dr. Ian Buck. Then it hits him that his boss, and friend, is a computing legend.

Since his undergraduate days at Princeton in the late 1990s, Buck had focused on graphics, a particularly punishing field within computer science with no obvious connection to AI at the time. "Computer graphics was such a dweebie field," said Stephen Witt, the author of "The Thinking Machine," which details Nvidia's rise from obscurity to the most valuable company in the world. "There was a stigma to working in computer graphics — you were maybe some kind of man-child if this was your focus," Witt said. While getting his Ph.D.
at Stanford, Buck connected multiple graphics processing units with the aim of stretching them to their limits. He had interned at Nvidia before pursuing his Ph.D., so he was familiar with the GPU. Initially, he used it for graphics like everyone else. Buck has said that he and his cohort would use the chips to play video games such as "Quake" and "Doom," but eventually he started asking himself what else his gaming setup could do. He became fixated on proving that you could use GPUs for anything and everything. He received funding from Nvidia and the Defense Advanced Research Projects Agency, among others, to develop tools to turn a GPU into a general-purpose supercomputing machine. When the company saw Brook, Buck's attempt at a programming language that applied the power of GPUs beyond graphics, it hired him.

He wasn't alone. John Nickolls, a hardware expert and then director of architecture for GPU computing, was also instrumental in building CUDA. Buck might have been forever paired with Nickolls had the latter not died of cancer in 2011. "Both Nickolls and Buck had this obsession with making computers go faster in the way that a Formula 1 mechanic would have an obsession with making the race car go faster," Witt told BI. (The author said Huang has expressed frustration that Nickolls doesn't get the recognition he deserves since his passing.)

Buck, Nickolls, and a small team of experts built a framework that let developers use an existing programming language, C, to harness the GPU's ability to run immense numbers of calculations simultaneously rather than one at a time, and to apply that power to any field. The result was CUDA, a vehicle to bring parallel computing to the masses.

The rise of CUDA as an essential element of the AI world wasn't inevitable. Huang insisted on making every chip compatible with the software, though hardly anyone used it even though it was free. In fact, Nvidia lost millions of dollars for more than a decade because of CUDA. The rest is lore.
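CUDA itself extends C/C++, but the programming model it popularized can be sketched in a few lines of plain Python: an elementwise "kernel" in which each output index is computed independently of every other. This independence is what lets a GPU assign one thread per index and run them all at once. (A conceptual illustration only, not actual CUDA code.)

```python
# Illustrative sketch of the data-parallel "kernel" pattern that CUDA
# exposes in C. Each output element depends only on its own index, so a
# GPU can run every index simultaneously; on a CPU we simply loop.

def saxpy_kernel(i, a, x, y):
    """Compute one element of a*x + y (the classic SAXPY operation)."""
    return a * x[i] + y[i]

def launch(kernel, n, *args):
    """Stand-in for a GPU kernel launch: apply the kernel at every index."""
    return [kernel(i, *args) for i in range(n)]

a = 2.0
x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
print(launch(saxpy_kernel, len(x), a, x, y))  # [12.0, 24.0, 36.0]
```

In real CUDA, `launch` corresponds to launching a grid of thousands of threads, each of which computes its own index from its thread and block IDs.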
When ChatGPT launched, Nvidia was already powering the AI computing revolution that is now the focus of $7 trillion in infrastructure spending, much of which eventually goes to Nvidia.

King of the nerds

Buck's smarts do have limits. He joked to an eager audience at GTC that quantum chromodynamics, a field of particle physics, just won't stick in his brain. But he thrives in Nvidia's notoriously rigorous environment. The Santa Clara, California, company has an intense culture that eschews one-on-one meetings and airs missteps and disagreements in public. It might sound terrifying, but for those with the brains to keep up, the directness and the rigor are ideal. For this report, Business Insider spoke with four people who have worked directly with Buck at Stanford, Nvidia, or both.

Those who know Buck personally describe him as gregarious and easygoing but capable of intensity when goals are on the line. He's focused on results rather than theory. In his public remarks, at panels and in interviews on behalf of Nvidia, Buck volleys between rapid, technical lines of thought and slower, simpler descriptions in layman's terms. At a GTC press conference, he detailed the latest developments in convolutional neural networks and then described proteins as "complicated 3D squigglies in your body." He describes the tiny, sensitive interconnects between parts of an Nvidia chipset like the Geek Squad explaining the back of a TV from memory — it's all in his head.

Harris said storytelling ability is particularly important in Nvidia's upper ranks. Because the company spent years with a promising technology waiting for a market, Huang still sees being early as a business strategy; he has branded it "going after zero billion-dollar markets." The potential of AI, "AI factories," and the infrastructure spending that goes with them is a story Nvidia can't stop telling. Buck's shilling skills have improved over the years.
But even in 15-year-old footage, he's most animated when explaining the inner workings of Nvidia's technology. "A lot of developers are amazing, but they say, 'Leave me alone. I'm going to write my code in the mountains somewhere,'" Paul Bloch, president of Nvidia partner DDN, told BI. Nvidia's leaders aren't like that, he said. Much of Nvidia's upper echelon may have the skills to match the reclusive set, but they don't choose between showmanship and code.

Ian Buck's next act

Ian Buck's work at Nvidia began with a simple mandate: make the GPU work for every industry. That mission is very nearly accomplished. There are hundreds of CUDA libraries targeting industries from weather forecasting to medical imaging. "The libraries are really there to connect the dots so that every business doesn't have to learn CUDA," Harris said. CUDA draws its strength from millions of developers, amassed over decades, who constantly innovate and improve on the platform.

So far, no one has caught up to Nvidia, but the competition is coming faster than ever. Even as Buck spoke at GTC, developers across the world were trying to break CUDA's dominance. On the first night of the conference, a cast of competitors convened by TensorWave, an AI cloud company exclusively using chips from AMD, Nvidia's only U.S. rival, held an event titled "Beyond CUDA." TensorWave cofounder Jeff Tatarchuck said it included more than "24 presenters talking about what they're doing to overcome the CUDA moat." AMD, which also presented at the event, is making an explicit effort to catch up on the software side of AI computing, but industry analysts say it isn't there yet.

Harris told BI that Buck's team spends a lot of time speaking with researchers to stay on top. That's always been true, but the nature of the work has changed. A decade ago, Buck was convincing researchers to apply accelerated computing to their problems; now the tables have turned.
"One of the most challenging parts of my job often is to try to predict the future, but AI is always surprising us," Buck said at a Bank of America conference this month. Understanding what the smartest minds in AI need from Nvidia is paramount. Many saw DeepSeek, the company that spooked markets with its ostensibly cheap reasoning AI model, as a threat to Nvidia, since the team bypassed CUDA to squeeze out the performance gains that allowed it to achieve competitive results with less compute. But Buck recently described the Chinese team as "one of the best CUDA developers out there."

AI is entering a new phase as more companies commercialize their tools. Even with Nvidia's enormous head start, built in part by Buck, the pace is intense. For example, one of the products Nvidia debuted at GTC, Dynamo, is an inference computing platform designed to adapt to the rise of reasoning models. Nvidia launched Dynamo a couple of months after the DeepSeek earthquake, but some users had already built their own versions. That's how fast AI is evolving. "Inference is really hard. It's wickedly hard," Buck said at GTC.

Talent is also a big part of how Nvidia will try to maintain its dominance, and another place where, Witt says, Buck has value beyond his technical skills. He's not exactly a household name, even at Stanford. But for a certain type of developer, the ones who can play in Buck's extremely complex sandbox, he is a draw. "Everyone's trying to hire these guys, especially after DeepSeek," Witt said. "This was not a sexy domain in computer science for a long time. Now it is white hot." "Now these guys are going for big money. So I think Ian Buck has to be out there advertising what his group is doing," Witt continued.

Nvidia declined to make Ian Buck available for an interview with Business Insider and declined to comment on this report.

Who's Nvidia's next CEO?
Buck is more than a decade younger than Huang, who is 62, and doesn't plan on going anywhere anytime soon. Yet, questions about succession are inevitable. Lip-Bu Tan, a semiconductor industry legend who recently became Intel's CEO, told BI that Buck is one of a handful of true collaborators for Huang, who has more than 60 direct reports. "Jensen has three right-hand men," Tan told BI before he took over at Intel. Buck is one. Vice President of GPU Engineering Jonah Alben is another. And CFO Colette Kress, though clearly not a man, is the third, Tan said. Jay Puri, Nvidia's executive vice president for worldwide field operations, and Sanja Fidler, vice president of AI research, are also names that come up in such conversations. "I don't think Ian spends a lot of time doing business strategy. He's more like the world's best mechanic," Witt said.

Stock-Split History Is Being Made Next Week by an Industry-Leading Company That's Gained 400% in Just Over 5 Years

Yahoo

2 hours ago


Investors have rallied around influential companies conducting splits. To date, two industrial titans -- one of which has risen more than 210,000% since its initial public offering -- have completed forward stock splits. Next week, a high-flying financial stock, whose key performance indicators are rocketing higher across the board, will become Wall Street's newest stock-split stock.

For more than three decades, investors have almost always had a next-big-thing trend or innovation to hold their attention. It started with the advent and proliferation of the internet in the mid-1990s, followed by genome decoding, business-to-business e-commerce, nanotechnology, 3D printing, blockchain technology, cannabis, and the metaverse. Today, artificial intelligence (AI) is captivating the attention and wallets of professional and everyday investors. But every so often, more than one big trend can exist at the same time. In addition to the evolution of AI, investors have been rallying around influential companies announcing stock splits.

A stock split is a tool publicly traded companies can lean on to cosmetically alter their share price and outstanding share count by the same factor. These adjustments are considered cosmetic because they don't change a company's market cap or its underlying operating performance. Although stock splits can nominally adjust a company's share price in either direction, one type is overwhelmingly preferred by the investing community. Reverse splits, which increase a company's share price while correspondingly reducing its outstanding share count, are often avoided by investors; the companies announcing and completing them are typically struggling and attempting to avoid delisting from a major U.S. stock exchange. On the other hand, investors are willingly lured by businesses conducting forward splits.
This type of split lowers a company's share price to make it more nominally affordable for everyday investors and/or employees who aren't able to purchase fractional shares. Forward splits are typically completed by companies on the leading edge of the innovation curve within their respective industries. Furthermore, an analysis from Bank of America Global Research showed that, since 1980, companies enacting forward splits have more than doubled the average return of the benchmark S&P 500 in the 12 months following their split announcement (25.4% vs. 11.9%).

To date, two influential stock-split stocks have taken center stage. Next week, the Class of 2025 stock-split stocks will welcome a new member.

Last year, more than a dozen high-profile businesses completed a split, with many of them traceable to the tech sector. This included Nvidia's much-anticipated 10-for-1 split, as well as AI networking solutions specialist Broadcom's first-ever split (also 10-for-1). This year's stock-split theme is all about non-tech titans making their shares more accessible to everyday investors.

Although it was the last of the three companies to announce its intent to split, wholesale industrial and construction supplies company Fastenal (NASDAQ: FAST) became the first notable business to complete its forward split (2-for-1) after the close of trading on May 21. This marked its ninth split in the last 37 years. Shares of Fastenal have rocketed higher by well over 210,000% since its initial public offering in 1987 (including dividends), reflecting the company becoming increasingly tied to the supply chains of notable industrial and construction companies. Fastenal has been integrating its managed inventory solutions on-site to generate instant revenue, as well as to gain a better understanding of the supply chain needs of its leading customers. Furthermore, Fastenal benefits from the nonlinearity of economic cycles.
Though recessions are a normal and inevitable part of the economic cycle, they're historically short-lived. In comparison, the average economic expansion since the end of World War II has endured around five years. A cyclically tied company like Fastenal therefore spends a disproportionate amount of time growing in lockstep with its biggest clients.

The other big-time stock split that's been announced and completed is that of auto parts supplier O'Reilly Automotive (NASDAQ: ORLY). Following shareholder approval of its forward split in mid-May, O'Reilly completed its largest-ever split, 15-for-1, after the close of trading on June 9. One clear-cut catalyst for O'Reilly and its peers is the steady aging of cars and light trucks on American roadways. Whereas the average age of vehicles in the U.S. stood at 11.1 years in 2012, according to a report by S&P Global Mobility, it has climbed to an all-time high of 12.8 years as of 2025. With auto loan interest rates climbing and President Donald Trump's tariff and trade policy sowing confusion, drivers and mechanics are likely to rely on O'Reilly Automotive to keep aging vehicles in tip-top running condition.

A more company-specific reason O'Reilly Automotive stock has steadily climbed is its sensational share-repurchase program. Since initiating a buyback program in 2011, the company has spent more than $25.9 billion to repurchase close to 60% of its outstanding shares. A company that regularly grows its net income and reduces its outstanding share count should enjoy a boost to its earnings per share.

Wall Street's third high-profile, non-tech, industry-leading stock split of 2025 is right around the corner. Automated electronic brokerage firm Interactive Brokers Group (NASDAQ: IBKR) announced on April 24 that it would complete a 4-for-1 forward split following the close of trading on June 17.
This split, historic in the sense that it's the first in the company's history, will reduce its share price from north of $205, as of this writing on June 10, to around $50 per share. Since the start of May 2020, a period of just over five years, shares of Interactive Brokers have soared by 400%. This advance is a function of macro and company-specific factors working in its favor.

The broad-based theme that helps Interactive Brokers succeed is long-lasting bull markets. Even though stock market corrections and periods of outsized volatility offer some of the best investment opportunities, customers at Interactive Brokers tend to be more willing to trade and hold additional equity on the platform when stocks are climbing. With the exception of the 2022 bear market, which lasted less than a year, and the short-lived tariff-induced swoon in April 2025, the bulls have been running wild on Wall Street for the last five years.

Interactive Brokers' site features have also hit home with its clients. The company's heavy reliance on technology and automation allows it to pay higher interest on cash balances, as well as charge lower margin fees, depending on the amount being borrowed. This combination of enduring bull markets and unique features has led to sweeping growth in virtually all of Interactive Brokers Group's key performance indicators (KPIs). Over the trailing-two-year period ended March 31, 2025, the number of customer accounts soared by 65% to 3.62 million, customer equity on the platform rose by 67% to almost $574 billion, and daily average revenue trades -- total customer orders divided by the number of trading days in a period -- jumped 72% to 3.52 million. In other words, when investors feel confident about the state of the stock market, they open accounts, trade more frequently, use margin more often, and keep more of their capital tied up with Interactive Brokers' platform.
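The mechanics of a forward split are simple enough to check in a few lines. A sketch using Interactive Brokers' announced 4-for-1 ratio; the ~$205 price is from the article, while the share count is a round number chosen for illustration, not the company's actual total:

```python
def forward_split(price, shares_outstanding, ratio):
    """Apply an N-for-1 forward split: the price divides by N, the share
    count multiplies by N, and market cap is left unchanged."""
    return price / ratio, shares_outstanding * ratio

# Pre-split figures: price from the article, share count illustrative only.
price, shares = 205.00, 100_000_000
new_price, new_shares = forward_split(price, shares, 4)

print(new_price)   # 51.25 -- "around $50 per share"
print(new_shares)  # 400000000
print(price * shares == new_price * new_shares)  # True: market cap unchanged
```

This is why the article calls splits "cosmetic": both sides of the market-cap product scale by reciprocal factors, so nothing about the underlying business changes.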
The only knock you'll find against owning Interactive Brokers' stock is that its forward price-to-earnings (P/E) ratio of 26 represents a 29% premium to its average forward P/E over the trailing five-year period. Though this likely isn't a big deal for long-term investors, considering the company's KPIs keep heading in the right direction, it might limit upside for its shares in the coming quarters.

Before you buy stock in Interactive Brokers Group, consider this: The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now, and Interactive Brokers Group wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years. Consider when Netflix made this list on December 17, 2004: if you invested $1,000 at the time of our recommendation, you'd have $657,871!* Or when Nvidia made this list on April 15, 2005: if you invested $1,000 at the time of our recommendation, you'd have $875,479!* Now, it's worth noting Stock Advisor's total average return is 998% -- a market-crushing outperformance compared to 174% for the S&P 500. *Stock Advisor returns as of June 9, 2025

Bank of America is an advertising partner of Motley Fool Money. Sean Williams has positions in Bank of America. The Motley Fool has positions in and recommends Bank of America, Interactive Brokers Group, and Nvidia. The Motley Fool recommends Broadcom and recommends the following options: long January 2027 $175 calls on Interactive Brokers Group and short January 2027 $185 calls on Interactive Brokers Group. The Motley Fool has a disclosure policy.

Stock-Split History Is Being Made Next Week by an Industry-Leading Company That's Gained 400% in Just Over 5 Years was originally published by The Motley Fool

Nvidia Wants To Build A Planet-Scale AI Factory With DGX Cloud Lepton

Forbes

2 hours ago


In April 2025, Nvidia quietly acquired Lepton AI, a Chinese startup specializing in GPU cloud services. Founded in 2023, Lepton AI focused on renting out GPU compute aggregated from diverse infrastructure and cloud providers. While the deal's value is unknown, Lepton AI's founders, Yangqing Jia (a former VP of technology at Alibaba) and Junjie Bai, joined Nvidia to continue building the product. Lepton AI had previously raised $11 million in seed funding from investors such as CRV and Fusion Fund. Nvidia rebranded Lepton AI as DGX Cloud Lepton and relaunched it in June 2025. According to Nvidia, the service delivers a unified AI platform and compute marketplace that connects developers to tens of thousands of GPUs from a global network of cloud providers.

How Does DGX Cloud Lepton Work?

DGX Cloud Lepton serves as a unified AI platform and marketplace, bringing a global network of GPU resources closer to developers. It aggregates the GPU capacity offered by cloud providers such as AWS, CoreWeave, and Lambda behind a consistent software interface. This lets developers access GPU compute through a centralized interface, regardless of where a cluster is located. On top of the underlying GPU compute, Nvidia exposes a consistent software platform powered by NIM, NeMo, Blueprints, and Cloud Functions. Irrespective of the cloud infrastructure, developers can expect the same software stack to run their AI workflows.

DGX Cloud Lepton supports three primary workflows:

Dev Pods: Interactive development environments (e.g., Jupyter notebooks, SSH, VS Code) for prototyping and experimentation.

Batch Jobs: Large-scale, non-interactive workloads (e.g., model training, data preprocessing) that can be distributed across multiple nodes, with real-time monitoring and detailed metrics.
Inference Endpoints: Deploy and manage models (base, fine-tuned, or custom) as scalable, high-availability endpoints, with support for both Nvidia NIM and custom containers.

Beyond this, DGX Cloud Lepton delivers operational features such as real-time monitoring and observability, on-demand auto-scaling, custom workspaces, and security and compliance controls. Developers can choose their preferred region to maintain data locality and comply with data sovereignty requirements.

DGX Cloud Lepton's Growing Network

Nvidia has partnered with major cloud and infrastructure providers worldwide. Andromeda, AWS, CoreWeave, Foxconn, Hugging Face, Lambda, Microsoft Azure, Mistral AI, Together AI, and Yotta are some of the listed partners for DGX Cloud Lepton. At the recently held GTC event in Paris, Nvidia announced that it is working with some of the leading European cloud providers to help local developers meet data sovereignty needs. It also announced a partnership with Hugging Face to deliver training clusters as a service. Nvidia is collaborating with European venture capital firms Accel, Elaia, Partech, and Sofinnova Partners to provide up to $100,000 in GPU capacity credits, plus assistance from Nvidia specialists, to eligible portfolio firms via DGX Cloud Lepton.

While pricing varies by cloud provider, the service is currently in preview, and developers can sign up to apply for early access. With DGX Cloud Lepton, Nvidia aims to make GPU computing accessible to developers globally. Instead of launching its own cloud platform to compete with the hyperscalers, Nvidia has chosen to partner with them to deliver aggregated compute resources to developers.
