Latest news with #TheNvidiaWay


Asia Times
21-05-2025
The Nvidia way of building a great company
Nvidia is now worth about three trillion US dollars, making it the second most valuable public company in the world. By establishing leadership in AI chips, Nvidia holds a unique industry position for growth and profitability. This accomplishment, under the leadership of Jensen Huang, the firm's CEO and co-founder, ranks him among the historic entrepreneurs who pioneered major new industrial sectors.

How this success was achieved is the subject of a new book by Tae Kim, a writer for Barron's, entitled 'The Nvidia Way.' As Kim tells it, the company's extraordinary success is the result of brilliant technology execution, excellent timing in new product strategy and ultra-fast scaling to meet market needs. Underlying that success is a unique, unconventional management method developed by Huang that relies on practices generally deemed impractical: 'If Nvidia had not evolved from its early, more conventional form, it would not have survived even with Jensen in charge. But the organization dynamic he eventually created—one that represents the exact opposite of the "best practices" in most of the rest of corporate America—has made it possible for the company to withstand and thrive amid the pressures of an externally unforgiving market.'

Huang's organization addresses a core problem of technology companies: identifying important market developments early and responding to them rapidly enough to avoid becoming obsolete. Bureaucracy grows along with company size. As companies organized in conventional hierarchies grow, senior leaders become increasingly isolated from day-to-day developments in their markets. The layers of management required for strategic decisions breed an appetite for low-risk investments, and overlapping internal committees slow the timely decision-making needed to avoid ultimate failure. Meanwhile, the technology world moves ever faster.
Early on, Nvidia went through a period of conventional management structure and nearly failed. Huang decided to streamline the company's management and make it rapidly responsive to market needs. To that end, he built something radically different around his personal skills: a flat management structure, combined with his own daily involvement in the corporate activities he deemed relevant to timely decisions, designed to eliminate the lag between market needs and the company's response.

He accomplished this through an established flow of daily interactive emails from as many as 60 key staff members, which kept him updated as quickly as possible on competitive and product developments. In effect, he used these emails to communicate with and manage a reporting group of many senior managers. Huang further shortens decision-making between himself and this team with large in-person meetings focused on problem-solving, using whiteboards to present and discuss problems and solutions; significantly, PowerPoint materials are not used. The focus is on current problems and the creative actions the company's leadership must take to solve them. At the end of such meetings, Huang offers his conclusions on the actions the company will take; such meetings set or reset strategy with key staff involved and briefed.

Having been personally involved in technology company management, I can see that Huang's organization is a personal structure. It does, however, highlight the importance of ever more efficient strategic decision-making, which needs to be a goal of every company. He has certainly shown one way to eliminate the bureaucratic smothering of innovation.

Henry Kressel is a technologist, inventor and author with extensive experience in technology development and corporate management. He is also a long-term private equity investor in technology companies.


Forbes
05-05-2025
Nvidia Builds An AI Superhighway To Practical Quantum Computing
At the GTC 2025 conference, Nvidia announced its plans for a new, Boston-based Nvidia Accelerated Quantum Research Center, or NVAQC, designed to integrate quantum hardware with AI supercomputers. Expected to begin operations later this year, it will focus on accelerating the transition from experimental to practical quantum computing. 'We view this as a long-term opportunity,' says Tim Costa, Senior Director of Computer-Aided Engineering, Quantum and CUDA-X at Nvidia. 'Our vision is that there will come a time when adding a quantum computing element into the complex heterogeneous supercomputers that we already have would allow those systems to solve important problems that can't be solved today.'

Quantum computing, like AI (i.e., deep learning) a decade ago, is yet another emerging technology with an exceptional affinity for Nvidia's core product, the GPU. It is another milestone in Nvidia's successful ride atop the technological shift re-engineering the computer industry: the massive move from serial data processing (executing instructions one at a time, in a specific order) to parallel data processing (executing multiple operations simultaneously). Over the last twenty years, says Costa, there were several applications where 'the world was sure it was serial and not parallel, and it didn't fit GPUs. And then, a few years later, rethinking the algorithms has allowed it to move on to GPUs.'

Nvidia's ability to diversify from its early focus on graphics processing (initially to speed up the rendering of three-dimensional video games) is due to the development in the mid-2000s of its software, the Compute Unified Device Architecture, or CUDA. This parallel computing platform and programming model allows developers to leverage the power of GPUs for general-purpose computing.
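The serial-to-parallel shift the article describes can be sketched in miniature. The toy example below (an illustrative sketch, not Nvidia's code) applies the same independent operation to every element of a list twice: once as a serial loop, and once through Python's standard-library thread pool, which stands in for the thousands of GPU threads a CUDA kernel would launch. The function names are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def serial_square(values):
    # Serial: one instruction stream, one element at a time.
    out = []
    for v in values:
        out.append(v * v)
    return out

def parallel_square(values, workers=4):
    # Data-parallel: each element's computation is independent,
    # so the work can be spread across workers. A GPU exploits
    # exactly this independence, but with thousands of threads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda v: v * v, values))

data = list(range(8))
assert serial_square(data) == parallel_square(data)
```

The point Costa makes is that many workloads once assumed to be inherently serial turned out, after their algorithms were rethought, to have this element-independence and thus to fit GPUs.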
The key to CUDA's rapid adoption by developers and users of a wide variety of scientific and commercial applications was a decision by CEO Jensen Huang to apply CUDA to the entire range of Nvidia's GPUs, not just the high-end ones, thus ensuring its popularity. This decision—and the required investment—caused Nvidia's gross margin to fall from 45.6% in the 2008 fiscal year to 35.4% in the 2010 fiscal year. 'We were convinced that accelerated computing would solve problems that normal computers couldn't. We had to make that sacrifice. I had a deep belief in [CUDA's] potential,' Huang told Tae Kim, author of the recently published The Nvidia Way.

This belief continues to drive Nvidia's search for opportunities where 'we can do lots of work at once,' says Costa. 'Accelerated computing is synonymous with massively parallel computing. We think accelerated computing will ultimately become the default mode of computing and accelerate all industries. That is the CUDA-X strategy.' Costa has been working on this strategy for the last six years, introducing the CUDA software to new areas of science and engineering. This has included quantum computing, helping developers of quantum computers and their users simulate quantum algorithms. Now, Nvidia is investing further in applying its AI mastery to quantum computing.

Nvidia became one of the world's most valuable companies because the performance of the artificial neural networks at the heart of today's AI depends on the parallelism of the hardware they run on, specifically the GPU's ability to perform many matrix multiplications simultaneously. Similarly, the basic units of information in quantum computing, qubits, interact with other qubits, allowing many different calculations to run simultaneously. Combining quantum computing and AI promises to improve AI processes and practices and, at the same time, accelerate the development of practical applications of quantum computing.
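The matrix multiplications mentioned above are the workhorse of neural networks: a dense layer is just an input matrix times a weight matrix, and every output cell is independent of every other. A minimal pure-Python sketch (the shapes and values here are invented for illustration):

```python
def matmul(a, b):
    # Naive dense matrix multiply: each output cell is a dot
    # product of a row of `a` with a column of `b`. Because the
    # cells do not depend on one another, a GPU can compute all
    # of them at the same time.
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

# A toy 2-input, 3-unit dense layer applied to one input vector.
x = [[1.0, 2.0]]            # batch of one input
w = [[0.5, -1.0, 2.0],      # 2x3 weight matrix
     [1.5,  0.0, 1.0]]
print(matmul(x, w))  # [[3.5, -1.0, 4.0]]
```

A real network stacks many such layers over large batches, which is why inference and training map so naturally onto massively parallel hardware.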
The focus of the new Boston research center is on 'using AI to make quantum computers more useful and more capable,' says Costa. 'Today's quantum computers are fifty to a hundred qubits. It's generally accepted now that truly useful quantum computing will come with a million qubits or more that are error corrected down to tens to hundreds of thousands of error-free or logical qubits. That process of error correction is a big compute problem that has to be done in real time. We believe that the methods that will make that successful at scale will be AI methods.'

Quantum computing is a delicate process, subject to interference from 'noise' in its environment, resulting in at least one failure in every thousand operations. Increasing the number of qubits introduces more opportunities for errors. When Google announced Willow last December, it called it 'the first quantum processor where error-corrected qubits get exponentially better as they get bigger.' Its error correction software includes AI methods such as machine learning, reinforcement learning, and graph-based algorithms, helping identify and correct errors accurately, 'the key element to unlocking large-scale quantum applications,' according to Google.

'Everyone in the quantum industry realizes that the name of the game in the next five years will be quantum error correction,' says Doug Finke, Chief Content Officer at Global Quantum Intelligence. 'The hottest job in quantum these days is probably a quantum error correction scientist, because it's a very complicated thing.' The fleeting nature of qubits—they 'stay alive' for about 300 microseconds—requires speedy decisions and very complex math. A ratio of 1,000 physical qubits to one logical qubit leaves many possible error patterns to track. AI could help find out 'what are the more common errors and what are the most common ways of reacting to it,' says Finke.
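The physical-to-logical qubit idea can be illustrated with a classical analogy (real quantum error correction uses far more sophisticated schemes, such as surface codes, and cannot simply copy qubit states; this repetition-code sketch with invented function names only shows the redundancy-plus-decoding pattern). One logical bit is spread over roughly a thousand physical bits, each failing at about the 1-in-1,000 rate the article cites, and a decoder recovers the logical value:

```python
import random

def encode(bit, n=1001):
    # Repetition code: spread one logical bit across n physical bits.
    return [bit] * n

def noisy(physical_bits, p=0.001):
    # Flip each physical bit independently with probability p,
    # mimicking the roughly 1-in-1,000 error rate cited above.
    return [b ^ (random.random() < p) for b in physical_bits]

def decode(physical_bits):
    # Majority vote: a classical stand-in for the real-time
    # decoding step that QEC must perform at scale.
    return int(sum(physical_bits) > len(physical_bits) / 2)

random.seed(0)
assert decode(noisy(encode(1))) == 1
assert decode(noisy(encode(0))) == 0
```

The real-time constraint comes from the roughly 300-microsecond qubit lifetime quoted above: the decoder must classify and react to error patterns within that window, which is the step Nvidia and its collaborators believe AI models can accelerate.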
Researchers from the Harvard Quantum Initiative in Science and Engineering and the Engineering Quantum Systems group at MIT will test and refine these error correction AI models at the NVAQC. Other collaborators include quantum startups Quantinuum, Quantum Machines, and QuEra Computing. They will be joined by Nvidia's quantum error correction research team and Nvidia's most advanced supercomputer. 'Later this year, we will have the center ready, and we'll be training AI models and testing them on integrated devices,' says Costa.