TSMC shows off new tech for stitching together bigger, faster chips
Taiwan Semiconductor Manufacturing Co on Wednesday unveiled technology for making faster chips and putting them together in dinner-plate-sized packages that will boost performance needed for artificial intelligence applications.
It said its A14 manufacturing technology will arrive in 2028 and will be able to produce processors that are 15% faster at the same power consumption as its N2 chips, which are due to enter production this year, or that use 30% less power at the same speed.
The world's biggest contract chipmaker, which counts Nvidia and Advanced Micro Devices as clients, said its forthcoming "System on Wafer-X" will be able to weave together at least 16 large computing chips, along with memory chips, fast optical interconnections and new technology to deliver thousands of watts of power to the chips.
By comparison, Nvidia's current flagship graphics processing units consist of two large chips stitched together, and its "Rubin Ultra" GPUs due out in 2027 will stitch four together.

TSMC said it plans to build two factories to carry out the work near its chip plants in Arizona, with plans for a total of six chip factories, two packaging factories, and a research and development center at the site.
"As we continue to bring more advanced silicon to Arizona, you need a continuous effort to enhance that silicon," Kevin Zhang, deputy co-chief operations officer and senior vice president, said on Wednesday. Intel, which is working to build out a contract manufacturing business to compete with TSMC, is due to announce new manufacturing technologies next week. Last year, it claimed it would overtake TSMC in making the world's fastest chips.
Demand for massive AI chips that are packaged together has shifted the battleground between the two firms from simply making fast chips to integrating them - a complex task that requires working closely with customers.
"They're both neck-and-neck. You're not going to pick one over the other because they have the technological lead," said Dan Hutcheson, vice chair at analyst firm TechInsights. "You're going to pick one over the other for different reasons."
Customer service, pricing and how much wafer allocation a customer can secure are likely to influence which chip manufacturer a company chooses.
Related Articles


Hindustan Times
AMD Radeon RX 9060 XT released globally: What you need to know before you buy
The AMD Radeon RX 9060 XT is the latest release in the midrange GPU market. It is positioned as a direct competitor to NVIDIA's RTX 5060 Ti and targets gamers looking for strong 1080p or 1440p gaming performance without spending too much. Here is a breakdown of what you need to know before considering this GPU for your next purchase.

The RX 9060 XT delivers solid 1080p and 1440p gaming performance, matching the NVIDIA RTX 5060 Ti at a much lower price. In benchmarks, it performs significantly better than the previous RX 7600 XT and matches the RTX 5060 Ti at 1440p. It is capable of running most modern titles at about 70 FPS at 1440p, which is commendable at this price point. It can also run modern titles up to 4K at playable frame rates, thanks to the FSR 4 upscaling technology released earlier this year.

Ray tracing has improved substantially over the years in AMD's GPUs but still lags behind NVIDIA's DLSS 4 and frame generation technology for ray-traced workloads. The card can deliver playable frame rates at 1440p with ray tracing enabled, but performance drops significantly at 4K with path tracing enabled.

The RX 9060 XT features FSR 4, powered by second-generation AI accelerators, which boosts frame rates and image quality. The HYPR-RX suite includes Radeon Super Resolution and Fluid Motion Frames for even smoother gameplay.

This GPU is ideal for anyone looking for high frame rates at 1080p and 1440p, along with extra VRAM for future titles. It suits gamers who want to save money without compromising on modern features like AI upscaling and ray tracing. AI hobbyists can also benefit from the card's robust AI acceleration capabilities.

The Radeon RX 9060 XT was officially launched globally on June 5, 2025, with two models varying in VRAM. Official AIB partners like ASUS, MSI and Sapphire may release the GPU with pricing between ₹44,999 and ₹49,999 in India.


Mint
Nvidia dumps $4.5 billion in chips amid US trade restrictions: Why are China-specific H20 chips unusable?
Nvidia, the world's top chipmaker, isn't immune to the unpredictable fallout of global politics. Last week, as the company delivered another strong earnings report, CEO Jensen Huang revealed the sobering news of a $4.5 billion write-off for chips that were supposed to be sent to China and now have nowhere to go. "We are taking a multibillion dollar write off on inventory that cannot be sold or repurposed," Huang said during the earnings call, as quoted by Fortune.

The chips behind the massive loss, known as the H20, were designed by Nvidia specifically for Chinese clients to meet earlier US export restrictions, according to the report. They weren't top-of-the-line, but they were still advanced enough for AI development and legal to ship under the Biden administration's rules, as per Fortune.

Nvidia now faces a significant setback with a $4.5 billion write-off due to US export restrictions, rendering its China-specific H20 chips unusable. Designed to comply with previous regulations, these chips are now banned under new rules, leaving Nvidia unable to repurpose them for other markets.

Things changed after US President Donald Trump took office: in early April, the administration went a step further and banned exports of even these chips, according to the report.

However, Nvidia's newest chips have made gains in training large artificial intelligence systems, new data released on Wednesday showed, with the number of chips required to train large language models dropping dramatically. MLCommons, a nonprofit group that publishes benchmark performance results for AI systems, released new data about chips from Nvidia and Advanced Micro Devices, among others, for training, in which AI systems are fed large amounts of data to learn from.

While much of the stock market's attention has shifted to a larger market for AI inference, in which AI systems handle questions from users, the number of chips needed to train the systems is still a key competitive concern. China's DeepSeek claims to have created a competitive chatbot using far fewer chips than U.S. rivals.

The results were the first that MLCommons has released about how chips fared at training AI systems such as Llama 3.1 405B, an open-source AI model released by Meta Platforms. The model has a large enough number of what are known as "parameters" to give an indication of how the chips would perform at some of the most complex training tasks in the world, which can involve trillions of parameters.

Nvidia and its partners were the only entrants that submitted data about training that large model, and the data showed that Nvidia's new Blackwell chips are, on a per-chip basis, more than twice as fast as the previous generation of Hopper chips.


Time of India
Hewlett Packard Enterprise beats Q2 revenue estimates on AI demand; records $1.36 billion charge
Hewlett Packard Enterprise beat Wall Street revenue estimates for the second quarter on Tuesday, driven by demand for its artificial-intelligence servers and hybrid cloud segment. Shares of the server maker, which also recorded an impairment charge of $1.36 billion in the reported quarter, were up 3.2% in extended trading.

HPE has benefited from a surge in spending on advanced data center architecture designed to support the complex processing needs of generative AI. The boom in GenAI has bumped up demand for HPE's AI-optimized servers, which are powered by Nvidia processors and can run complex applications.

"In a very dynamic macro environment, we executed our strategy with discipline," CEO Antonio Neri said. HPE was focusing on achieving efficiencies and streamlining operations across its businesses, said CFO Marie Myers.

For the quarter ended April 30, HPE posted revenue of $7.63 billion, ahead of analysts' average estimate of $7.45 billion, according to data compiled by LSEG. Adjusted profit per share came in at 38 cents, beating an estimate of 32 cents per share. The company forecast third-quarter revenue between $8.2 billion and $8.5 billion, compared to an estimate of $8.17 billion.