Latest news with #MTIA
Yahoo
13-03-2025
- Business
- Yahoo
Meta's Secret AI Chip Could Disrupt Nvidia's Grip on AI Hardware
Meta Platforms (NASDAQ:META) is testing an in-house AI training chip, aiming to lower infrastructure costs and reduce reliance on Nvidia (NASDAQ:NVDA). The move signals Meta's deeper push into artificial intelligence as it seeks greater control over its AI hardware. According to Reuters, the company has begun small-scale deployment of the chip, with plans to scale production if testing proves successful.

Meta currently spends billions on Nvidia's GPUs, a key component of its AI operations, and hopes its own chip can improve cost efficiency while optimizing tasks like recommendation systems and generative AI. The chip is part of Meta's Training and Inference Accelerator (MTIA) series and is being produced by Taiwan Semiconductor Manufacturing Co. (NYSE:TSM). Meta recently completed the tape-out phase, a critical milestone in chip development, though success is not guaranteed. A failed test would force the company to troubleshoot and repeat the tape-out, adding months of delays and significant costs.

Meta aims to integrate the chip into AI training systems by 2026. If successful, the initiative could weaken Nvidia's dominance in AI hardware and mark a shift in Big Tech's approach to AI infrastructure. This article first appeared on GuruFocus.
Yahoo
12-03-2025
- Business
- Yahoo
Meta's Secret AI Weapon: The Chip That Could Break Nvidia's Grip
Meta (NASDAQ:META) just made a bold move in the AI race: testing its first in-house training chip in a bid to cut reliance on Nvidia (NASDAQ:NVDA) and bring down sky-high AI infrastructure costs. The company has been burning cash on AI, projecting up to $65 billion in capital expenditures for 2025. Now, it's shifting gears, aiming to power its recommendation engines and generative AI with custom-built silicon. The chip is currently in small-scale deployment, and if all goes well, Meta could ramp up production fast. Partnering with Taiwan Semiconductor Manufacturing (NYSE:TSM), the tech giant is looking to optimize efficiency and take greater control of its AI stack.

Meta has played this game before: its previous custom inference chip didn't make the cut, forcing a multi-billion-dollar shopping spree on Nvidia GPUs. But things are different this time. The company's MTIA chip for inference has already proven successful, running recommendation systems across Facebook and Instagram. Now, it's tackling training, the real AI powerhouse. Meta's Chief Product Officer, Chris Cox, calls this a "walk, crawl, run" process, suggesting they're still in the early stages but moving in the right direction. If this new chip delivers, it could shake up the AI hardware landscape, challenging Nvidia's stranglehold on the market.

The stakes are high, and so is the skepticism. The AI industry is at a crossroads: scaling up large models with more GPUs isn't the only game in town anymore. Meta's custom chip strategy is a bet on efficiency, but will it pay off? Investors are watching closely. If Meta pulls this off, it could signal a seismic shift in AI computing. If not, it might just end up doubling down on Nvidia again. This article first appeared on GuruFocus.
Yahoo
11-03-2025
- Business
- Yahoo
Meta is reportedly testing its first in-house AI training chip
Breaking: A Big Tech company is ramping up its AI development. (Whaaat??) In this case, the protagonist of this now-familiar tale is Meta, which Reuters reports is testing its first in-house chip for AI training. The idea is to lower its gargantuan infrastructure costs and reduce its dependence on NVIDIA (a company that apparently brings out Mark Zuckerberg's "adult language" side). If all goes well, Meta hopes to use it for training by 2026.

Meta has reportedly kicked off a small-scale deployment of the dedicated accelerator chip, which is designed to specialize in AI tasks (and is, therefore, more power-efficient than general-purpose NVIDIA GPUs). The deployment began after the company completed its first "tape-out," the phase in silicon development where a complete design is sent for a manufacturing test run. The chip is part of the Meta Training and Inference Accelerator (MTIA) series, the company's family of custom in-house silicon focused on generative AI, recommendation systems and advanced research.

Last year, the company started using an MTIA chip for inference, a predictive process that happens behind the scenes in AI models. Meta began using the inference chip for its Facebook and Instagram news feed recommendation systems, and Reuters reports that it plans to start using the training silicon for that as well. The long-term plan for both chips is said to begin with recommendations and eventually expand to generative products like the Meta AI chatbot. The company has been one of NVIDIA's biggest customers since placing orders for billions of dollars' worth of GPUs in 2022. That was a pivot for Meta after it bailed on a previous in-house inference silicon that failed a small-scale test deployment, much like the one it's now running for the training chip.


Asharq Al-Awsat
11-03-2025
- Business
- Asharq Al-Awsat
Meta Begins Testing its First in-house AI Training Chip
Facebook owner Meta (META.O) is testing its first in-house chip for training artificial intelligence systems, a key milestone as it moves to design more of its own custom silicon and reduce reliance on external suppliers like Nvidia (NVDA.O), two sources told Reuters. The world's biggest social media company has begun a small deployment of the chip and plans to ramp up production for wide-scale use if the test goes well, the sources said.

The push to develop in-house chips is part of a long-term plan at Meta to bring down its mammoth infrastructure costs as the company places expensive bets on AI tools to drive growth. Meta, which also owns Instagram and WhatsApp, has forecast total 2025 expenses of $114 billion to $119 billion, including up to $65 billion in capital expenditure largely driven by spending on AI infrastructure.

One of the sources said Meta's new training chip is a dedicated accelerator, meaning it is designed to handle only AI-specific tasks. This can make it more power-efficient than the graphics processing units (GPUs) generally used for AI workloads. Meta is working with Taiwan-based chip manufacturer TSMC to produce the chip, this person said.

The test deployment began after Meta finished its first "tape-out" of the chip, a significant marker of success in silicon development work that involves sending an initial design through a chip factory, the other source said. A typical tape-out costs tens of millions of dollars and takes roughly three to six months to complete, with no guarantee the test will succeed. A failure would require Meta to diagnose the problem and repeat the tape-out step.

The chip is the latest in the company's Meta Training and Inference Accelerator (MTIA) series. The program has had a wobbly start for years and at one point scrapped a chip at a similar phase of development.
However, Meta last year started using an MTIA chip to perform inference, or the process involved in running an AI system as users interact with it, for the recommendation systems that determine which content shows up on Facebook and Instagram news feeds. Meta executives have said they want to start using their own chips by 2026 for training, or the compute-intensive process of feeding the AI system reams of data to "teach" it how to perform. As with the inference chip, the goal for the training chip is to start with recommendation systems and later use it for generative AI products like chatbot Meta AI, the executives said.

"We're working on how would we do training for recommender systems and then eventually how do we think about training and inference for gen AI," Meta's Chief Product Officer Chris Cox said at the Morgan Stanley technology, media and telecom conference last week. Cox described Meta's chip development efforts as "kind of a walk, crawl, run situation" so far, but said executives considered the first-generation inference chip for recommendations to be a "big success."

Meta previously pulled the plug on an in-house custom inference chip after it flopped in a small-scale test deployment similar to the one it is doing now for the training chip, instead reversing course and placing orders for billions of dollars worth of Nvidia GPUs in 2022. The social media company has remained one of Nvidia's biggest customers since then, amassing an arsenal of GPUs to train its models, including for recommendations and ads systems and its Llama foundation model series. The units also perform inference for the more than 3 billion people who use its apps each day. The value of those GPUs has been thrown into question this year as AI researchers increasingly express doubts about how much more progress can be made by continuing to "scale up" large language models by adding ever more data and computing power.
Those doubts were reinforced with the late-January launch of new low-cost models from Chinese startup DeepSeek, which optimize computational efficiency by relying more heavily on inference than most incumbent models. In a DeepSeek-induced global rout in AI stocks, Nvidia shares lost as much as a fifth of their value at one point. They subsequently regained most of that ground, with investors wagering the company's chips will remain the industry standard for training and inference, although they have dropped again on broader trade concerns.


Gulf Business
11-03-2025
- Business
- Gulf Business
Facebook owner Meta begins testing its first in-house AI training chip
Facebook owner Meta is testing its first in-house chip for training artificial intelligence systems, a key milestone as it moves to design more of its own custom silicon and reduce reliance on external suppliers like Nvidia, two sources told Reuters. The world's biggest social media company has begun a small deployment of the chip and plans to ramp up production for wide-scale use if the test goes well, the sources said.

The push to develop in-house chips is part of a long-term plan at Meta to bring down its mammoth infrastructure costs as the company places expensive bets on AI tools to drive growth. Meta, which also owns Instagram and WhatsApp, has forecast total 2025 expenses of $114bn to $119bn, including up to $65bn in capital expenditure largely driven by spending on AI infrastructure.

One of the sources said Meta's new training chip is a dedicated accelerator, meaning it is designed to handle only AI-specific tasks. This can make it more power-efficient than the graphics processing units (GPUs) generally used for AI workloads. Meta is working with Taiwan-based chip manufacturer TSMC to produce the chip, this person said.

The test deployment began after Meta finished its first 'tape-out' of the chip, a significant marker of success in silicon development work that involves sending an initial design through a chip factory, the other source said. A typical tape-out costs tens of millions of dollars and takes roughly three to six months to complete, with no guarantee the test will succeed. A failure would require Meta to diagnose the problem and repeat the tape-out step. Meta and TSMC declined to comment.

The chip is the latest in the company's Meta Training and Inference Accelerator (MTIA) series. The program has had a wobbly start for years and at one point scrapped a chip at a similar phase of development.
However, Meta last year started using an MTIA chip to perform inference, or the process involved in running an AI system as users interact with it, for the recommendation systems that determine which content shows up on Facebook and Instagram news feeds. Meta executives have said they want to start using their own chips by 2026 for training, or the compute-intensive process of feeding the AI system reams of data to 'teach' it how to perform. As with the inference chip, the goal for the training chip is to start with recommendation systems and later use it for generative AI products like chatbot Meta AI, the executives said.

'We're working on how would we do training for recommender systems and then eventually how do we think about training and inference for gen AI,' Meta's Chief Product Officer Chris Cox said at the Morgan Stanley technology, media and telecom conference last week. Cox described Meta's chip development efforts as 'kind of a walk, crawl, run situation' so far, but said executives considered the first-generation inference chip for recommendations to be a 'big success.'

Meta previously pulled the plug on an in-house custom inference chip after it flopped in a small-scale test deployment similar to the one it is doing now for the training chip, instead reversing course and placing orders for billions of dollars worth of Nvidia GPUs in 2022. The social media company has remained one of Nvidia's biggest customers since then, amassing an arsenal of GPUs to train its models, including for recommendations and ads systems and its Llama foundation model series. The units also perform inference for the more than 3 billion people who use its apps each day. The value of those GPUs has been thrown into question this year as AI researchers increasingly express doubts about how much more progress can be made by continuing to 'scale up' large language models by adding ever more data and computing power.
Those doubts were reinforced with the late-January launch of new low-cost models from Chinese startup DeepSeek, which optimise computational efficiency by relying more heavily on inference than most incumbent models. In a DeepSeek-induced global rout in AI stocks, Nvidia shares lost as much as a fifth of their value at one point. They subsequently regained most of that ground, with investors wagering the company's chips will remain the industry standard for training and inference, although they have dropped again on broader trade concerns.