Meta Begins Testing its First in-house AI Training Chip

Asharq Al-Awsat, 11 March 2025
Facebook owner Meta (META.O) is testing its first in-house chip for training artificial intelligence systems, a key milestone as it moves to design more of its own custom silicon and reduce reliance on external suppliers like Nvidia (NVDA.O), two sources told Reuters.
The world's biggest social media company has begun a small deployment of the chip and plans to ramp up production for wide-scale use if the test goes well, the sources said.
The push to develop in-house chips is part of a long-term plan at Meta to bring down its mammoth infrastructure costs as the company places expensive bets on AI tools to drive growth.
Meta, which also owns Instagram and WhatsApp, has forecast total 2025 expenses of $114 billion to $119 billion, including up to $65 billion in capital expenditure largely driven by spending on AI infrastructure.
One of the sources said Meta's new training chip is a dedicated accelerator, meaning it is designed to handle only AI-specific tasks. This can make it more power-efficient than the integrated graphics processing units (GPUs) generally used for AI workloads.
Meta is working with Taiwan-based chip manufacturer TSMC (2330.TW) to produce the chip, this person said.
The test deployment began after Meta finished its first "tape-out" of the chip, a significant marker of success in silicon development work that involves sending an initial design through a chip factory, the other source said.
A typical tape-out costs tens of millions of dollars and takes roughly three to six months to complete, with no guarantee the test will succeed. A failure would require Meta to diagnose the problem and repeat the tape-out step.
The chip is the latest in the company's Meta Training and Inference Accelerator (MTIA) series. The program has had a wobbly start over the years, and Meta at one point scrapped a chip at a similar phase of development.
However, Meta last year started using an MTIA chip to perform inference, or the process involved in running an AI system as users interact with it, for the recommendation systems that determine which content shows up on Facebook and Instagram news feeds.
Meta executives have said they want to start using their own chips by 2026 for training, or the compute-intensive process of feeding the AI system reams of data to "teach" it how to perform.
As with the inference chip, the goal for the training chip is to start with recommendation systems and later use it for generative AI products like chatbot Meta AI, the executives said.
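To make the distinction concrete, the sketch below is a purely illustrative toy recommendation model written in PyTorch; it has no connection to Meta's MTIA hardware or software, and every name and number in it is invented for illustration. The training loop repeatedly updates the model's parameters from example interactions, while inference only applies the finished model to score and rank items for a user.

# Purely illustrative sketch of the training/inference split for a toy
# embedding-based recommender (synthetic data; not Meta's MTIA stack).
import torch
import torch.nn as nn

NUM_USERS, NUM_ITEMS, DIM = 100, 500, 16

class TinyRecommender(nn.Module):
    def __init__(self):
        super().__init__()
        self.user_emb = nn.Embedding(NUM_USERS, DIM)
        self.item_emb = nn.Embedding(NUM_ITEMS, DIM)

    def forward(self, users, items):
        # Relevance score = dot product of user and item embeddings.
        return (self.user_emb(users) * self.item_emb(items)).sum(dim=-1)

model = TinyRecommender()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

# Training: feed batches of (user, item, clicked-or-not) examples and nudge
# the embeddings so predicted scores match the observed labels.
for step in range(200):
    users = torch.randint(0, NUM_USERS, (64,))
    items = torch.randint(0, NUM_ITEMS, (64,))
    labels = torch.randint(0, 2, (64,)).float()  # synthetic click labels
    loss = loss_fn(model(users, items), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Inference: no gradient updates, just score every item for one user and rank.
model.eval()
with torch.no_grad():
    user = torch.tensor([7]).expand(NUM_ITEMS)
    scores = model(user, torch.arange(NUM_ITEMS))
    print("Top 5 items for user 7:", scores.topk(5).indices.tolist())

Production recommendation models are vastly larger and run across fleets of accelerators, but the division of labor is the same: training adjusts the weights from data, and inference merely applies them each time a feed is ranked.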
"We're working on how would we do training for recommender systems and then eventually how do we think about training and inference for gen AI," Meta's Chief Product Officer Chris Cox said at the Morgan Stanley technology, media and telecom conference last week.
Cox described Meta's chip development efforts as "kind of a walk, crawl, run situation" so far, but said executives considered the first-generation inference chip for recommendations to be a "big success."
Meta previously pulled the plug on an in-house custom inference chip after it flopped in a small-scale test deployment similar to the one it is now running for the training chip, reversing course and placing orders for billions of dollars' worth of Nvidia GPUs in 2022.
The social media company has remained one of Nvidia's biggest customers since then, amassing an arsenal of GPUs to train its models, including for recommendations and ads systems and its Llama foundation model series. The units also perform inference for the more than 3 billion people who use its apps each day.
The value of those GPUs has been thrown into question this year as AI researchers increasingly express doubts about how much more progress can be made by continuing to "scale up" large language models by adding ever more data and computing power.
Those doubts were reinforced with the late-January launch of new low-cost models from Chinese startup DeepSeek, which optimize computational efficiency by relying more heavily on inference than most incumbent models.
In a DeepSeek-induced global rout in AI stocks, Nvidia shares lost as much as a fifth of their value at one point. They subsequently regained most of that ground, with investors wagering the company's chips will remain the industry standard for training and inference, although they have dropped again on broader trade concerns.

Related Articles

Saudi Arabia's Al-Hilal Land Uruguay Star Darwin Nunez from Liverpool
Leaders, 3 days ago

In a significant transfer move, Saudi Pro League giants Al-Hilal have completed the signing of Uruguayan striker Darwin Nunez from Premier League champions Liverpool, both clubs announced on Saturday. The Riyadh-based side confirmed that the 26-year-old will join their ranks on a three-year contract as Al-Hilal continue to build a star-studded squad.
Nunez joined Liverpool three years ago from Benfica for 75 million euros, making him one of the most expensive signings in the club's history. Despite the hefty price tag, he scored 40 goals in 143 appearances and struggled to hold down a place under managers Jurgen Klopp and Arne Slot. As his playing time decreased, the club began looking for a move for him.
"A blue tiger in Al-Hilal's den 🐅💙," the club posted on X (@Alhilal_FC, August 9, 2025).
Al-Hilal Continue Their Big-Name Signings
Nunez's transfer adds to Al-Hilal's growing reputation as a destination for top international talent. The club recently surprised many by reaching the quarter-finals of the Club World Cup, defeating Manchester City along the way. Under coach Simone Inzaghi, the team features high-profile players such as Portuguese internationals Ruben Neves and Joao Cancelo, Senegal captain Kalidou Koulibaly and former Fulham striker Aleksandar Mitrovic. These signings reflect Al-Hilal's ambition to compete at the highest levels.
A New Chapter for Nunez
Nunez's move to Saudi Arabia opens a new chapter in his career, giving the striker an opportunity to reignite his goal-scoring form alongside top-quality teammates in competitive fixtures. Fans and analysts will be watching closely to see how he adapts to the Saudi Pro League. Al-Hilal's strategic signings, including Nunez, underline their intent to dominate domestic and continental competitions and their commitment to becoming one of Asia's leading football clubs.

Meta says working to thwart WhatsApp scammers
Arab News, 6 days ago

SAN FRANCISCO: Meta on Tuesday said it shut nearly seven million WhatsApp accounts linked to scammers in the first half of this year and is ramping up safeguards against such schemes. 'Our team identified the accounts and disabled them before the criminal organizations that created them could use them,' WhatsApp external affairs director Clair Deevy said.
Often run by organized gangs, the scams range from bogus cryptocurrency investments to get-rich-quick pyramid schemes, WhatsApp executives said in a briefing. 'There is always a catch and it should be a red flag for everyone: you have to pay upfront to get promised returns or earnings,' Meta-owned WhatsApp said in a blog post. WhatsApp detected and banned more than 6.8 million accounts linked to scam centers, most of them in Southeast Asia, according to Meta. WhatsApp and Meta worked with OpenAI to disrupt a scam traced to Cambodia that used ChatGPT to generate text messages containing a link to a WhatsApp chat to hook victims, according to the tech firms.
Meta on Tuesday began prompting WhatsApp users to be wary when added to unfamiliar chat groups by people they don't know. New 'safety overviews' provide information about the group and tips on spotting scams, along with the option of making a quick exit. 'We've all been there: someone you don't know attempting to message you, or add you to a group chat, promising low-risk investment opportunities or easy money, or saying you have an unpaid bill that's overdue,' Meta said in a blog post. 'The reality is, these are often scammers trying to prey on people's kindness, trust and willingness to help — or, their fears that they could be in trouble if they don't send money fast.'

OpenAI Releases Open-Weight Reasoning Models Optimized for Running on Laptops
Asharq Al-Awsat, 7 days ago

OpenAI said on Tuesday it has released two open-weight language models that excel at advanced reasoning and are optimized to run on laptops, with performance levels similar to its smaller proprietary reasoning models. An open-weight language model's trained parameters, or weights, are publicly accessible, and developers can use them to analyze and fine-tune the model for specific tasks without needing the original training data. "One of the things that is unique about open models is that people can run them locally. People can run them behind their own firewall, on their own infrastructure," OpenAI co-founder Greg Brockman said in a press briefing. Open-weight language models differ from open-source models, which provide access to the complete source code, training data and methodologies.
The landscape of open-weight and open-source AI models has been highly contested this year. For a time, Meta's Llama models were considered the best, but that changed earlier this year when China's DeepSeek released a powerful and cost-effective reasoning model while Meta struggled to deliver Llama 4.
The two new models are the first open models OpenAI has released since GPT-2 in 2019. The larger model, gpt-oss-120b, can run on a single GPU, and the second, gpt-oss-20b, is small enough to run directly on a personal computer, the company said. OpenAI said the models have similar performance to its proprietary reasoning models o3-mini and o4-mini, and especially excel at coding, competition math and health-related queries. The models were trained on a text-only dataset that, in addition to general knowledge, focused on science, math and coding. OpenAI did not release benchmarks comparing the open-weight models to competitors' models such as DeepSeek-R1.
Microsoft-backed OpenAI, valued at $300 billion, is currently raising up to $40 billion in a new funding round led by SoftBank Group.
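As a rough illustration of what "running locally" can look like in practice (a sketch only: the Hugging Face repo id, hardware assumptions and generation settings below are assumptions, not details confirmed in the article), a developer could load an open-weight checkpoint with the open-source transformers library and generate text on their own machine:

# Illustrative sketch of loading an open-weight model locally with Hugging Face
# transformers; "openai/gpt-oss-20b" is an assumed repo id, and a capable GPU
# or plenty of RAM is still needed in practice.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"  # assumed repo id for the smaller model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain the difference between open-weight and open-source models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because the weights are downloaded to the user's own machine, a call like this sends no data to an external API, which is the point Brockman makes about running models behind one's own firewall.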
