Google launches new Ironwood chip to speed AI applications


Zawya — 09-04-2025

Alphabet on Wednesday unveiled its seventh-generation artificial intelligence chip, named Ironwood, which the company said is designed to speed the performance of AI applications.
The Ironwood processor is geared toward the type of data crunching needed when users query software such as OpenAI's ChatGPT. Known in the tech industry as "inference" computing, the chips perform rapid calculations to render answers in a chatbot or generate other types of responses.
The search giant's multi-billion dollar, roughly decade-long effort represents one of the few viable alternative chips to Nvidia's powerful AI processors.
Google's tensor processing units (TPUs) can only be used by the company's own engineers or through its cloud service and have given its internal AI effort an edge over some rivals.
For at least one generation, Google has split its TPU family into two lines: one tuned for building large AI models from scratch, and a second that strips out some of the model-building features to cut the cost of running AI applications.
The Ironwood chip is designed for running AI applications, or inference, and can work in groups of as many as 9,216 chips, said Amin Vahdat, a Google vice president.
The new chip, unveiled at a cloud conference, brings together functions from the earlier split designs and increases the available memory, making it better suited for serving AI applications.
"It's just that the relative importance of inference is going up significantly," Vahdat said.
The Ironwood chips boast double the performance per unit of energy compared with the Trillium chip Google announced last year, Vahdat said. The company builds and deploys its Gemini AI models on its own chips.
The company did not disclose which chip manufacturer is producing the Google design.
(Reporting by Max A. Cherney in San Francisco; Editing by Sonali Paul)


Related Articles

OpenAI to continue working with Scale AI after Meta deal

Zawya — 7 hours ago

OpenAI plans to continue working with Scale AI after rival Meta on Friday agreed to take a 49% stake in the artificial intelligence startup for $14.8 billion, OpenAI's CFO Sarah Friar told the VivaTech conference in Paris. Scale AI provides vast amounts of labelled or curated training data, which is crucial for developing sophisticated tools such as OpenAI's ChatGPT. "We don't want to ice the ecosystem because acquisitions are going to happen," she said. "And if we ice each other out, I think we're actually going to slow the pace of innovation." (Reporting by Supantha Mukherjee in Paris, Editing by Louise Heavens)

AMD takes steps to 'Advance AI'

Tahawul Tech — 12 hours ago

Lisa Su, the CEO of AMD, recently showcased a new AI server planned to launch in 2026 with the goal of challenging Nvidia's flagship offerings. In addition, OpenAI CEO Sam Altman announced his company's intent to adopt AMD's latest chips.

Su took the stage at a developer conference in San Jose, California, called 'Advancing AI' to discuss the MI350 series and MI400 series AI chips, which she said would compete with Nvidia's Blackwell line of processors. The MI400 series will be the basis of a new server called 'Helios' that AMD plans to release next year. The move comes as competition between Nvidia and other AI chip firms has shifted away from selling individual chips and toward selling servers packed with scores or even hundreds of processors, woven together with networking chips from the same company.

During its keynote presentation, AMD said that many aspects of the Helios servers, such as the networking standards, would be made openly available and shared with competitors such as Intel. The move was a direct swipe at market leader Nvidia, which uses a proprietary technology called NVLink to string together its chips but has recently started to license that technology as pressure mounts from rivals. 'The future of AI is not going to be built by any one company or in a closed ecosystem. It's going to be shaped by open collaboration across the industry,' Su said.

Su was joined onstage by OpenAI's Sam Altman. The ChatGPT creator is working with AMD on the firm's MI450 chips to improve their design for AI work. 'Our infrastructure ramp-up over the last year, and what we're looking at over the next year, have just been a crazy, crazy thing to watch,' Altman said.

During the keynote, executives from Elon Musk-owned xAI, Meta Platforms and Oracle also took to the stage to discuss their respective uses of AMD processors.

Source: Reuters
