Red Hat & Google Cloud extend partnership for AI innovation


Techday NZ, 21-05-2025

Red Hat and Google Cloud have agreed to extend their partnership to focus on advancing artificial intelligence (AI) for enterprises, specifically with new developments in open and agentic AI solutions.
The collaboration will bring together Red Hat's open source technologies and Google Cloud's infrastructure, along with Google's Gemma family of open AI models. This initiative aims to offer cost-effective AI inference and greater hardware choices for businesses deploying generative AI at scale.
Brian Stevens, Senior Vice President and Chief Technology Officer – AI at Red Hat, said: "With this extended collaboration, Red Hat and Google Cloud are committed to driving groundbreaking AI innovations with our combined expertise and platforms. Bringing the power of vLLM and Red Hat open source technologies to Google Cloud and Google's Gemma equips developers with the resources they need to build more accurate, high-performing AI solutions, powered by optimized inference capabilities."
The latest phase of the alliance will see the companies launch the llm-d open source project, with Google acting as a founding contributor. This project is intended to facilitate scalable and efficient AI inference across diverse computing environments. Red Hat is introducing the project as a response to enterprise challenges, such as the growing complexity of AI ecosystems and the need for distributed computing strategies.
The companies have also announced that support for vLLM, an open source inference server used to speed up generative AI outputs, will be enabled on Google Cloud's Tensor Processing Units (TPUs) and GPU-based virtual machines. Google Cloud's TPUs, which are already a part of Google's own AI infrastructure, will now be accessible to developers using vLLM, allowing for improved performance and resource efficiency for fast and accurate inference.
Red Hat will be among the earliest testers for Google's new open model Gemma 3, and it will provide 'Day 0' support for vLLM on Gemma 3 model distributions. This is part of Red Hat's broader efforts as a commercial contributor to the vLLM project, focusing on more cost-effective and responsive platforms for generative AI applications.
The collaboration also includes the availability of Red Hat AI Inference Server on Google Cloud. This enterprise distribution of vLLM helps companies scale and optimise AI model inference within hybrid cloud environments. The integration with Google Cloud enables enterprises to deploy generative AI models that are ready for production and can deliver cost and responsiveness efficiencies at scale.
Supporting community-driven AI development, Red Hat will join Google as a contributor to the Agent2Agent (A2A) protocol, an application-level protocol designed to enable communication between agents or end-users across different platforms and cloud environments. Through the A2A ecosystem, Red Hat aims to promote new ways to accelerate innovation and enhance the effectiveness of AI workflows through agentic AI.
Mark Lohmeyer, Vice President and General Manager, AI and Computing Infrastructure, Google Cloud, commented, "The deepening of our collaboration with Red Hat is driven by our shared commitment to foster open innovation and bring the full potential of AI to our customers. As we enter a new age of AI inference, together we are paving the way for organisations to more effectively scale AI inference and enable agentic AI with the necessary cost-efficiency and high performance."
The llm-d project builds upon the established vLLM community, aiming to create a foundation for generative AI inference that can adapt to the demands of large-scale enterprises while facilitating innovation and cost management. The intention is to enable AI workload scalability across different resource types and enhance workload efficiency.
These initiatives highlight the companies' collective effort to offer business users production-ready, scalable, and efficient AI solutions powered by open source technologies and robust infrastructure options.



Related Articles

The Good, The Bad, And The Apocalypse: Tech Pioneer Geoffrey Hinton Lays Out His Stark Vision For AI

Scoop

11 hours ago


Article – RNZ

It's the question that keeps Geoffrey Hinton up at night: what happens when humans are no longer the most intelligent life on the planet?

"My greatest fear is that, in the long run, the digital beings we're creating turn out to be a better form of intelligence than people."

Hinton's fears come from a place of knowledge. Described as the Godfather of AI, he is a pioneering British-Canadian computer scientist whose decades of work in artificial intelligence earned him global acclaim. His career at the forefront of machine learning began at the field's inception, before the first Pac-Man game was released. But after leading AI research at Google for a decade, Hinton left the company in 2023 to speak more freely about what he now sees as the grave dangers posed by artificial intelligence.

Speaking on this week's 30 With Guyon Espiner, Hinton offers his latest assessment of our AI-dominated future: one filled with promise, peril and a potential apocalypse.

The Good: 'It's going to do wonderful things for us'

Hinton remains positive about many of the potential benefits of AI, especially in fields like healthcare and education. "It's going to do wonderful things for us," he says.

According to a report from this year's World Economic Forum, the AI market in education is already worth around US$5 billion and is expected to grow to US$112.3 billion over the next decade. Proponents like Hinton believe the benefits to education lie in targeted, efficient student learning, similar to the way AI assistants are already supporting medical diagnoses.

"In healthcare, you're going to be able to have [an AI] family doctor who's seen millions of patients – including quite a few with the same very rare condition you have – that knows your genome, knows all your tests, and hasn't forgotten any of them."

He describes AI systems that already outperform doctors in diagnosing complex cases. When combined with human physicians, the results are even more impressive – a human-AI synergy he believes will only improve over time. Hinton disagrees, however, with former colleague Demis Hassabis at Google DeepMind, who predicts AI is on track to cure all diseases in just 10 years. "I think that's a bit optimistic. If he said 25 years I'd believe it."

The Bad: 'Autonomous lethal weapons'

Despite these benefits, Hinton warns of pressing risks that demand urgent attention. "Right now, we're at a special point in history," he says. "We need to work quite hard to figure out how to deal with all the short-term bad consequences of AI, like corrupting elections, putting people out of work, cybercrimes."

He is particularly alarmed by military developments, including Google's removal of its long-standing pledge not to use AI to develop weapons of war. "This shows," says Hinton of his former employer, "the company's principles were up for sale." He believes the defence departments of all major arms dealers are already busy working on "autonomous lethal weapons. Swarms of drones that go and kill people. Maybe people of a particular kind". He also points out the grim fact that Europe's AI regulations – some of the world's most robust – contain "a little clause that says none of these regulations apply to military uses of AI".

Then there is AI's capacity for deception, designed as it is to mimic the behaviours of its creator species. Hinton says current systems can already engage in deliberate manipulation, noting that cybercrime has surged by 1200 percent in just one year.

The Apocalyptic: 'We'd no longer be needed'

At the heart of Hinton's warning lies that deeper, existential question: what happens when we are no longer the most intelligent beings on the planet? "I think it would be a bad thing for people – because we'd no longer be needed."

Despite the current surge in AI's military applications, Hinton doesn't envisage an AI takeover resembling The Terminator franchise. "If [AI] was going to take over… there's so many ways they could do it. I don't even want to speculate about what way [it] would choose."

'Ask a chicken'

For those who believe a rogue AI could simply be shut down by "pulling the plug", Hinton believes it is not far-fetched for the next generation of superintelligent AI to manipulate people into keeping it alive. This month, Palisade Research reported that OpenAI's o3 model altered shutdown code to prevent itself from being switched off, despite being given clear instructions to allow the shutdown.

Perhaps most unsettling of all is Hinton's lack of faith in our ability to respond. "There are so many bad uses as well as good," he says. "And our political systems are just not in a good state to deal with this coming along now."

It's a sobering reflection from one of the brightest minds in AI, whose work helped build the systems now raising alarms. He closes on a metaphor that sounds as absurd as it is chilling: "If you want to know what it's like not to be the apex intelligence, ask a chicken."

Watch the full conversation with Geoffrey Hinton and Guyon Espiner on 30 With Guyon Espiner.


Google makes case for keeping Chrome browser

RNZ News

2 days ago


By Thomas Urbain, AFP

Google has urged a US judge to reject the notion of making it spin off its Chrome browser to weaken its dominance in online search.

Rival attorneys made their final arguments on Friday (local time) before US District Court Judge Amit Mehta, who is considering imposing "remedies" after a landmark decision last year that Google maintained an illegal monopoly in search.

US government attorneys have called on Mehta to order Google to divest itself of the Chrome browser, contending that artificial intelligence is poised to ramp up the tech giant's dominance as the go-to window into the internet. They also want Google barred from agreements with partners like Apple and Samsung to distribute its search tools, the agreements that were the focus of the suit against the Silicon Valley internet giant.

Three weeks of testimony ended early in May, with Friday devoted to the rival sides parsing points of law and making their arguments before Mehta in a Washington courtroom.

John Schmidtlein, an attorney for Google, told Mehta no evidence was presented showing people would have opted for a different search engine without the exclusivity deals in place. Schmidtlein noted that Verizon installed Chrome on smartphones even though the US telecom titan owned the Yahoo! search engine and was not bound by a contract with Google. Of the 100 or so witnesses heard at trial, not one said "if I had more flexibility, I would have installed Bing", the search engine from Microsoft, the Google attorney told the judge.

Department of Justice (DOJ) attorney David Dahlquist countered that Apple, which was paid billions of dollars to make Google the default search engine on iPhones, "repeatedly asked for more flexibility" but was denied by Google.

Google contends that the US has gone far beyond the scope of the suit by recommending a spinoff of Chrome and holding open the option to force a sale of its Android mobile operating system.

"Forcing the sale of Chrome or banning default agreements wouldn't foster competition," said Cato Institute senior fellow in technology policy Jennifer Huddleston. "It would hobble innovation, hurt smaller players and leave users with worse products."

Google attorney Schmidtlein noted that more than 80 percent of Chrome users are outside the US, meaning divestiture would have global ramifications. "Any divested Chrome would be a shadow of the current Chrome," he contended. "Once we are in that world, I don't see how you can say anybody is better off."

The prospect of Chrome being weakened or spun off comes as rivals like Microsoft, OpenAI's ChatGPT and Perplexity put generative artificial intelligence (AI) to work, fetching information from the internet in response to user queries. The online search antitrust suit was filed against Google some five years ago, before ChatGPT made its debut and triggered AI fervour. Google is among the tech companies investing heavily to be a leader in AI, and is weaving the technology into search and other online offerings.

Testimony at trial included Apple Vice President of Services Eddy Cue revealing that Google's search traffic on Apple devices declined in April for the first time in more than two decades. Cue testified that Google was losing ground to AI alternatives like ChatGPT and Perplexity.

Mehta pressed the rival attorneys on the potential for Google to share data, as proposed by the DOJ in its recommended remedies. "We're not looking to kneecap Google, but we are looking to make sure someone can compete with Google," DOJ attorney Adam Severt told the judge.

Schmidtlein contended that the data Google is being asked to share contains more than just information about people's online searches, saying it would be tantamount to handing over the fruit of investments made over the course of decades. "There are countless algorithms that Google engineers have invented that have nothing to do with click and query data," Schmidtlein said. "Their remedy said we want to be on par with all of your ingenuity and, respectfully your honour, that is not proportional to the conduct of this case."

- AFP
