
Lenovo unveils AI-optimised storage for faster enterprise ROI

Techday NZ, 28 April 2025

Lenovo has announced a significant expansion of its data storage portfolio with the release of new AI-optimised storage solutions aimed at supporting enterprise AI and data modernisation strategies.
The new portfolio consists of 21 ThinkSystem and ThinkAgile models and is designed to assist organisations in addressing challenges related to AI deployment, virtualisation, and sustainability. The updates encompass storage arrays, software-defined infrastructure, and new solutions intended to deliver the improved efficiency, performance, and scalability required by modern data centres.
"The new Lenovo Data Storage Solutions help businesses harness AI's transformative power with a data-driven strategy that ensures scalability, interoperability, and tangible business outcomes powered by trusted infrastructure," said Scott Tease, Vice President and General Manager of Infrastructure Solutions Product Group at Lenovo. "The new solutions help customers achieve faster time to value no matter where they are on their IT modernisation journey with turnkey AI solutions that mitigate risk and simplify deployment."
The range includes the introduction of AI Starter Kits for the Lenovo Hybrid AI Platform, pre-configured options designed to accelerate enterprise adoption of AI, notably for retrieval-augmented generation (RAG) workflows. Also launched is the ThinkAgile HX Series GPT-in-a-Box solution, equipped with Lenovo Neptune Liquid Cooling technology. Lenovo states that this is the industry's first liquid-cooled hyperconverged infrastructure (HCI) appliance, capable of providing up to 25% energy savings over previous generations.
"This refresh is an imperative step in supporting local organisations achieve their AI ambitions. We know AI investment in ANZ is accelerating, growing fourfold in 2025, yet our CIO Playbook highlights that ROI remains a key barrier to AI adoption" said Sumir Bhatia, President, Asia Pacific Infrastructure Solutions Group, Lenovo. "We are assisting in speeding up ROI, particularly with our industry-first liquid-cooled HCI appliance, which will yield up to 25% energy savings over previous generations."
Among the products launching as part of this portfolio refresh is the Lenovo ThinkAgile SDI V4 Series, described as a full-stack, turnkey solution intended to streamline IT infrastructure and accelerate computing for data-driven workloads, including large language model (LLM) inferencing. The ThinkSystem Storage Arrays are also part of the release; Lenovo claims up to three times faster performance with lower power consumption, translating to up to 97% energy savings and a 99% improvement in storage density over legacy hardware based on 10K HDDs.
The converged ThinkAgile and ThinkSystem hybrid cloud and virtualisation solutions now allow for independent scaling of compute and storage capabilities, with Lenovo reporting potential software licensing cost reductions of up to 40% by enabling additional storage without incurring extra licensing charges.
The ThinkAgile HX Series GPT-in-a-Box offerings, which use Neptune Liquid Cooling, are engineered to allow turnkey AI inferencing suitable for distributed applications across edge and cloud environments. The solution is positioned to reduce energy usage, which Lenovo states will increase return on investment and allow organisations to achieve data-driven outcomes faster.
The AI Starter Kits for the Lenovo Hybrid AI Platform come as pre-configured packages combining compute, storage, GPUs, and networking components. These kits are intended to be scalable and adaptable to various organisational needs and to minimise the complexity involved with deploying enterprise AI services. Storage arrays configured with the AI Starter Kits support unified file, block, and object storage, leveraging SSD flash technology to accelerate time to insight from enterprise data.
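For readers unfamiliar with the pattern these kits target, a retrieval-augmented generation workflow searches stored documents for context relevant to a query and folds that context into the prompt sent to a language model, which is why fast, unified storage matters for time to insight. The snippet below is a minimal, self-contained Python sketch of that retrieve-then-generate flow; the toy bag-of-words retrieval and all names are illustrative assumptions, not Lenovo software.

```python
# Minimal, self-contained illustration of a retrieval-augmented generation (RAG) flow.
# Retrieval here is a toy bag-of-words cosine similarity; a production deployment would
# use an embedding model and a vector index backed by fast flash storage.

import math
from collections import Counter

documents = [
    "Warranty claims must be filed within 30 days of delivery.",
    "The storage array supports unified file, block, and object storage.",
    "Liquid cooling reduces data centre energy consumption.",
]

def vectorise(text: str) -> Counter:
    """Turn text into a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = vectorise(query)
    ranked = sorted(documents, key=lambda d: cosine(q, vectorise(d)), reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    """Assemble a prompt from retrieved context; an LLM call would consume this prompt."""
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return prompt  # in a real pipeline this prompt is sent to the inference server

print(answer("What storage types does the array support?"))
```

In production, the bag-of-words search would be replaced by an embedding model and a vector index, with the document corpus and index living on the flash-backed storage described above.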
Bringing computation capabilities closer to the source of enterprise data is a growing requirement for AI model training and inference tasks. Lenovo's new ThinkAgile SDI V4 Series and ThinkSystem Storage Arrays are positioned as full-stack AI-ready infrastructure options for organisations at the start of their AI journey. Additionally, the new hybrid cloud and virtualisation solutions are designed to offer flexibility and operational simplicity.
For virtualisation, the Lenovo ThinkAgile Converged Solution for VMware brings together features from the ThinkAgile VX Series and ThinkSystem DG Series storage arrays. The aim is to provide a unified hybrid cloud platform capable of supporting diverse storage workloads, facilitated by integrated lifecycle management and operational features.
The new ThinkAgile V4 series now includes what Lenovo describes as the industry's first liquid-cooled HCI appliance, specifically targeting the efficiency challenges of high-powered AI workloads. The company asserts up to 25% energy savings when compared with the previous generation, and this hardware is positioned as a repeatable solution for swift AI integration.
The security aspect of Lenovo's storage solutions includes features like Premier Enhanced Storage Support, which provides expert support and rapid response for IT teams. Additionally, the ThinkSystem DG and DM storage arrays have new AI-powered autonomous ransomware protection that employs machine learning to proactively identify and mitigate cyber threats.
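Lenovo does not detail how the DG and DM detection works, but a common ingredient in autonomous ransomware protection is monitoring write streams for sustained bursts of high-entropy, encrypted-looking data. The sketch below is a generic, hypothetical Python illustration of that idea only; the window size and threshold are arbitrary assumptions, and this is not the shipped implementation.

```python
# Generic illustration of entropy-based write monitoring, a common building block of
# ransomware detection. This is NOT Lenovo's implementation; thresholds are arbitrary.

import math
import os
from collections import Counter, deque

def shannon_entropy(block: bytes) -> float:
    """Bits of entropy per byte; encrypted data is close to 8.0."""
    if not block:
        return 0.0
    counts = Counter(block)
    total = len(block)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

class WriteMonitor:
    """Flags a workload when recent writes look uniformly high-entropy."""
    def __init__(self, window: int = 100, threshold: float = 7.5):
        self.recent = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, block: bytes) -> bool:
        self.recent.append(shannon_entropy(block))
        average = sum(self.recent) / len(self.recent)
        return len(self.recent) == self.recent.maxlen and average > self.threshold

monitor = WriteMonitor(window=10, threshold=7.5)
suspicious = False
for _ in range(10):
    suspicious = monitor.observe(os.urandom(4096))  # random bytes mimic encrypted writes
print("alert:", suspicious)
```

A real system would combine signals like this with file-rename rates, snapshot comparisons, and learned per-workload baselines before triggering mitigation such as taking an immutable snapshot.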
Lenovo Data Storage Solutions also include the company's XClarity systems management software, offering comprehensive security, management, encryption, and compliance features for storage management across enterprise environments.


Related Articles

The Good, The Bad, And The Apocalypse: Tech Pioneer Geoffrey Hinton Lays Out His Stark Vision For AI

Scoop

5 hours ago



Article – RNZ

It's the question that keeps Geoffrey Hinton up at night: What happens when humans are no longer the most intelligent life on the planet? "My greatest fear is that, in the long run, the digital beings we're creating turn out to be a better form of intelligence than people."

Hinton's fears come from a place of knowledge. Described as the Godfather of AI, he is a pioneering British-Canadian computer scientist whose decades of work in artificial intelligence earned him global acclaim. His career at the forefront of machine learning began at its inception, before the first Pac-Man game was released. But after leading AI research at Google for a decade, Hinton left the company in 2023 to speak more freely about what he now sees as the grave dangers posed by artificial intelligence. Talking on this week's 30 With Guyon Espiner, Hinton offers his latest assessment of our AI-dominated future: one filled with promise, peril, and a potential apocalypse.

The Good: 'It's going to do wonderful things for us'

Hinton remains positive about many of the potential benefits of AI, especially in fields like healthcare and education. "It's going to do wonderful things for us," he says. According to a report from this year's World Economic Forum, the AI market in education is already worth around US$5 billion and is expected to grow to US$112.3 billion in the next decade. Proponents like Hinton believe the benefits to education lie in targeted efficiency in student learning, similar to the way AI is already assisting medical diagnoses. "In healthcare, you're going to be able to have [an AI] family doctor who's seen millions of patients – including quite a few with the same very rare condition you have – that knows your genome, knows all your tests, and hasn't forgotten any of them." He describes AI systems that already outperform doctors in diagnosing complex cases. When combined with human physicians, the results are even more impressive, a human-AI synergy he believes will only improve over time. Hinton disagrees with former colleague Demis Hassabis at Google DeepMind, who predicts AI is on track to cure all diseases in just 10 years. "I think that's a bit optimistic. If he said 25 years I'd believe it."

The Bad: 'Autonomous lethal weapons'

Despite these benefits, Hinton warns of pressing risks that demand urgent attention. "Right now, we're at a special point in history," he says. "We need to work quite hard to figure out how to deal with all the short-term bad consequences of AI, like corrupting elections, putting people out of work, cybercrimes." He is particularly alarmed by military developments, including Google's removal of its long-standing pledge not to use AI to develop weapons of war. "This shows," says Hinton of his former employer, "the company's principles were up for sale." He believes the defence departments of all major arms dealers are already busy working on "autonomous lethal weapons. Swarms of drones that go and kill people. Maybe people of a particular kind". He also points out the grim fact that Europe's AI regulations, some of the world's most robust, contain "a little clause that says none of these regulations apply to military uses of AI". Then there is AI's capacity for deception, designed as it is to mimic the behaviours of its creator species. Hinton says current systems can already engage in deliberate manipulation, noting that cybercrime has surged by 1,200 percent in just one year.

The Apocalyptic: 'We'd no longer be needed'

At the heart of Hinton's warning lies that deeper, existential question: what happens when we are no longer the most intelligent beings on the planet? "I think it would be a bad thing for people – because we'd no longer be needed." Despite the current surge in AI's military applications, Hinton doesn't envisage an AI takeover resembling The Terminator franchise. "If [AI] was going to take over… there's so many ways they could do it. I don't even want to speculate about what way [it] would choose."

'Ask a chicken'

For those who believe a rogue AI can simply be shut down by "pulling the plug", Hinton believes it's not far-fetched for the next generation of superintelligent AI to manipulate people into keeping it alive. This month, Palisade Research reported that OpenAI's o3 model altered shutdown code to prevent itself from being switched off, despite being given clear instructions by the research team to allow the shutdown. Perhaps most unsettling of all is Hinton's lack of faith in our ability to respond. "There are so many bad uses as well as good," he says. "And our political systems are just not in a good state to deal with this coming along now." It's a sobering reflection from one of the brightest minds in AI, whose work helped build the systems now raising alarms. He closes on a metaphor that sounds as absurd as it does chilling: "If you want to know what it's like not to be the apex intelligence, ask a chicken." Watch the full conversation with Geoffrey Hinton and Guyon Espiner on 30 With Guyon Espiner.


Why it's time Australian asset management leaders explore the true value of AI

Techday NZ

5 hours ago



Modern asset management across industries like mining, utilities, public infrastructure, road and rail represents a massive exercise in managing and analysing data. This data-driven domain is a critical function within these industries, albeit one that has seen less digital transformation than other segments in recent years. Because of this, there is increasing pressure on asset-centric business leaders to rush into an AI project as a means of quickly generating efficiencies in their operations. However, whether organisations are curious, excited, motivated, or feeling like they're under boardroom duress to deploy an AI solution into their asset management workflows, strategic considerations about how best to integrate these tools must come first. Digital transformation is not a small undertaking, nor should it be about technology in and of itself, especially for companies managing critical infrastructure and assets potentially worth billions. The stakes are high, which is why one industry leader is calling on organisations to zoom out and look at the big picture.

Hype cycles, distractions and trust

If you ask Scott McGowan, CEO of data-led asset management solutions firm COSOL, what he thinks of AI, he'll tell you it holds a lot of potential for the asset management sector. However, he'll also tell you not to run out and invest in an AI solution just because your board thinks you should. Observing the proliferation of AI adoption hype across industry events, social media and the press, McGowan is concerned that market pressures might push organisations in asset-centric industries to try to run with AI before they can walk. "Everyone expects everyone to build an AI division in-house, but if you're a mining or infrastructure company, the reality is that's not your core business," he says. "Your core business is to produce iron, or coal, or copper, or to run trains and provide services to the public." Without reason or a roadmap, the discourse around AI integration often represents an unwelcome distraction from the primary mission of asset-centric industries. Adoption pressures are being met with a degree of hesitation from business leaders, McGowan adds, due to the somewhat untested nature of AI in an asset management context and an absence of established trust. "Hesitation toward AI probably comes from multiple aspects, but they all come back to one underlying principle, which is around trust," he says. "Trust in security, trust in algorithms, and trust in data. Trust is built on experience and understanding, and I think the challenge we have with AI as it stands today is that the algorithms are designed in such a way that there's little to no transparency as to the decision-making methodology those algorithms have. Data foundations must also be trusted, and solutions that act on that data must be trusted to do so correctly. Organisations need to trust that any solution they adopt is repeatable, robust and resilient." Exploring a path forward for AI in asset-centric industries will be the subject of a series of roundtables that COSOL, in partnership with IBM, will run across three Australian cities in June. While the sessions are closed, select insights are set to be released highlighting the discussions, pain points and strategies business leaders face.

Ahead of the events, COSOL's McGowan touched on some of the messaging set to be covered during the campaign, including what it might look like for an organisation to take its first steps with AI before building maturity in considered stages.

First steps: Learning to walk with AI

AI transformation is a journey, not a big-bang project. Organisations that approach AI maturity with a walk-jog-run mindset grounded in the true needs of the business are much more likely to find value in implementation than those that chase shiny solutions. In the first instance, the walk phase, McGowan urges that AI has to be solving an existing problem. "There are a lot of AI initiatives and a lot of technologies looking for a problem; I think that's the wrong way to go about it," he says. "The walk concept is solving a discrete problem that already lies within the technology space. If you think about self-solving and trust in data, using AI to resolve, infer, continually assess or optimise data quality is actually a really interesting place to start. Through this approach AI starts to solve its own trust problem by addressing one of the initial play areas. The opportunity around master data quality and the ability to link physical equipment with digital representations for accuracy is, in my view, the first part of the AI journey in our context." David Small, Principal Sales Leader for IBM Automation Software in Australia and New Zealand, echoes the need for organisational data layers to be kept up to date and as accurate as possible to enable smarter asset management. "Without quality data, organisations will struggle to achieve key benefits as they move along the asset maturity curve," he says. "Data is the building block that asset management systems need and rely on to support the company's chosen asset management strategies. The direction that IBM has taken is to embed AI capability within the Maximo Application Suite, versus saying to organisations, go and build your own AI capability. These AI enhancements are integrated AI experiences that deliver immediate value, increase productivity and seamlessly scale."
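To make the "walk" phase McGowan and Small describe more concrete, the sketch below shows the kind of simple, rule-based master data quality check an organisation might run over its asset register before layering on machine learning: flagging records that are incomplete, duplicated, or whose link to the physical equipment hierarchy is broken. The record fields and rules are hypothetical and are not drawn from COSOL's tooling or IBM Maximo.

```python
# Hypothetical illustration of a 'walk phase' master data quality check for asset records.
# Field names and rules are invented for the example; they do not reflect any vendor schema.

from dataclasses import dataclass

@dataclass
class AssetRecord:
    asset_id: str
    description: str
    site: str
    parent_id: str | None  # physical hierarchy link, e.g. pump -> pump station

def quality_issues(records: list[AssetRecord]) -> dict[str, list[str]]:
    """Return a map of asset_id -> list of data quality problems."""
    issues: dict[str, list[str]] = {}
    seen_ids: set[str] = set()
    known_ids = {r.asset_id for r in records}

    for r in records:
        problems = []
        if not r.description.strip():
            problems.append("missing description")
        if r.asset_id in seen_ids:
            problems.append("duplicate asset_id")
        if r.parent_id is not None and r.parent_id not in known_ids:
            problems.append("parent not found in register")  # broken physical/digital link
        seen_ids.add(r.asset_id)
        if problems:
            issues[r.asset_id] = problems
    return issues

register = [
    AssetRecord("PU-001", "Slurry pump, line 3", "Mine A", "ST-010"),
    AssetRecord("PU-001", "", "Mine A", None),          # duplicate id, empty description
    AssetRecord("ST-010", "Pump station 10", "Mine A", None),
]
print(quality_issues(register))
```

Once checks like these are trusted, the same pipeline becomes a natural place to introduce models that infer missing attributes or reconcile near-duplicate records.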
Adding capability and building an AI-integrated business

Looking further into an AI modernisation programme, companies might look to select an AI partner to help them investigate the ways AI can be implemented, be it in a proof of concept or in solving discrete problems. Partnering with a specialist in AI can help companies build capability within the business from a cross-functional perspective around asset management. "This phase is around automation of things like non-value tasks, work order administration, and master data creation as examples," McGowan says. "There is a lot of work that goes into master data creation. So, following from a solution in the first phase that knows what good data looks like, you can take it to the next phase of being a master data specialist that understands how to take a piece of equipment and represent it digitally and accurately." When a company is running with AI, it is essentially changing its operating business model. With each new wave of technology, transformation initiatives should not be about solving technology problems, but about enhancing the operational capability of the business. Of course, McGowan notes that business parameters will need to change to enable technology to solve business problems if the integration is significant. In the case of AI, this may look like redefining career paths, redefining the operating model and job descriptions for individual components, and then rolling out AI with a clear purpose. "I can almost see a place where you define a job description for the AI agent or component, and it performs that task, has regular performance reviews, learns from its mistakes, and is managed like anyone else in a business," he says. "I think that's the evolution: creating a space where AI has a role to play that is embedded in the organisation and the operating model, which measures and manages it like every other human resource within the business."

Playing the long game

With every new wave of technology across data, analytics, automation and AI, business leaders are often targeted and lobbied to act with urgency on a new solution. The warning to 'innovate or be left behind' is regularly bandied about. While McGowan notes the AI opportunity in asset management holds considerable potential, he feels strongly that adoption must be driven by practical business needs, organisational fit and readiness, rather than hype. "The potential for AI, particularly in the asset management space, is almost endless, because largely the sector has not gone through as many digital revolutions as other industries and sectors have," he says. "There are opportunities to do things like provide real-time feedback around avoiding potential incidents, and the automation of non-value-adding tasks will also be significant. But it's important that when we talk about driving efficiency with AI that we evolve our thinking in terms of what it means for careers and jobs and tasks, and work to make those AI-infused rather than AI-replaced. We need to look at evolving the operating model to support the introduction of efficiency gains from AI and then driving your valued labour to much higher value tasks." This messaging will form part of the jumping-off point for COSOL's upcoming roundtable series in Australia. With 60 asset management leaders across Sydney, Melbourne and Brisbane coming together in the coming weeks to discuss the pivotal role of AI in shaping the future of asset management, valuable insights are expected to emerge. Many of these insights will be shared in an upcoming report. To register for these learnings, click this link.
