
APJ region accelerates AI adoption as Dell rolls out new innovations
As AI use cases proliferate and investment ramps up, the region is fast emerging as a global leader, both in adoption and ambition.
"Asia Pacific is leading the way in generative AI spending, with 38% of AI investment in the region now focused on Gen AI, compared to just 33% in the rest of the world," said Peter Marrs, President of Asia Pacific, Japan and Greater China at Dell.
"Even North America sits at 29%," he added, highlighting the region's rapid pace.
Dell is positioning itself at the heart of this growth through its AI Factory and a growing ecosystem of technology partners, universities and governments.
"There's not an industry that's untouched by AI, but financial services, healthcare, energy, retail and manufacturing really stand out. We're at the forefront of helping customers across these sectors," he added.
Transforming business through AI factories
The Dell AI Factory, a framework designed to help organisations scale AI, has quickly gained traction. "It's been a year since we announced it, and we've moved from having tens or hundreds of customers globally to thousands," said Chris Kelly, Senior Vice President of Data Center Solutions APJC at Dell.
"Not only are more customers deploying it, but they're achieving real, tangible ROI."
According to Danny Elmarji, Vice President of Presales APJC at Dell, the AI Factory has resonated because it provides a practical pathway for organisations to adopt AI at scale.
"CIOs are trying to understand how to tackle AI inside their business. Unlike past technology shifts, this is fundamentally a business-driven initiative," he explained.
Elmarji pointed to significant momentum in financial services, where generative AI is being used to recommend customer actions, automate fraud detection and transform digital banking experiences.
In manufacturing, AI is powering digital twin capabilities and revolutionising fault detection, while in healthcare, early detection tools and enhanced electronic medical records are improving patient outcomes.
AI is also driving change in retail, with computer vision enabling smarter inventory management, and in education, where Dell is working with universities to personalise learning and foster innovation. "We're building connections between the IT world, research and industry," Kelly noted.
"It's about moving beyond pilot projects and making AI meaningful for everyday users."
From modular data centres to sovereign AI
The roundtable also showcased a unique customer partnership with South Korean AI education platform Elice.
CEO Jae Won Kim described how Elice faced soaring costs when trying to provide deep learning environments for students and businesses. "We had to reduce GPU cloud fees by more than 90%," he said.
The solution was a portable modular data centre powered by Dell servers, now used for everything from AI digital textbooks for five million students to sovereign AI workloads that comply with government requirements.
"There's very limited data centre capacity in Korea for high-density AI workloads," Kim explained.
"The modular data centre lets us host hundreds of GPUs, with liquid cooling for the latest chips. It's not just about education anymore – we're talking about a hybrid solution that could be deployed in Japan, Australia or anywhere data centre construction lags demand."
Marrs praised the partnership, saying, "You really thought big, and you went and made it happen." Kim's advice for others: "AI is not going away. It's better to start early. If you're worried about investment, modular is the best way to start small and start fast."
Innovation and ecosystem challenges
Dell's announcements at the conference included a raft of new infrastructure solutions designed to cut energy costs, boost data centre efficiency and accelerate AI deployments of any size.
The company's latest cooling technology can reduce energy costs by up to 60%, while new servers with AMD and NVIDIA chips promise up to 35 times greater AI inferencing performance than previous generations.
Yet, challenges remain. "The biggest hurdles are people and ecosystem," Marrs acknowledged.
"We need to educate the next generation of AI talent and work with governments to create the right regulatory and compliance frameworks." Kelly added, "Access to data centre space, power and cooling is going to be crucial. Requirements are moving so fast that what seemed high density a year ago now looks standard."
To address these gaps, Dell is nurturing partnerships with universities, local ISVs and industry bodies, running hundreds of AI innovation days and investing in hands-on labs. "We're enabling partners to experiment in safe environments and bring AI to life," said Elmarji.
Dell executives are optimistic but realistic about the scale of change.
"We're delivering AI at scale in the largest and most complex use cases, but also helping small startups get started," Kelly said. "You don't have to spend a fortune – start small and grow. If you don't act now, you're falling behind."
For Kim, the journey with Dell is just beginning. "It was a huge investment for us, basically a startup. We poured all our money into GPUs. But I think it will be a good journey," he said.
Related Articles


Techday NZ
15 hours ago
Artificial Intelligence (AI) is poised to revolutionise construction project management
Artificial Intelligence (AI) is poised to revolutionise project management across the Australian construction industry, offering unparalleled efficiencies and predictive capabilities. By integrating AI, specifically Generative AI (GenAI), into project management, companies can leverage machine learning algorithms to predict project timelines, costs, and potential risks with unprecedented accuracy. This is not just a futuristic vision; it is a rapidly unfolding reality, as demonstrated in the Project Management Institute's (PMI) report, 'First Movers' Advantage: The Immediate Benefits of Adopting Generative AI for Project Management'.
Efficiency and predictive power
AI-driven software can automate routine tasks such as scheduling, resource allocation, and progress tracking, freeing project managers to focus on strategic decision-making and problem-solving. PMI's research underscores the potential of GenAI to reduce administrative burden and increase efficiency. For example, predictive analytics can provide insights into potential delays and budget overruns before they occur, allowing for timely intervention and swift corrective action to avoid disruptions. This proactive approach, enabled by AI's ability to analyse complex datasets, can significantly mitigate risks and improve project outcomes.
Data-driven optimisation
Another transformative aspect of AI in project management is its ability to analyse vast amounts of data from past projects. By identifying patterns and trends, AI can offer best practices and optimise resource use for future projects. This data-driven approach can significantly enhance productivity and profitability. The 'First Movers' Advantage' report further sets out how GenAI can be used for data analysis and reporting, extracting valuable insights from project data to inform decision-making. This allows construction firms to learn from past successes and failures, continuously improving their project management processes.
Enhanced collaboration
AI can improve collaboration across teams by integrating communication tools and real-time data sharing, ensuring all stakeholders are aligned and informed. GenAI has great potential to enhance communication and collaboration, facilitating seamless information flow and fostering a more cohesive project environment. This improved communication can lead to better coordination, reduced misunderstandings and, ultimately, more successful project delivery.
The path forward: Challenges and opportunities
The adoption of AI in construction project management is not without its challenges, however. As many adopters of AI have found, investing in upskilling and reskilling the workforce is critical to enable project professionals to effectively leverage AI tools and technologies.
In conclusion, the use of AI, particularly GenAI, in project management represents a significant opportunity to push the boundaries of what is possible in construction efficiency and project success. While challenges exist, the potential benefits are undeniable. By embracing AI strategically and addressing the associated risks proactively, the Australian construction industry can unlock new levels of productivity, profitability, and project performance.
PMI's AI certifications, such as 'AI in Infrastructure and Construction Projects' and 'Cognitive Project Management in AI' (CPMAI), serve as valuable tools for navigating this transformative landscape, illuminating both the opportunities and the responsibilities that come with adopting AI in project management. 'AI in Infrastructure and Construction Projects' is a three-hour course designed to help professionals understand how AI is shaping the industry, and a strong entry point for those looking to explore AI's role in project management. PMI's 30-hour CPMAI course ensures those enrolled have access to the most current practices and strategies for managing AI projects and can confidently align AI initiatives with business goals.


Techday NZ
15 hours ago
How optimisation is helping to tackle the data centre efficiency challenge
In the era of cloud adoption and AI, the demand for data centre bandwidth has skyrocketed, leading to the exponential sprawl of data centres worldwide. However, new data centres are running up against sustainability, space and budget constraints. Policymakers recognise the benefits of data centres to productivity, economic growth and research, but there is still tension over their impact on local communities, water and electricity use. Our cities, our consumer products and our world are becoming more digital, and we need more compute to keep up. Optimising the data centre infrastructure we already have to unlock more performance, while staying mindful of these limits, is the best way data centres can turn constraints into a competitive advantage.
Why data centre optimisation matters
CIOs and IT leaders increasingly face calls to provide a high-performance foundational compute infrastructure across their businesses and handle new, more demanding use cases while balancing sustainability commitments, space and budget constraints. Many have sought to build new data centres outright to meet demand and pair them with energy-efficient technologies to minimise their environmental impact. For example, the LUMI (Large Unified Modern Infrastructure) supercomputer, one of the most powerful in Europe, uses 100% carbon-free hydroelectric energy for its operations, and its waste heat is reused to heat homes in the nearby town of Kajaani, Finland. There are many other examples like LUMI showing the considerable progress the data centre industry has made in addressing the need for energy efficiency. Yet energy efficiency alone won't be enough to power the growing demands of AI, which are expected to swell data centre storage capacity.
AI's greater energy requirements will also demand more energy-efficient designs to help ensure scalability and meet environmental goals. With data centre square footage, land and power grids nearing capacity, one way to optimise design is to upgrade from old servers. Data centres are expensive investments, and some CIOs and IT leaders try to recoup costs by running their hardware for as long as possible. As a result, most data centres are still using hardware that is 10 years old, according to Dell, and only expand compute when absolutely necessary. While building new data centres might be necessary for some, there are significant opportunities to upgrade existing infrastructure, and upgrading to newer systems means data centres can achieve the same tasks more efficiently. Global IT data centre capacity will grow from 180 Gigawatts (GW) in 2024 to 296 GW in 2028, representing a 12.3% CAGR, while electricity consumption will grow at a higher rate of 23.3%, from 397 Terawatt-hours (TWh) to 915 TWh in 2028. For ageing data centres, consolidation can translate to fewer racks and systems to manage while still maintaining the same bandwidth. That leaves significant room for future IT needs, but also makes room for experimentation, which is essential for AI workloads at the moment. Operators can use the space to build less expensive proof-of-concept half racks before committing to bigger build-outs, and use new hyper-efficient chips to help reduce energy consumption and cooling requirements, recouping their investment more quickly.
What to look for in an upgrade
There are many factors to consider in a server upgrade, and there isn't a one-size-fits-all solution to data centre needs. It's not just about buying the most powerful chip that can be afforded. Yes, the significance of a good chip for energy efficiency cannot be overstated, but each data centre has different needs that will shape the hardware and software stack it needs to operate most efficiently.
Leading South Korean cloud provider Kakao Enterprise needed servers that could deliver high performance across a wide range of workloads to support its expansive range of offerings. By deploying a mixed fleet of 3rd and 4th Gen AMD EPYC processors, the company reduced the servers required for its total workload to 40 percent of its original fleet, while increasing performance by 30 percent and cutting total cost of ownership by 50 percent. Much like Kakao Enterprise, IT decision-makers should look for providers that can deliver end-to-end data centre infrastructure at scale, combining high-performance chips, networking, software and systems-design expertise. For example, the right physical racks make it easy to swap in new kit as needs evolve, and having open software is equally important for getting the different pieces of the software stack from different providers talking to each other. In addition, providers that are continually investing in world-class systems design and AI systems capabilities will be best positioned to accelerate enterprise AI hardware and software roadmaps. AMD, for example, recently achieved a 38× improvement in node-level energy efficiency for AI training and HPC over just five years. This translates to a 97% reduction in energy for the same performance, empowering providers and end-users alike to innovate more sustainably and at scale.
Advancing the data centre
As our reliance on digital technologies continues to grow, so too does our need for computing power. It is important to balance the need for more compute real estate with sustainability goals, and the way forward is in making the most of the existing real estate we have. This is an opportunity to think smartly and turn an apparent tension into a massive advantage. By using the right computational architecture, data centres can achieve the same tasks more efficiently, making room for the future technologies that will transform businesses and lives.
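Several of the figures quoted in this piece can be sanity-checked with back-of-envelope arithmetic. The sketch below is illustrative only, assuming a four-year 2024-2028 span for the growth figures and treating the Kakao and AMD claims as simple proportions; it is not the vendors' or analysts' own methodology.

```python
# Rough arithmetic behind the figures quoted above. Endpoint numbers come
# from the article; the derived ratios are back-of-envelope inferences.

def cagr(start, end, years):
    """Compound annual growth rate implied by two endpoints."""
    return (end / start) ** (1 / years) - 1

capacity_cagr = cagr(180, 296, 4)      # GW, 2024 -> 2028
consumption_cagr = cagr(397, 915, 4)   # TWh, 2024 -> 2028

# Kakao Enterprise consolidation: 40% of the fleet doing 130% of the work
per_server_gain = 1.30 / 0.40          # work done per remaining server

# AMD's 38x node-level efficiency claim, restated as an energy reduction
energy_reduction = 1 - 1 / 38          # fraction of energy saved
```

Dividing the 30 percent performance gain by the 40 percent fleet size implies each remaining server does roughly 3.25 times the work, and a 38-fold efficiency gain corresponds to using about one thirty-eighth of the energy for the same output, matching the roughly 97 percent reduction quoted above.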


Techday NZ
2 days ago
Dell unveils AI Data Platform upgrades with NVIDIA & Elastic
Dell Technologies has announced enhancements to the Dell AI Data Platform, expanding its support across the full lifecycle of artificial intelligence workloads with new hardware and software collaborations. The updates to the Dell AI Data Platform aim to address the challenges enterprises face with massive, rapidly growing, and unstructured data pools. Much of this data is unsuitable for generative AI applications unless it can be properly indexed and retrieved in real time. The latest advancements are designed to streamline data ingestion, transformation, retrieval, and computing tasks within enterprise environments.
Lifecycle management
The Dell AI Data Platform now provides improved automation for data preparation, enabling enterprises to move more quickly from experimental phases to deployment in production environments. The architecture is anchored by specialised storage and data engines, designed to connect AI agents directly to quality enterprise data for analytics and inferencing. The platform incorporates the NVIDIA AI Data Platform reference architecture, providing a validated, GPU-accelerated solution that combines storage, compute, networking, and AI software for generative AI workflows.
New partnerships
An important component of the update is the introduction of an unstructured data engine, the result of collaboration with Elastic. This engine offers customers advanced vector search, semantic retrieval, and hybrid keyword search capabilities, underpinned by built-in GPU acceleration for improved inferencing and analytics performance. The unstructured data engine operates alongside other data tools, including a federated SQL engine for querying structured data, a large-scale processing engine for data transformation, and fast-access AI-ready storage. The array of tools is designed to turn large, disparate datasets into actionable insights for AI applications.
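The hybrid of keyword and vector search described above is only sketched at a high level. As an illustration of the general technique, not Dell's or Elastic's implementation, a common way to fuse a keyword ranking with a vector-similarity ranking is reciprocal rank fusion; the document IDs and rankings below are made up for the example.

```python
# Minimal sketch of reciprocal rank fusion (RRF), a standard way to combine
# a keyword (BM25-style) ranking with a vector-similarity ranking.
# This is illustrative only; real engines compute the input rankings
# from an index, and the doc IDs here are hypothetical.

def rrf(rankings, k=60):
    """Fuse several ranked lists of doc IDs into one ranking.

    Each document scores 1 / (k + rank) per list it appears in;
    documents ranked highly in multiple lists rise to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc3", "doc1", "doc7"]   # hypothetical keyword results
vector_hits = ["doc1", "doc9", "doc3"]    # hypothetical semantic results
fused = rrf([keyword_hits, vector_hits])
# "doc1" and "doc3" appear in both lists, so they lead the fused ranking
```

The constant k damps the influence of any single list, so a document that is merely decent in both rankings can outrank one that tops only one of them.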
Server integration
Supporting these software advancements are the new Dell PowerEdge R7725 and R770 servers, fitted with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. Dell claims these air-cooled servers provide improved price-to-performance for enterprise AI workloads, supporting a diverse range of use cases from data analytics and visual computing to AI inferencing and simulation. The NVIDIA RTX PRO 6000 GPU supports up to six times the token throughput for large language model inference, offers double the capacity for engineering simulations, and can handle four times the number of concurrent users compared to the previous generation. The integration of these GPUs in a 2U server chassis is positioned to make high-density AI calculations more accessible to a wider base of enterprise users. The Dell PowerEdge R7725 will be the first 2U server platform to deliver the NVIDIA AI Data Platform reference design, allowing organisations to deploy a unified hardware and software solution without the need for in-house architecture and testing. This is expected to enable enterprises to accelerate inferencing, achieve more responsive semantic searching, and support larger and more complex AI operations.
Industry perspectives
"The key to unlocking AI's full potential lies in breaking down silos and simplifying access to enterprise data," said Arthur Lewis, president, Infrastructure Solutions Group, Dell Technologies. "Collaborating with industry leaders like NVIDIA and Elastic to advance the Dell AI Data Platform will help organisations accelerate innovation and scale AI with confidence." Justin Boitano, Vice President of Enterprise AI at NVIDIA, added, "Enterprises worldwide need infrastructure that handles the growing scale and complexity of AI workloads. With NVIDIA RTX PRO 6000 GPUs in new 2U Dell PowerEdge servers, organisations now have a power efficient, accelerated computing platform to power AI applications and storage on NVIDIA Blackwell."
Ken Exner, Chief Product Officer at Elastic, commented, "Fast, accurate, and context-aware access to unstructured data is key to scaling enterprise AI. With Elasticsearch vector database at the heart of the Dell AI Data Platform's unstructured data engine, Elastic will bring vector search and hybrid retrieval to a turnkey architecture, enabling natural language search, real-time inferencing, and intelligent asset discovery across massive datasets. Dell's deep presence in the enterprise makes them a natural partner as we work to help customers deploy AI that's performant, precise, and production-ready."
Availability
The unstructured data engine for the Dell AI Data Platform is scheduled for availability later in the year. The Dell PowerEdge R7725 and R770 servers with NVIDIA RTX PRO 6000 GPUs will also become globally available in the same period.