
Hitachi launches managed AI Data Hub to boost analytics success
Hitachi Vantara has launched Hitachi EverFlex AI Data Hub as a Service, a new fully managed infrastructure offering designed to address the challenges of AI data preparation for enterprises.
The Hitachi EverFlex AI Data Hub as a Service provides organisations with a modern data lakehouse with integrated workbench capabilities for artificial intelligence (AI), business intelligence (BI), and broader enterprise data needs. The service aims to help organisations integrate and manage distributed data sources during a period of rapid AI adoption and increasingly complex data management requirements.
Recent research from Hitachi Vantara's State of Data Infrastructure Report highlights the scale of the challenge, revealing that 98% of surveyed organisations currently use more than one storage platform. Over half of these organisations store data across on-premises, private cloud, hybrid cloud, and public cloud environments. This proliferation of data platforms complicates AI data preparation.
The same research also indicates that up to 80% of AI and data projects do not succeed, with more than USD $100 billion lost annually due to difficulties in AI data preparation, enablement, and operational costs. The new service is intended to help address these issues by enabling customers to pay only for the infrastructure required, with built-in security and governance aligned to hybrid cloud models.
Data management for AI
Hitachi EverFlex AI Data Hub as a Service enables organisations to create a single view of enterprise data, which can be used to power both AI and BI initiatives with complete and timely information. This, according to the company, has the potential to speed up insights and improve business decision-making. "Through this new offering, we're helping customers integrate, prepare and gain more control over relevant data despite the ensuing boom of unstructured data created by AI," said Jeb Horton, Senior Vice President, Global Services at Hitachi Vantara. "Ultimately, this solution empowers organisations to meet their data where it is, accelerating AI innovation and experimentation, delivering real-time insights, and significantly reducing the cost and complexity of managing today's distributed data landscape."
Built on Hitachi's Virtual Storage Platform One (VSP One), the AI Data Hub integrates data in real time without duplication, providing a focus on data governance and quality at source. The system is designed to comply with data quality and regulatory requirements across diverse enterprise ecosystems.
Technology integration
The new service combines several technologies, including Hitachi EverFlex STaaS, VSP One and iQ, with Cisco Powered Hybrid Cloud Infrastructure that features GPU Compute and Networking as a Service. This approach aims to bring unified data management and AI capabilities into a single managed platform.
According to Hitachi Vantara, VSP 360 forms the core of unified data management by supporting block, file, object, and software-defined data infrastructures. The platform allows orchestration and automation of data services, policy compliance management, and predictive capabilities through AIOps observability.
Hitachi iQ infrastructure supports AI and analytics workloads through GPU servers, high-speed storage, and integrated networking to ensure performance and reliability. Quality, security, and flexible deployment options are prioritised.
The integration with Zetaris software provides a modern data lakehouse for real-time connection to diverse data sources and supports federated analytics. This ensures that data can be analysed in situ while maintaining governance standards, streamlining data pipelines and removing technical and cost barriers to analytics and machine learning deployments.
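Federated, query-in-place analytics of this kind can be illustrated with a small sketch. The two data sources, table names, and figures below are entirely hypothetical and do not reflect Zetaris's actual API; the point is only the pattern of pushing queries down to each source and combining results, rather than copying data into a central store.

```python
# Toy illustration of federated analytics: two independent data sources
# are queried where they live, and only the (small) results are joined
# in memory. All names and data are hypothetical.
import sqlite3

# Source 1: an "on-premises" sales database.
onprem = sqlite3.connect(":memory:")
onprem.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
onprem.executemany("INSERT INTO sales VALUES (?, ?)",
                   [("EMEA", 120.0), ("APAC", 95.0)])

# Source 2: a "cloud" customer database.
cloud = sqlite3.connect(":memory:")
cloud.execute("CREATE TABLE customers (region TEXT, num_customers INTEGER)")
cloud.executemany("INSERT INTO customers VALUES (?, ?)",
                  [("EMEA", 40), ("APAC", 55)])

# Federation layer: push each query down to its own source, then join
# the aggregated results in memory instead of duplicating raw data.
revenue = dict(onprem.execute(
    "SELECT region, SUM(revenue) FROM sales GROUP BY region"))
counts = dict(cloud.execute(
    "SELECT region, num_customers FROM customers"))
combined = {r: {"revenue": revenue[r], "customers": counts[r]}
            for r in revenue}
print(combined)
```

The design choice being sketched is that only aggregates cross the federation boundary, which is what lets governance and residency controls stay enforced at each source.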
Business context
The arrival of the Hitachi EverFlex AI Data Hub as a Service comes amid growing scrutiny of the limitations of traditional data architectures, which struggle to support the scale and operational tempo required by contemporary AI applications. The data hub enables customers to adopt a consumption-based model, improving cost efficiency and flexibility in resource allocation.
This service provides a unified platform for managing both AI and BI workloads, offering customers the opportunity to reduce operational complexity, ensure compliance, and harness business data for advanced analytics and real-time insight generation.
Related Articles


Techday NZ
Cloudera urges telcos to invest in AI or risk falling behind
Cloudera has issued a warning to telecommunications companies that those failing to adopt AI-driven networks risk being left behind, amid concerns that data fragmentation and scaling challenges are hampering progress in the sector. Use cases for artificial intelligence in telecommunications are broad, including predictive maintenance, automated anomaly detection, real-time network optimisation, and proactive service delivery. However, Anthony Behan, Global Managing Director, Communications, Media & Entertainment at Cloudera, says a lack of modernised data infrastructure could see organisations struggle to keep pace in a market experiencing sluggish growth.

Cloudera works with 80 of the world's top 100 telecom providers and reports that telcos are under increasing pressure to reduce costs, modernise infrastructure, and deliver better customer experiences, all while transforming their networks to meet new demands. The company stresses that scalable AI cannot happen without unified, reliable data; without AI, Behan warns, telcos could lose ground to competitors.

"Telcos are drowning in vast volumes of operational and telemetry data – yet they can't act on it effectively," says Behan. "Regulatory compliance, cyber threats, and the slow pace of network virtualisation show just how overstretched networks already are. AI can really help, and the problem isn't a lack of data – it's that it's siloed, unstructured, and untrusted. Without strong data foundations, telcos can't scale AI."

Cloudera has recently joined the AI-RAN Alliance, a coalition including global companies such as Dell, NVIDIA, SoftBank, and T-Mobile, aiming to advance the integration of AI in the development of telecommunications infrastructure. On the importance of scaling AI applications, Behan states: "The next phase of AI will be about scale and production. Private AI allows for that kind of automation in the network, at carrier scale."

Barriers to adoption

Data across telecommunications networks is often siloed and managed through disparate systems, creating significant hurdles for organisations wanting to deploy AI at scale. Cloudera's advice to telecom operators includes supporting hybrid workload mobility across both cloud and on-premises environments via Private AI; establishing unified data governance covering both data platform domains and BSS/OSS stacks; allowing AI workloads to be trained on-premises and deployed either in the cloud or directly in the network; and reducing vendor lock-in by running workloads where it makes the most business sense.

Recent research from Cloudera shows that AI is already being used in some areas of telecommunications, including customer service (49%), security monitoring (49%), and experience management (44%). However, Cloudera points out that extending the benefits of AI to more advanced network functions such as predictive maintenance and real-time optimisation will depend on a scalable data and AI infrastructure.

AI-native opportunities

With improved data foundations, networks could unlock AI's greater potential, including automation of operations, performance gains for 5G and edge, and development of new revenue streams such as smart city solutions and support for autonomous technologies.

Looking ahead, Behan outlines his vision for the future of telecom networks: "If I could wave a magic wand and build the ideal telecom network, it would have GPUs in every base station and use AI not just for communication, but for distributed, sovereign, local intelligence. That's where Private AI comes in - you can't run everything in the public cloud, especially with sensitive data. You need on-premises capabilities for control and security, but also the flexibility to use the cloud where it makes sense. The network would be highly secure, fast, and elastic – capable of spinning up virtual resources automatically to handle congestion or block fraud in real time. While this vision is still perhaps five to ten years away, telcos must begin laying the groundwork now. More investment and experimentation are needed today to realise the network of tomorrow."


Techday NZ
Asia Pacific enterprises shift to genAI spend amid AI cloud push
Forrester's latest research examines the status of artificial intelligence (AI) adoption across Asia Pacific and its implications for cloud strategy and enterprise innovation. The two recent Forrester reports, The State of AI, 2024 and Embrace the AI-Native Cloud Now, provide an in-depth look at how organisations in Asia Pacific and worldwide are approaching generative AI (genAI) investments, use cases, and cloud-native transformations.

Regional investment trends

Forrester's The State of AI, 2024 report shows more than half of enterprise AI decision-makers globally have allocated between USD $200,000 and USD $400,000 to genAI so far. These figures signal significant but selective engagement with AI, as adoption patterns differ according to regional objectives and regulatory environments.

Within Asia Pacific, one of the more distinct trends is a change in budget allocation: enterprises in the region are more likely to shift funding from predictive AI initiatives to genAI. According to Forrester, this indicates "a pragmatic, value-driven approach" to investments in artificial intelligence.

Leaders across Asia Pacific continue to place a premium on employee productivity and customer experience (CX), two goals that mirror global priorities. However, Asia Pacific organisations stand out by leading slightly on data literacy efforts, with 50% of respondents indicating a focus on upskilling employees for more informed, data-driven decision-making.

Adoption and key use cases

GenAI applications are gaining traction in both operational and development domains. While IT operations is a top AI use case globally, 43% of Asia Pacific respondents said their firms are deploying genAI to support software development, reflecting the region's intent to bolster engineering productivity and accelerate the pace of digital transformation.

Despite ongoing investment, barriers continue to influence adoption rates. Data privacy is the primary risk cited by firms in the region. At the same time, Asia Pacific organisations report a more acute lack of specialised technical skills than their counterparts elsewhere, underlining the need for expanded workforce enablement and training to meet enterprise AI ambitions.

Expectations for returns on investment (ROI) also vary: half of organisations surveyed anticipate realising returns within one to three years, while 38% are targeting a three-to-five-year time frame. These figures suggest a diversity of approaches based on the scale and complexity of AI projects currently underway.

Generative AI continues to spur conversation and investment across industries, albeit at varying levels depending on regional priorities and challenges. Companies that strategically align their AI efforts with measurable outcomes around customer experience and employee productivity are already reaping returns; however, addressing barriers such as data privacy, governance, and skill shortages is critical to ensuring that AI investments deliver sustainable value. That assessment comes from Frederic Giron, Vice President and Senior Research Director at Forrester.

Cloud strategies for the AI era

The Embrace the AI-Native Cloud Now report explores the evolving nature of public cloud platforms under the influence of widespread AI adoption. It asserts that as genAI capabilities advance, Asia Pacific enterprises and governments must pursue AI-native cloud strategies to remain competitive and operate efficiently at scale.

AI-native clouds are described as moving beyond conventional infrastructure services to offer intelligent, automated operations and application capabilities. Key features highlighted in the research include predictive operations, autoscaling, and automated handling of security updates and system patches. There is also a rapid expansion of AI-enabled development tools, such as TuringBots and low-code platforms, that help streamline application design and deployment. These tools aim to raise developer productivity by automating much of the coding and debugging process while freeing teams to focus on strategic initiatives. Furthermore, embedded AI APIs increasingly underpin both software-as-a-service (SaaS) solutions and bespoke applications, providing elements such as predictive analytics, personalisation, and intuitive agentic AI-driven user experiences.

The report notes that to fully benefit from the AI-native cloud, organisations should integrate technologies such as retrieval-augmented generation (RAG) and adopt composable architectures. The AI-native cloud is not just the next iteration of cloud technology; it is the paradigm shift enterprises need for their cloud strategies to remain competitive in an AI-driven world. Leaders must prioritise AI-native cloud strategies to improve operational efficiency, advance development capabilities, accelerate business innovation, and differentiate their customer experiences. Those comments were made by Charlie Dai, Vice President and Principal Analyst at Forrester.

The research indicates that as organisations in Asia Pacific move towards more data-driven operations and cloud-native models, the focus will increasingly fall on workforce competence, governance, and risk management to ensure AI and cloud investments translate into sustained value and competitive differentiation.
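The retrieval-augmented generation (RAG) pattern mentioned in the report can be illustrated with a toy sketch. Everything below is hypothetical: the two-document corpus is invented, the word-overlap scoring is a crude stand-in for the embedding similarity search real systems use, and no actual language model is called, only a prompt is assembled.

```python
# Toy sketch of retrieval-augmented generation (RAG): retrieve the most
# relevant document for a query, then prepend it as context to the
# prompt that would be sent to a language model. Hypothetical data.
corpus = {
    "scaling": "Autoscaling adds cloud capacity automatically under load",
    "patching": "AI-native clouds apply security patches without downtime",
}

def retrieve(query: str) -> str:
    # Score each document by word overlap with the query; a real RAG
    # system would use vector embeddings and a similarity index here.
    def score(text: str) -> int:
        return len(set(query.lower().split()) & set(text.lower().split()))
    return max(corpus.values(), key=score)

def build_prompt(query: str) -> str:
    # Ground the model's answer in the retrieved context.
    context = retrieve(query)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

print(build_prompt("How does autoscaling handle load"))
```

The value of the pattern is that the model answers from retrieved, current enterprise data rather than from whatever was baked in at training time.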


Techday NZ
Milestone launches Project Hafnia for AI-driven city management
Milestone has launched Project Hafnia to develop AI-driven solutions for urban infrastructure and traffic management, with Genoa in Italy as the first city. The initiative aims to improve city operations by harnessing computer vision technologies, using high-quality video data that adheres to European regulatory frameworks, including GDPR and the AI Act. Video data for the project is processed with NVIDIA's NeMo Curator on NVIDIA DGX Cloud.

Collaboration and compliance

Milestone is among the first companies to utilise the new NVIDIA Omniverse Blueprint for Smart City AI, a framework designed for optimising city operations through digital twins and AI agents. The company is also enhancing its data platform by generating synthetic video data via NVIDIA Cosmos, which processes real-world inputs. This combination of real and synthetic video data is used to build and train Vision Language Models (VLMs) in a manner that the company states is responsible and regulation-compliant.

European cloud provider Nebius will supply the GPU compute for training these models, helping to keep data processing anchored within European borders and compliant with regional data protection regulations.

The application of AI within Project Hafnia spans smart traffic and transportation management, as well as improvements in safety and security for cities. VLMs establish connections between textual data and visual information from images or videos, enabling AI models to generate insights and summaries from visual sources. These efforts, the company asserts, are based upon regulatory integrity, data diversity, and relevance to European legal frameworks.

"I'm proud that with Project Hafnia we are introducing the world's first platform to meet the EU's regulatory standards, powered by NVIDIA technology. With Nebius as our European cloud provider, we can now enable compliant, high-quality video data for training vision AI models — fully anchored in Europe. This marks an important step forward in supporting the EU's commitment to transparency, fairness, and regulatory oversight in AI and technology — the foundation for responsible AI innovation," says Thomas Jensen, CEO of Milestone.

Genoa as a first

Project Hafnia's first European service offering is a Visual Language Model built specifically for transportation management, drawing on transportation data sourced from Genoa. The model is powered by NVIDIA technology and has been trained on data that is both responsibly sourced and compliant with applicable regulations.

"AI is achieving extraordinary results, unthinkable until recently, and the research in the area is in constant development. We enthusiastically joined forces with Project Hafnia to allow developers to access fundamental video data for training new Vision AI models. This data-driven approach is a key principle in the Three-Year Plan for Information Technology, aiming to promote digital transformation in Italy and particularly within the Italian Public Administration," says Andrea Sinisi, Information Systems Officer, City of Genoa.

The structure of Project Hafnia's collaborations allows for scalability, as the framework is designed to operate across multiple domains and data types. The compliant datasets and fine-tuned VLMs will be supplied to participating cities via a controlled-access licence model, supporting the region's AI ambitions within ethical standards.

Role of Nebius

Nebius has been selected as Project Hafnia's European cloud provider. The company operates EU-based data centres, facilitating digital sovereignty objectives and ensuring that sensitive public sector data remains within the jurisdiction of European data protection laws.

"Project Hafnia is exactly the kind of real-world, AI-at-scale challenge Nebius was built for," says Roman Chernin, Chief Business Officer of Nebius. "Supporting AI development today requires infrastructure engineered for high-throughput, high-resilience workloads, with precise control over where data lives and how it's handled. From our EU-based data centers to our deep integration with NVIDIA's AI stack, we've built a platform that meets the highest standards for performance, privacy and transparency."

Project Hafnia data platform

Project Hafnia acts as what Milestone calls a 'trusted librarian' of AI-ready video data: the platform curates, tags, and delivers video data that is described as ethically sourced and regulation-ready for AI model training, with an emphasis on precision, compliance, and citizen privacy throughout.

According to Milestone, its network of customers, distributors, and technology partners enables the company to organise a comprehensive video data ecosystem that advances the development of AI in video analytics. Project Hafnia is positioned as a resource that companies can use to build AI models while meeting compliance and quality standards. The project will make both the compliant dataset and the fine-tuned Visual Language Model available to participating cities on a controlled basis as part of its effort to support AI development across Europe.