Milestone launches Project Hafnia for AI-driven city management

Techday NZ | 12-06-2025
Milestone has launched Project Hafnia, an initiative to develop AI-driven solutions for urban infrastructure and traffic management, with Genoa in Italy as its first city.
The initiative aims to improve city operations by harnessing computer vision technologies, using high-quality video data that adheres to European regulatory frameworks, including the GDPR and the AI Act. Video data for the project is curated with NVIDIA's NeMo Curator running on NVIDIA DGX Cloud.
Collaboration and compliance
Milestone is among the first companies to utilise the new NVIDIA Omniverse Blueprint for Smart City AI—a framework designed for optimising city operations through digital twins and AI agents. The company is also enhancing its data platform by generating synthetic video data via NVIDIA Cosmos, which processes real-world inputs. This combination of real and synthetic video data is used to build and train Vision Language Models (VLMs) in a manner that the company states is responsible and regulation-compliant.
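Milestone has not published its pipeline, but a mixed real-and-synthetic training set of the kind described here could be assembled roughly as follows. This is a minimal sketch: the directory layout, field names and the build_manifest helper are illustrative assumptions, not Project Hafnia's actual tooling, and no NVIDIA-specific API is used.

```python
# Illustrative sketch only: combining real camera footage with synthetic clips
# into one training manifest. All paths, field names and ratios are
# hypothetical assumptions, not Project Hafnia's pipeline.
import json
import random
from pathlib import Path

def build_manifest(real_dir: str, synthetic_dir: str, synthetic_ratio: float = 0.5):
    """Interleave real and synthetic video clips into a single labelled list."""
    real = [{"path": str(p), "source": "real"} for p in Path(real_dir).glob("*.mp4")]
    synthetic = [{"path": str(p), "source": "synthetic"} for p in Path(synthetic_dir).glob("*.mp4")]

    # Cap the synthetic share of the final set at the requested ratio.
    if synthetic_ratio < 1:
        max_synth = int(len(real) * synthetic_ratio / (1 - synthetic_ratio))
    else:
        max_synth = len(synthetic)
    clips = real + random.sample(synthetic, min(max_synth, len(synthetic)))
    random.shuffle(clips)
    return clips

if __name__ == "__main__":
    manifest = build_manifest("data/genoa_cameras", "data/cosmos_synthetic", synthetic_ratio=0.4)
    Path("train_manifest.jsonl").write_text("\n".join(json.dumps(c) for c in manifest))
    print(f"{len(manifest)} clips written")
```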
European cloud provider Nebius will supply the GPU compute for training these models, helping keep data processing anchored within European borders and compliant with regional data protection regulations.
The application of AI within Project Hafnia spans smart traffic and transportation management, as well as improvements in safety and security for cities. VLMs connect textual data with visual information from images or videos, enabling AI models to generate insights and summaries from visual sources. These efforts, the company asserts, are built on regulatory integrity, data diversity, and relevance to European legal frameworks.

"I'm proud that with Project Hafnia we are introducing the world's first platform to meet the EU's regulatory standards, powered by NVIDIA technology. With Nebius as our European cloud provider, we can now enable compliant, high-quality video data for training vision AI models — fully anchored in Europe. This marks an important step forward in supporting the EU's commitment to transparency, fairness, and regulatory oversight in AI and technology — the foundation for responsible AI innovation," says Thomas Jensen, CEO of Milestone.
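As a rough illustration of how a vision language model links text and imagery, the sketch below asks an off-the-shelf visual question answering model about a single traffic-camera frame using the public Hugging Face transformers pipeline. The checkpoint and image path are assumptions for demonstration; this is not the model being trained under Project Hafnia.

```python
# Minimal sketch: querying an off-the-shelf vision-language model about a
# traffic-camera frame. Model choice and image path are illustrative only.
from PIL import Image
from transformers import pipeline

vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")

frame = Image.open("frame_genoa_cam_01.jpg")  # hypothetical traffic-camera frame
questions = [
    "How many vehicles are visible?",
    "Is traffic congested?",
]

for q in questions:
    answer = vqa(image=frame, question=q, top_k=1)[0]
    print(f"{q} -> {answer['answer']} (score {answer['score']:.2f})")
```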
Genoa as a first
Project Hafnia's first European service offering is a Vision Language Model for transportation management, drawing on transportation data sourced from Genoa. The model is powered by NVIDIA technology and has been trained on data that is both responsibly sourced and compliant with applicable regulations.

"AI is achieving extraordinary results, unthinkable until recently, and the research in the area is in constant development. We enthusiastically joined forces with Project Hafnia to allow developers to access fundamental video data for training new Vision AI models. This data-driven approach is a key principle in the Three-Year Plan for Information Technology, aiming to promote digital transformation in Italy and particularly within the Italian Public Administration," says Andrea Sinisi, Information Systems Officer, City of Genoa.
The structure of Project Hafnia's collaborations allows for scalability, as the framework is designed to operate across multiple domains and data types. The compliant datasets and the fine-tuned VLMs will be supplied to participating cities via a controlled access licence model, supporting the region's AI ambitions within ethical standards.
Role of Nebius
Nebius has been selected as Project Hafnia's European cloud provider. The company operates EU-based data centres, facilitating digital sovereignty objectives and ensuring that sensitive public sector data remains within the jurisdiction of European data protection laws.

"Project Hafnia is exactly the kind of real-world, AI-at-scale challenge Nebius was built for," says Roman Chernin, Chief Business Officer of Nebius. "Supporting AI development today requires infrastructure engineered for high-throughput, high-resilience workloads, with precise control over where data lives and how it's handled. From our EU-based data centers to our deep integration with NVIDIA's AI stack, we've built a platform that meets the highest standards for performance, privacy and transparency."
Project Hafnia data platform
Project Hafnia acts as what Milestone refers to as a 'trusted librarian' of AI-ready video data, with the platform curating, tagging, and delivering video data that is described as ethically sourced and regulation-ready for AI model training. The emphasis is placed on maintaining precision, compliance, and citizen privacy throughout the process.
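Milestone has not published the platform's schema, but the kind of compliance tagging described above might attach metadata along these lines to each curated clip. Every field name and value below is a hypothetical illustration, not Milestone's actual data model.

```python
# Illustrative sketch of compliance metadata a curated video clip might carry.
# Field names and values are assumptions, not Milestone's actual schema.
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class CuratedClip:
    clip_id: str
    source_city: str
    captured_on: date
    legal_basis: str              # lawful basis recorded for the footage
    faces_blurred: bool           # data minimisation applied before release
    licence_plates_blurred: bool
    retention_until: date         # when the clip must be deleted
    licence: str                  # controlled-access licence tier for partner cities

clip = CuratedClip(
    clip_id="genoa-000123",
    source_city="Genoa",
    captured_on=date(2025, 5, 20),
    legal_basis="public interest (traffic management)",
    faces_blurred=True,
    licence_plates_blurred=True,
    retention_until=date(2026, 5, 20),
    licence="controlled-access/tier-1",
)
print(asdict(clip))
```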
According to Milestone, its network of customers, distributors, and technology partners enables the company to organise a comprehensive video data ecosystem that advances the development of AI in video analytics. Project Hafnia is positioned as a resource that companies can use to build AI models while meeting compliance and quality standards.
The project will make both the compliant dataset and the fine-tuned Vision Language Model available to participating cities on a controlled basis as part of its effort to support AI development across Europe.
Related Articles

Artificial Intelligence (AI) is poised to revolutionise construction project management
Techday NZ

Artificial Intelligence (AI) is poised to revolutionise project management across the Australian construction industry, offering unparalleled efficiencies and predictive capabilities. By integrating AI, specifically Generative AI (GenAI), into project management, companies can leverage machine learning algorithms to predict project timelines, costs, and potential risks with unprecedented accuracy. This is not just a futuristic vision; it is a rapidly unfolding reality, as demonstrated in the Project Management Institute's (PMI) report, 'First Movers' Advantage: The Immediate Benefits of Adopting Generative AI for Project Management'.

Efficiency and predictive power
AI-driven software can automate routine tasks such as scheduling, resource allocation, and progress tracking, freeing project managers to focus on strategic decision-making and problem-solving. PMI's research underscores the potential of GenAI to reduce administrative burden and increase efficiency. For example, predictive analytics can provide insights into potential delays and budget overruns before they occur, allowing for timely intervention and swift corrective actions to avoid disruptions. This proactive approach, enabled by AI's ability to analyse complex datasets, can significantly mitigate risks and improve project outcomes.

Data-driven optimisation
Another transformative aspect of AI in project management is its ability to analyse vast amounts of data from past projects. By identifying patterns and trends, AI can offer best practices and optimise resource use for future projects. This data-driven approach can significantly enhance productivity and profitability. The 'First Movers' Advantage' report further sets out that GenAI can be used for data analysis and reporting, extracting valuable insights from project data to inform decision-making. This allows construction firms to learn from past successes and failures, continuously improving their project management processes.

Enhanced collaboration
AI can improve collaboration across teams by integrating communication tools and real-time data sharing, ensuring all stakeholders are aligned and informed. GenAI has great potential to enhance communication and collaboration, facilitating seamless information flow and fostering a more cohesive project environment. This improved communication can lead to better coordination, reduced misunderstandings, and ultimately, more successful project delivery.

The path forward: Challenges and opportunities
However, the adoption of AI in construction project management is not without its challenges. As many adopters of AI have found, investing in upskilling and reskilling the workforce is critical to enable project professionals to effectively leverage AI tools and technologies. In conclusion, the use of AI, particularly GenAI, in project management represents a significant opportunity to push the boundaries of what is possible in construction efficiency and project success. While challenges exist, the potential benefits are undeniable. By embracing AI strategically and addressing the associated risks proactively, the Australian construction industry can unlock new levels of productivity, profitability, and project performance. PMI's AI certifications such as 'AI in Infrastructure and Construction Projects' and 'Cognitive Project Management in AI' (CPMAI) serve as valuable tools for navigating this transformative landscape, illuminating both the opportunities and the responsibilities that come with adopting AI in project management.
'AI in Infrastructure and Construction Projects' is a 3-hour course designed to help professionals understand how AI is shaping the industry. It is a strong entry point for those looking to explore AI's role in project management. PMI's 30-hour CPMAI course ensures those enrolled have access to the most current practices and strategies for managing AI projects and can confidently align AI initiatives with business goals.
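Purely as an illustration of the predictive analytics the PMI report describes, a team could train a simple classifier on historical project records to flag likely schedule overruns. The features and data below are synthetic and invented for the example; real projects would use their own records and a properly validated model.

```python
# Toy illustration of delay-risk prediction from historical project data.
# Features and data are synthetic; this is not a production model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: budget (AUD m), planned duration (months),
# number of subcontractors, design-change count.
X = np.column_stack([
    rng.uniform(1, 50, n),
    rng.uniform(3, 36, n),
    rng.integers(1, 20, n),
    rng.integers(0, 15, n),
])
# Synthetic rule: more design changes and subcontractors raise delay risk.
y = ((0.08 * X[:, 3] + 0.04 * X[:, 2] + rng.normal(0, 0.2, n)) > 0.7).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

new_project = [[12.5, 18, 9, 6]]  # budget, duration, subcontractors, design changes
print(f"predicted delay risk: {model.predict_proba(new_project)[0, 1]:.0%}")
```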

How optimisation is helping to tackle the data centre efficiency challenge
Techday NZ

In the era of cloud adoption and AI, the demand for data centre bandwidth has skyrocketed, leading to the exponential sprawl of data centres worldwide. However, new data centres are running up against sustainability, space and budget constraints. Policymakers recognise the benefits of data centres to productivity, economic growth and research, but there is still a tension over their impact on local communities, water and electricity use. Our cities, our consumer products and our world are going to become more digital, and we need more compute to keep up. Optimising the data centre infrastructure we already have to unlock more performance, while staying mindful of these limits, is the best way data centres can turn constraints into a competitive advantage.

Why data centre optimisation matters
CIOs and IT leaders increasingly face calls to provide a high-performance foundational compute infrastructure across their businesses and handle new, more demanding use cases while balancing sustainability commitments, space and budget constraints. Many have sought to build new data centres outright to meet demand and pair them with energy-efficient technologies to minimise their environmental impact. For example, the LUMI (Large Unified Modern Infrastructure) supercomputer, one of the most powerful in Europe, uses 100% carbon-free hydroelectric energy for its operations, and its waste heat is reused to heat homes in the nearby town of Kajaani, Finland. There are many other examples like LUMI showing the considerable progress the data centre industry has made in addressing the need for energy efficiency.

Yet energy efficiency alone won't be enough to power the growing demands of AI, which are expected to swell data centre storage capacity. AI's greater energy requirements will also demand more energy-efficient designs to help ensure scalability and address environmental goals, and with data centre square footage, land and power grids nearing capacity, one way to optimise design is to upgrade from old servers. Data centres are expensive investments, and some CIOs and IT leaders try to recoup costs by running their hardware for as long as possible. As a result, most data centres are still using hardware that is 10 years old (Dell) and only expand compute when absolutely necessary.

While building new data centres might be necessary for some, there are significant opportunities to upgrade existing infrastructure. Upgrading to newer systems means data centres can achieve the same tasks more efficiently. Global IT data centre capacity will grow from 180 Gigawatts (GW) in 2024 to 296 GW in 2028, representing a 12.3% CAGR, while electricity consumption will grow at a higher rate of 23.3%, from 397 Terawatt hours (TWh) to 915 TWh in 2028. For ageing data centres, upgrading can translate to fewer racks and systems to manage while still maintaining the same bandwidth. It leaves significant room for future IT needs and also makes room for experimentation, which is absolutely necessary for AI workloads at the moment. Operators can use the space to build less expensive proof-of-concept half racks before committing to bigger build-outs, and use new hyper-efficient chips to help reduce energy consumption and cooling requirements, recouping investment more quickly.
What to look for in an upgrade
There are many factors to consider in a server upgrade, and there isn't a one-size-fits-all solution to data centre needs. It's not just about buying the most powerful chip that can be afforded. Yes, the significance of a good chip for energy efficiency cannot be overstated, but each data centre has different needs that will shape the hardware and software stack it requires to operate most efficiently. Leading South Korean cloud provider Kakao Enterprise needed servers that could deliver high performance across a wide range of workloads to support its expansive range of offerings. By deploying a mixed fleet of 3rd and 4th Gen AMD EPYC processors, the company was able to reduce the servers required for its total workload to 40 percent of its original fleet, while increasing performance by 30 percent and cutting total cost of ownership by 50 percent.

Much like Kakao Enterprise, IT decision makers should look for providers that can deliver end-to-end data centre infrastructure at scale, combining high-performance chips, networking, software and systems design expertise. For example, the right physical racks make it easy to swap in new kit as needs evolve, and having open software is equally important for getting the different pieces of the software stack from different providers talking with each other. In addition, providers that are continually investing in world-class systems design and AI systems capabilities will be best positioned to accelerate enterprise AI hardware and software roadmaps. AMD, for example, recently achieved a 38× improvement in node-level energy efficiency for AI training and HPC over just five years. This translates to a 97% reduction in energy for the same performance, empowering providers and end-users alike to innovate more sustainably and at scale.

Advancing the data centre
As our reliance on digital technologies continues to grow, so too does our need for computing power. It is important to balance the need for more compute real estate with sustainability goals, and the way forward is in making the most of the existing real estate we have. This is a big opportunity to think smartly and turn an apparent tension into a massive advantage. By using the right computational architecture, data centres can achieve the same tasks more efficiently, making room for the future technologies that will transform businesses and lives.
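Taken at face value, two of the figures quoted above follow from simple arithmetic: a 38× efficiency gain implies roughly a 97% energy reduction for the same work, and the TWh projections imply a compound annual growth rate near the quoted 23.3%. The short calculation below reproduces both, using only the numbers in the text.

```python
# Reproducing the quoted figures from simple arithmetic.

# A 38x node-level efficiency gain means the same work takes 1/38 of the energy.
efficiency_gain = 38
energy_reduction = 1 - 1 / efficiency_gain
print(f"energy reduction at same performance: {energy_reduction:.1%}")  # ~97.4%

# Compound annual growth rate implied by 397 TWh (2024) -> 915 TWh (2028).
start, end, years = 397, 915, 4
cagr = (end / start) ** (1 / years) - 1
print(f"electricity consumption CAGR: {cagr:.1%}")  # ~23.2%, close to the quoted 23.3%
```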

HPE expands AI server range with NVIDIA Blackwell GPU solutions
Techday NZ

Hewlett Packard Enterprise has introduced several updates to its NVIDIA AI Computing by HPE portfolio, aimed at supporting enterprise clients seeking to accelerate agentic and physical AI deployment across a variety of use cases.

Server advancements
Among the headline updates, HPE has confirmed it will ship new HPE ProLiant Compute servers equipped with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. This includes a new 2U RTX PRO Server form factor in the DL385 Gen11 model, as well as an 8-GPU 4U configuration with the DL380a Gen12 model. According to HPE, the DL385 Gen11 supports up to two of the Blackwell Server Edition GPUs, providing an air-cooled solution suitable for datacentres coping with increasing artificial intelligence workloads. Meanwhile, the DL380a Gen12 can accommodate up to eight GPUs in a larger form factor, with shipments scheduled to begin in September 2025.

HPE highlighted that the ProLiant Compute servers are purpose-built for handling a variety of tasks, including generative and agentic AI, robotics, industrial automation, visual computing, simulation, 3D modelling, digital twins, and autonomous systems. Security features on the Gen12 models include HPE Integrated Lights Out 7 Silicon Root of Trust and a secure enclave for tamper-resistant protection and quantum-resistant firmware signing. The company states that its server management platform, HPE Compute Ops Management, can reduce IT hours spent on server management by up to 75% and lower downtime by an average of 4.8 hours per server annually. HPE has also indicated that these servers are designed to be flexible and scalable, able to support a growing range of GPU-accelerated workloads across the enterprise.

AI development platform
HPE Private Cloud AI, a collaborative development with NVIDIA, will incorporate support for the latest NVIDIA AI models. This includes the NVIDIA Nemotron agentic AI model, Cosmos Reason vision language model for robotics and physical AI, and the NVIDIA Blueprint for Video Search and Summarization (VSS 2.4). These additions will allow customers to build and deploy video analytics AI agents that can process extensive volumes of video data and extract actionable insights. The new release promises seamless scalability across GPU generations, air-gapped management, and enterprise multi-tenancy. Continuous integration with NVIDIA technologies will also allow HPE Private Cloud AI to deliver rapid deployment of NVIDIA NIM microservices, with access provided via HPE AI Essentials. The platform is positioned to help enterprises handle increasing AI inferencing workloads while retaining control over their data, supporting high performance and security requirements in demanding sectors.

Regional and industry response
"Asia Pacific is one of the fastest-growing AI markets, and enterprises face the imperative to transform ambition into results, with agility and security at the core," said Joseph Yang, General Manager, HPC, AI & NonStop, at HPE APAC and India. "With NVIDIA Blackwell GPUs in our HPE ProLiant servers and the latest NVIDIA AI models in HPE Private Cloud AI, we're enabling customers across APAC to accelerate agentic and physical AI, powering everything from advanced manufacturing to smart cities, while safeguarding data sovereignty and maximizing operational efficiency."

Data sovereignty and operational efficiency were also cited as important capabilities for regional customers working in sectors such as advanced manufacturing and public infrastructure.
"HPE is committed to empowering enterprises with the tools they need to succeed in the age of AI," said Cheri Williams, Senior Vice President and General Manager for Private Cloud and Flex Solutions at HPE. "Our collaboration with NVIDIA continues to push the boundaries of innovation, delivering solutions that unlock the value of generative, agentic and physical AI while addressing the unique demands of enterprise workloads. With the combination of HPE ProLiant servers and expanded capabilities in HPE Private Cloud AI, we're enabling organizations to embrace the future of AI with confidence and agility." The collaboration between HPE and NVIDIA is expected to support customers managing large-scale enterprise AI workloads, with the infrastructure designed to be as flexible and scalable as present and emerging tasks require. "Enterprises need flexible, efficient infrastructure to keep pace with the demands of modern AI," said Justin Boitano, Vice President of Enterprise AI at NVIDIA. "With NVIDIA RTX PRO 6000 Blackwell GPUs in HPE's 2U ProLiant servers, enterprises can accelerate virtually every workload on a single, unified, enterprise-ready platform." Availability The HPE ProLiant DL385 Gen11 and DL380a Gen12 servers equipped with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs are currently open for orders, with first shipments expected from September 2025. HPE intends to roll out support for the newest NVIDIA AI models, the Cosmos Reason VLM, and the VSS 2.4 blueprint in HPE Private Cloud AI during the latter half of 2025. The next generation of HPE Private Cloud AI, with Blackwell GPU support, is also slated for release in the same period.
