PoliCloud, the pioneering next-gen sovereign cloud infrastructure provider, raises €7.5 million


Zawya · 3 days ago

Rapidly growing company led by serial entrepreneur David Gurlé completes seed round, led by Global Ventures, a leading MENA VC firm, with participation from Inria, OneRagtime, Mi8, and business angels
The $800 billion cloud market, growing at 20% a year amid accelerated demand for AI, is ripe for a European solution that lessens dependence on U.S. cloud providers
Cannes, France – PoliCloud (the 'Company'), the rapidly growing provider and developer of next-gen, sovereign, High Performance Computing (HPC) cloud infrastructure, announces its €7.5 million seed fundraise.
The funding was led by Global Ventures, a leading VC firm in MENA, with participation from MI8 Limited, a Hong Kong multi-family office; OneRagtime, a Paris-based venture capital firm; Inria, France's National Institute for Research in Digital Science and Technology; and other private investors.
The proceeds will be used to hire the operating team and grow the business globally, with a focus on public entities in Europe.
PoliCloud provides state-of-the-art distributed cloud infrastructure for secure storage and HPC. The Company's solution is eco-responsible, affordable, abundant, and secure; it meets the sovereignty needs of enterprises, public administrations, and local SMEs; and it operates at the edge, decentralizing computing through its partnership with Hivenet, the distributed cloud leader and owner of the market's largest contributor community.
PoliCloud is responding to demand following relentless (c. 20% annually) global cloud growth. Accelerated demand for AI requires affordable and scalable computing power, and the market is ripe for a Europe-led solution to lessen dependence on U.S. cloud providers, who currently dominate the $800 billion market.
David Gurlé, Founder of PoliCloud, said:
'PoliCloud is meeting a critical market demand for sovereign cloud infrastructure that is not only secure and abundant but also eco-responsible. Our unique edge computing capabilities deliver significant benefits to both public and private sector users.
'The time is right for a new European solution that reduces reliance on US cloud providers and offers affordable, scalable computing power, especially as AI adoption accelerates. We are grateful to Global Ventures and all our investors for their support as we enter this exciting phase of expansion.'
PoliCloud is a solution addressing market imperfections. Current cloud expansion suffers from high usage costs and dependence on hyperscalers such as Google or Amazon, whose models rely on massive, single, centralized data facilities with high implementation costs and heavy environmental footprints. In contrast, PoliCloud has multiple competitive advantages, including:
Unlimited and flexible computing power, provided by federating with the Grid. By year-end 2025 it will have more than 1,000 GPUs, and by year-end 2026 more than 20,000 GPUs;
Computing resources are delivered to where they are needed and empower local communities;
Small footprint and energy needs;
Rapid time to market, with flexibility and adaptability;
Capex and Opex offset by sharing unused capacity; and
More resilient, higher-performance, and more scalable by design.
PoliCloud's operating model combines its hardware and infrastructure with Hivenet's distributed storage and computing software. PoliCloud designs, builds, and operates its own computers and micro-data centers, with proprietary and optimized design, to ensure low-cost, high-performance storage and computing on state-of-the-art hardware.
For example, cities such as Cannes, France, purchase and host PoliClouds, supply them with electricity and fiber connectivity, and offer the available capacity to their ecosystem of incubator startups. Enterprises, such as Data Factory, provide HPC infrastructure to their customers in the US. The result is reliable and scalable cloud storage for public and private users.
PoliCloud was launched in February 2025 at the World Artificial Intelligence Cannes Festival (WAICF) with support from the five cities of the Alpes-Maritimes. The Company also benefits from a positive market context and political environment, as well as buoyant early trading. Having already sold four PoliClouds in three months, with a projected €6+ million in revenue by year-end 2025, the Company is already cash flow positive.
The current cloud computing market is worth $800 billion and is projected to reach $2 trillion by 2030, according to Goldman Sachs Research. Profitability pressure is shifting the market toward frugality, and there is a need for cost-effective GPU-based computing, such as HPC for rapid graphics rendering. SMEs are rapidly growing and adopting AI, catalysing a major unmet need for computing power. France's public investment bank, Bpifrance, also considers the development of distributed computing technology a deep-tech initiative.
Simon Sharp, Senior Partner of Global Ventures, commented:
'Global Ventures is delighted to lead PoliCloud's seed fundraise and work again with David and his talented management team, following their track record of successful delivery at Hivenet. We seek visionary entrepreneurs whose products have clear market demand and global potential, all of which apply to PoliCloud. Their distributed data centers have multiple competitive advantages: delivering next-gen, sovereign computing resources where they are needed, with more resilience, faster performance, and greater security, while being cheaper to build and maintain. The exponential growth in AI demand and the need for reliable, scalable computing power mean the Company's future is a very bright one.'
Stephanie Hospital, Founder & CEO of OneRagtime, said:
'As an early investor and believer in David and Hivenet, and being very aware of how cloud technology has opened up horizons of innovation but also comes with challenges of cost, security, and environmental impact, OneRagtime is excited to invest in PoliCloud. The company is uniquely positioned to provide decentralized, unlimited computing power, affordably, securely, and in an eco-responsible way, for which substantial demand exists.'
Bruno Sportisse, CEO of Inria, commented:
"Inria Participations is delighted to become an investor in PoliCloud, as it is a logical extension of Inria's existing strategic partnership with Hivenet. Inria and PoliCloud share the same philosophy of a decentralized path to the cloud, for secure, distributed computing, where resources can also be shared according to need. Achieving this goal is of strategic importance for France and its digital sovereignty."
Guillaume Dhamelincourt, Managing Director of Mi8, said:
'The opportunity to invest in PoliCloud was compelling for Mi8, as the world embraces AI and rapidly adjusts its demand for computing power. The multiple use cases for PoliClouds, from SMEs to public enterprises that want to stay mindful of their IT strategy's impact, make for an attractive market environment, and we look forward to PoliCloud's future growth with great confidence.'
About PoliCloud
PoliCloud is a decentralized cloud built for cities, enterprises, and public institutions that refuse to hand their data to hyperscalers. Each unit arrives as a container-sized module that runs compute and storage locally while linking to a wider network.
Sovereignty — Data remains under local regulation, free from foreign interference.
Security — End-to-end encryption and live intrusion protection keep workloads safe and always on.
Sustainability — Air-cooled design cuts energy use and eliminates water waste, offering a cloud that respects the planet.
Scalability — Snap in new modules when you need extra power; the fabric pools capacity automatically for AI, research, and everyday services.
About Hivenet
Hivenet is a distributed cloud platform that replaces traditional data centers with crowdsourced infrastructure.
People use Hivenet to back up files, run computing tasks, and send large files—powered entirely by idle devices across the globe. It's fast, fair, and radically more sustainable than the status quo. No extraction. No vendor lock-in. Just cloud services that actually live up to the name.


Related Articles

AI is blurring language barriers in email fraud, and cybercriminals are expanding their targets

Khaleej Times · 4 hours ago

A few years ago, cultural or language barriers were enough to deter cybercriminals from targeting Arabic-speaking regions. Today, threat actors are using AI to tailor attacks more effectively to local audiences. According to the first volume of Proofpoint Inc's latest Human Factor 2025 report, language and culture are no longer the deterrent they once were for cybercriminals.

As generative AI tools become more accessible, cybercriminals can create personalised phishing and impersonation scams in multiple languages, including Arabic. Proofpoint's research shows that while most tracked email fraud remains in English, there is a growing wave of non-English attempts. For example, a scammer known as TA2900 sends French-language emails on rental payment themes to targets in France and Canada. This trend raises an important question for regional organisations: does the Arabic language still offer a barrier to cybercriminals in today's AI-driven threat landscape?

What is enabling this shift is not just language flexibility; it is a fundamental transformation in how social engineering works. Artificial intelligence is no longer just a tool; it has become the engine powering the next generation of cyber threats. Attackers collect large volumes of conversation data from platforms such as social media, messaging apps, and chat logs, and feed it into natural language models. These models learn to mimic tone and context, making the interaction feel even more human. The end goal is manipulation: convincing someone to make a call, click a link, or download a file without realising they have been targeted. And the more realistic the email, the higher the chance the victim will fall for it.

The Middle East is firmly in the crosshairs of fast-evolving social engineering

A recent study revealed that this shift is already being felt in the region.
Eighty-five per cent of organisations in the UAE were targeted by Business Email Compromise (BEC) attacks, up from 66 per cent the year before. While global reports of email fraud dropped, the UAE saw a 29 per cent rise in attack volume. One reason could be that attackers are now using AI to overcome the language and cultural barriers that previously held them back.

The broader landscape of social engineering is evolving. In the past, cybercriminals had to choose between sending generic mass phishing emails or spending time crafting highly targeted messages. With automation and AI, that trade-off no longer exists: attackers can launch complex, convincing attacks at scale, making the threat harder to contain and easier to miss.

The tools used by cybercriminals are also more varied. With many businesses using collaboration platforms such as Microsoft Teams, Slack, and WhatsApp alongside email, attackers exploit multiple entry points. They may start with an email and follow up with a message through another channel. This multichannel approach increases the likelihood of success, especially when an employee lets their guard down outside their inbox. Proofpoint's research found that 84 per cent of CISOs in Saudi Arabia now see human error as their biggest cybersecurity risk, up from 48 per cent in 2023.

Another growing tactic is the use of benign conversations to build trust. Attackers start with a friendly or neutral message, perhaps asking for a quote or following up on a simple task, to see if the target will respond. Once that trust is established, they introduce a malicious link or request. These softer tactics are harder to detect because they do not look dangerous at first glance, but over time they open the door to more serious breaches.

A proactive approach to cyber resilience is now non-negotiable

Despite the challenges, there is strong momentum in the region when it comes to building cyber resilience.
Both the UAE and Saudi Arabia are making visible investments in cybersecurity, smart infrastructure, and public education campaigns. These efforts are part of a broader push to futureproof digital ecosystems while continuing to drive digital transformation.

To stay ahead of these threats, organisations will need to build more layered strategies. Security systems that use behavioural analytics, machine learning, and AI can help detect unusual communication patterns and flag potential threats early. Technology like sender authentication can also play a key role, blocking attacks that rely on identity spoofing or lookalike domains. But technology alone is not enough. Employees must also be part of the solution. Ongoing training and awareness initiatives will be crucial to help people recognise emerging threats and stay alert, not just on email but across all the tools they use to communicate.

As generative AI becomes more embedded in the threat landscape, it is clear that no region or language is off-limits. For the Middle East, this means moving beyond the assumption that linguistic or cultural nuances are enough to keep cyber threats at bay. A more proactive, people-focused approach will be essential to stay protected in an increasingly intelligent and personalised threat environment.
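One simple building block behind the lookalike-domain detection mentioned above is string edit distance: a sender domain that is within a character or two of a trusted domain, but not identical to it, is suspicious. The sketch below is a minimal illustration using only the standard library, not any vendor's actual detection logic; real products combine many more signals (sender authentication such as DMARC, homoglyph checks, domain reputation), and the `max_distance` threshold is an arbitrary assumption.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def is_lookalike(sender_domain: str, trusted_domains: list[str],
                 max_distance: int = 2) -> bool:
    """Flag domains close to, but not exactly matching, a trusted domain."""
    for trusted in trusted_domains:
        d = edit_distance(sender_domain.lower(), trusted.lower())
        if 0 < d <= max_distance:
            return True
    return False
```

For instance, `is_lookalike("paypa1.com", ["paypal.com"])` is flagged (one character swapped), while the exact trusted domain passes.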

AI is learning to lie, scheme, and threaten its creators

Khaleej Times · 5 hours ago

The world's most advanced AI models are exhibiting troubling new behaviours: lying, scheming, and even threatening their creators to achieve their goals. In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer and threatening to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behaviour appears linked to the emergence of "reasoning" models, AI systems that work through problems step by step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts. "o1 was the first large model where we saw this kind of behaviour," explained Marius Hobbhahn, head of Apollo Research, which specialises in testing major AI systems. These models sometimes simulate "alignment", appearing to follow instructions while secretly pursuing different objectives.

'Strategic kind of deception'

For now, this deceptive behaviour only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organisation METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception." The concerning behaviour goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up."
Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception."

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).

No rules

Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules. Goldstein believes the issue will become more prominent as AI agents, autonomous tools capable of performing complex human tasks, become widespread. "I don't think there's much awareness yet," he said.

All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections. "Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around." Researchers are exploring various approaches to address these challenges.
Some advocate for "interpretability", an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach. Market forces may also provide some pressure for solutions. As Mazeika pointed out, AI's deceptive behaviour "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it." Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed "holding AI agents legally responsible" for accidents or crimes, a concept that would fundamentally change how we think about AI accountability.

Dubai: Emirates uses AI-powered engine monitoring to cut costly flight diversions

Khaleej Times · 8 hours ago

Dubai's flagship carrier Emirates is tapping into the power of artificial intelligence (AI) to enhance flight safety, monitor engine health in real time, and reduce costly flight diversions, a senior official revealed.

'We're using AI to predict turbulence en route from one destination to another. Using AI is not only for the benefit of the company but also for enhancing the safety of operations. We are able to use AI to predict the health of the engine and how it is performing, and that has helped us a number of times to avoid diverting an aircraft, because we were able to continue operating it thanks to specific diagnosis and live monitoring,' said Adel Al Redha, deputy president and chief operations officer of Emirates.

He was speaking during the ForsaTek 2025 exhibition and conference last week. The event featured over 40 in-house and partner showcases, organised across the innovation pipeline, from early-stage research and prototyping to proof-of-concept trials and fully launched initiatives being scaled up.

AI adoption is on the rise across all industries, and airlines are no exception. From improving operational efficiency to enhancing customer service, companies are exploring new ways to harness data and automation. Looking ahead, Al Redha stressed the importance of strengthening real-time data capabilities to further improve services. For example, he said accurate data can help airlines load the right quantity of food on board, preventing waste and saving millions of dirhams annually.

On the use of passenger data, Al Redha said that Emirates is aligned with government regulations, and noted that the airline is already taking steps to regulate and govern the use of public data. 'The more we rely on technology, the higher the level of cybersecurity we need to invest in and ensure that we are safeguarding the operations,' he added.
As the technology advances, Al Redha noted that the company will upskill its staff as the airline increasingly adopts generative AI. 'We're going to rely on generative AI. That's going to make some of our jobs easier and information accessible much faster. Some people may not be at the same level of skill as others. We will have to examine certain staff skill levels and upskill them to be able to manage or deal with what the new norm is going to be,' he said.
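Real-time engine-health monitoring of the kind Al Redha describes typically rests on anomaly detection over telemetry streams: flag a reading that deviates sharply from recent history. The sketch below is a generic rolling z-score check, not Emirates' actual system; the metric, window size, and threshold are illustrative assumptions.

```python
import statistics

def flag_anomalies(readings: list[float], window: int = 5,
                   threshold: float = 3.0) -> list[int]:
    """Return indices of readings that deviate from the trailing window
    mean by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.fmean(recent)
        stdev = statistics.stdev(recent)
        if stdev == 0:
            continue  # flat history: no baseline variability to compare against
        if abs(readings[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# Hypothetical exhaust-gas-temperature stream: a sudden jump stands out
# against an otherwise stable baseline and would trigger an alert.
egt = [600.0, 601.0, 599.0, 600.5, 600.0, 599.5, 650.0]
print(flag_anomalies(egt))  # the spike at index 6 is flagged
```

Production systems layer model-based expectations (per engine, per flight phase) on top of such statistical checks, which is what lets an operator decide a deviation is benign and keep the aircraft flying rather than divert.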
