
Latest news with #SovereignAI

Orange Group upgrades its partnership with OpenAI

Tahawul Tech

a day ago

Orange Group has strengthened its partnership with OpenAI, announcing it will deploy the AI company's advanced open-weight reasoning models on its own infrastructure. The French operator said OpenAI's gpt-oss-120b and gpt-oss-20b models will be integrated into its infrastructure, 'making customers' data even safer' and catering to demand for advanced sovereign AI solutions.

As an early access partner, Orange will become one of the first companies globally to deploy the new open models, implemented across a variety of environments. These range from Orange's large regional cloud data centres in France to small on-premises servers or edge sites. The operator explained that by maintaining control over the deployment environment, it can host AI workloads locally anywhere across its 26-country footprint, while safeguarding sensitive data and complying with diverse and evolving national regulations across Europe, the Middle East and Africa.

Orange added that its AI engineering team will customise and distil OpenAI models for specific tasks, effectively creating smaller sub-models for particular use cases while protecting all sensitive data used. The companies also pointed to a push to drive digital inclusion and innovation across Africa, using access to the advanced models to cover several African languages and building on an initiative announced in November 2024. Steve Jarrett, Chief AI Officer at Orange, said the new strategy 'drives new use cases to address sensitive enterprise needs, helps manage our networks and enables innovative customer care solutions including African regional languages and much more'.

Source: Mobile World Live
Image Credit: OpenAI
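For readers who want a concrete picture, here is a minimal sketch of what running an open-weight model entirely on local infrastructure can look like, assuming the weights are published as standard Hugging Face checkpoints (the repo id openai/gpt-oss-20b and the sample prompt are illustrative assumptions, not details from the announcement):

```python
# Minimal sketch: local inference with an open-weight model, so prompts
# and outputs never leave the operator's own servers.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "openai/gpt-oss-20b"  # assumed Hugging Face repo id

# Weights are downloaded once; after that the model can run fully offline
# inside an on-premises or sovereign-cloud environment.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Summarise this customer ticket in two sentences:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because nothing in this loop calls an external API, the same pattern scales from a regional cloud data centre down to a single edge server, which is the property the sovereignty argument above rests on.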

Sovereign AI needs solutions for the energy bottleneck

The Australian

28-07-2025

Artificial intelligence is fast becoming the backbone of innovation across Australia, powering everything from public services to national defence capabilities. But behind this lies a critical dependency: infrastructure. Without sovereign control of the data centres, chips and power systems that fuel our AI, Australia risks falling behind in the global digital race.

As AI's influence grows, so does the need for Australia to develop and control AI technologies within its borders: national governance and development of AI, a concept known as 'sovereign AI'. Sovereign AI ensures systems reflect Australian values and ethics, comply with regulations, and support the country's strategic interests. Relying on overseas infrastructure to support our AI capabilities can erode our ability to retain economic value and independent control over critical digital infrastructure, including safeguarding our data. In this way, sovereign AI is not only a matter of technological capability but also of national security and long-term economic resilience.

But it requires more than just models and algorithms. It also needs chips, onshore data centres and electricity. Lots of electricity. Data centres need 24/7, reliable, high-quality electricity supply with built-in redundancy. Deloitte's TMT Predictions 2025 highlights that the rapid growth of AI data centres is already pushing operators to adopt more sustainable and forward-looking technology and energy solutions. The Australian Energy Market Operator (AEMO) has forecast that data centres could consume up to 15 per cent of the country's electricity by 2030 under a high-growth projection. Even the more conservative projection estimates an 8 per cent share, up from 5 per cent today.

For Australia, this presents a real challenge. Building sovereign AI capability in any serious sense will require investment in the technology and energy infrastructure to power our data centres. If not, we risk importing capability through hyperscalers (large cloud service providers), ceding control and sovereignty in the process. Or, alternatively (and possibly even worse), building data centres on a grid that is not equipped to support their needs.

At the heart of this challenge is a shift in the software-hardware dynamic. Historically, hardware led while software followed. Now the coin has flipped, and AI development is surging ahead of infrastructure. Software capabilities are outpacing the physical systems required to support them; hardware, energy and networks are playing catch-up. So, what technologies can we adopt to prevent this energy bottleneck constraining our nation's sovereign AI capacity?

One of the most promising pathways to manage the surging power demands of generative AI is improving chip-level energy efficiency. A new generation of gen-AI-specific chips can now train advanced models in 90 days while consuming just 8.6 GWh, less than one-tenth the energy of prior-generation chips for the same task. The private sector and government need to work together to secure a pipeline of chips from global manufacturers so that Australia can access the latest high-efficiency semiconductors.
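As a back-of-envelope check on that chip-efficiency figure (the conversion below is our own illustration, not Deloitte's), 8.6 GWh spread over a 90-day training run implies an average draw of roughly 4 MW:

```python
# Back-of-envelope: average power draw implied by "8.6 GWh over 90 days".
energy_wh = 8.6e9        # 8.6 GWh expressed in watt-hours
hours = 90 * 24          # a 90-day training run

avg_power_mw = energy_wh / hours / 1e6
print(f"Average draw: {avg_power_mw:.1f} MW")  # ~4.0 MW

# "Less than one-tenth the energy of prior-generation chips" puts the
# older hardware at an average draw of roughly 40 MW for the same job.
print(f"Prior generation (>10x): {avg_power_mw * 10:.0f}+ MW")
```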
Another key technology driving lower data centre energy usage is liquid cooling. It can reduce power consumption by up to 90 per cent compared with traditional air-based systems and is better suited to managing the intense heat generated by densely packed, high-performance AI chips. However, it also introduces water usage concerns, as AI data centres may require vast quantities of freshwater for cooling, a resource that is both finite and increasingly under pressure. Balancing energy efficiency with sustainable energy production and responsible water use will be critical as this technology scales.

In Australia, one solution is to co-locate data centres with renewable energy infrastructure. Projects like Snowy Hydro 2.0 provide access to water, energy and grid infrastructure that can grow alongside data centre demand. Similarly, positioning centres near wind or solar farms offers an opportunity to use clean power while reducing the need for additional infrastructure.

Offloading AI workloads to edge devices is another tool for managing power demands. This is especially effective for applications where latency is crucial or where sensitive data and privacy needs are high. By processing data locally, edge computing reduces reliance on central data centres and limits the transmission of sensitive information across networks. This not only conserves energy and reduces network strain, but also strengthens data security by keeping information closer to its source. As edge capabilities grow, this distributed approach will enable a more efficient and secure balance between edge and core infrastructure.

Australia must align its digital ambitions with its physical infrastructure capacity. As the nation advances towards sovereign AI capabilities, energy availability and management represent significant constraints. Through deliberate frameworks and robust public-private collaboration to drive strategic infrastructure investment and technologies, Australia can establish the foundation necessary to support a secure and sustainable AI ecosystem.

Peter Corbett is National Telecommunications, Media and Technology Industry Lead Partner at Deloitte Australia.

Huawei's AI Scandal Just Exploded, And Investors Should Be Paying Attention

Yahoo

07-07-2025

Huawei is pushing back, hard. Over the weekend, its secretive Noah's Ark Lab broke from its usual silence to address accusations that its new AI model, Pangu Pro MoE, borrowed code without proper credit. The model, which runs on Huawei's own Ascend chips (its homegrown answer to Nvidia's GPUs), had its source code picked apart on GitHub, where a group dubbed HonestAGI claimed it spotted unacknowledged code fragments. That post vanished. But another one, titled Pangu's Sorrow, quickly followed, alleging that Huawei's team had been under intense pressure to deliver and fell behind domestic rivals in the race. In a rare rebuttal, Huawei said it fully complied with open-source licenses and welcomed technical discussion, not speculation.

This isn't just a code review; it's a window into the internal pressure mounting inside China's AI champions. With Alibaba and DeepSeek making waves and catching investor attention, Huawei's rare public statement signals how high the stakes are becoming. The company, long a symbol of China's tech self-sufficiency, now finds itself on the defensive in one of the hottest battlegrounds: sovereign AI. The fact that Huawei had to respond at all speaks volumes. IP compliance, innovation speed and trust are no longer soft issues; they're table stakes in an environment where reputations are earned (or lost) in public.

For global investors watching the AI value chain, from chipmakers like Nvidia (NASDAQ:NVDA) to downstream platforms like Tesla (NASDAQ:TSLA), this is another flashing signal. The game isn't just about who builds the fastest model. It's also about who's playing fair, who's shipping on time, and who's earning credibility in a world that increasingly demands transparency. As China's AI ecosystem matures, these reputational battles could become just as important as the hardware wars.

This article first appeared on GuruFocus.

The Hidden Cost Of Sovereign AI Inside Your Company

Forbes

30-06-2025

Bringing AI in-house should not mean burning your people out.

Sovereign AI is often discussed as a geopolitical imperative. Governments want control over their data, autonomy over infrastructure and strategic independence from foreign technology providers. Which makes sense. But inside many companies, the story takes a different shape. The shift toward sovereign AI is not just about national policy or regulatory compliance. It is emerging as a new kind of operational burden, one that lands squarely on internal teams.

While the term originated in government circles, its logic now applies at the enterprise level. Just as nations demand oversight of how AI is developed and deployed within their borders, organizations are under pressure to do the same within their own environments. What once lived safely outside the firewall is now being pulled inside. For many companies, this shift means more than just adopting new tools. It requires taking direct responsibility for AI systems that were previously outsourced or abstracted away. Legal risk, regulatory scrutiny and technical complexity now sit within the enterprise. And the pressure lands on the very people expected to keep pace with innovation while making sense of it all. The strategic upside is real. So is the human cost.

The Pivot From Outsourcing to Ownership

Until recently, most AI use in the enterprise followed a simple formula. Teams picked a vendor. The vendor delivered a model. IT set it up. Legal signed off. The business ran with it. The inner workings were treated like a black box. As long as the system produced results, few asked how it actually made decisions. Sovereign AI turns that model inside out. Now companies are expected to host their own models, explain how they work and provide ongoing documentation for regulators, customers and internal auditors. That shift is about more than control. It is about accountability. And that accountability falls directly on product managers, engineers, legal leads and data teams already stretched thin. These teams are not just delivering AI anymore. They are governing it. They are translating compliance rules into model constraints. They are tracking changes, logging exceptions and prepping for audit requests. They are doing all this on top of their original jobs. And no one hired extra help.

When Responsibility Scales Faster Than Headcount

Ask any engineering leader what sovereign AI looks like on the ground and they will talk about one thing: overload. Teams are being asked to own model outputs they didn't train, defend decisions they didn't make and meet compliance standards that shift by the quarter. The risks are real. The resources are not. Take a product lead running a customer-facing AI tool. They now need to ensure the system is not just accurate but explainable. They need to validate training data, confirm fairness thresholds and answer customer service questions about model decisions. They are not a lawyer or a data scientist. But they are now on the hook for both. That kind of responsibility without full control wears people down. It creates constant low-grade anxiety. And it shows up in small ways. Delayed launches. Risk-averse decisions. Silent burnout. Teams that were once energized by AI are now cautious. Some are quietly opting out of building at all.

Coordination Costs Are Rising Fast

Sovereign AI is not managed by a single team. It touches legal, engineering, product and compliance functions. That means more meetings, more approvals and more room for confusion. The goal is greater accountability. The reality, too often, is fatigue. In many firms, new model governance processes have been added to ensure oversight. AI releases now pass through layers of internal review. But without enough resourcing or clear ownership, timelines slip. Delays creep in not because teams resist governance, but because no one has the time or context to explain a model end to end. This is the coordination tax. It is not just about time lost. It weighs on morale. When teams spend more effort aligning than building, progress slows. Innovation loses energy. Frustration sets in. If companies want the full value of sovereign AI, they must invest in infrastructure to support it. That means dedicated model owners, practical documentation tools and compliance processes that are built for real-world use. Just as important, it means clarifying decision rights. Too often, no one is sure who makes the final call when speed and risk are at odds. That ambiguity adds another layer to the psychological strain.

Leadership Language vs Day-to-Day Reality

A lot of executives talk about sovereign AI as a bold move. They use words like autonomy, resilience and differentiation. The story at the front of the room is often one of strategic control. But in the back rooms where implementation happens, the mood is different. Engineers are juggling retraining cycles with patching production bugs. Legal is working overtime to track emerging AI laws across multiple states. Product teams are rewriting roadmaps to build in time for explainability reviews. And they are doing it all while being told to move faster. The disconnect is not just about messaging. It is about understanding. Most leaders are not asking what this shift actually means for the people doing the work. They are not checking where new responsibilities have landed. And they are not updating delivery timelines to reflect the added load. To lead through this, executives need to bring vision closer to reality. That means asking teams what has changed in their daily workflow. It means acknowledging where stretch has turned into strain. It means adjusting KPIs to account for governance, not just growth.

Where to Start Fixing It

First, map where the AI burden sits. Look for the teams absorbing new compliance or coordination work. Ask where responsibilities are unclear or where decisions stall. Then rebalance. That could mean adding new roles, shifting scope or providing better tools. Second, simplify where you can. Some companies are investing in internal model documentation portals. Others are building red-teaming processes that surface model risks early. The goal is not to slow things down. It is to give teams the clarity to move fast without fear. Third, stop treating governance as an add-on. If you are serious about sovereign AI, make it part of your leadership model. Give it a budget. Make it part of performance reviews. And give credit to the people making it work. Quiet accountability only leads to quiet burnout. Fourth, speak plainly. Let teams know this is hard. Name the complexity. Share the progress. Celebrate the unglamorous work of keeping AI systems fair, compliant and traceable. That is the backbone of trust. And it is the part most likely to get ignored.

The Bottom Line

Sovereign AI offers real strategic upside. It gives companies more control over their data, their models and their brand. But control comes at a cost. The people implementing this shift are carrying more weight than most leaders realize. The work is more complex. The stakes are higher. And the pace is relentless. If companies want to succeed in this new era, they need to lead it from the inside out. That means building support structures for the people doing the hardest jobs. It means designing for sustainability, not just compliance. And it means treating internal resilience as seriously as external control. Because in the end, bringing AI in-house should not mean burning your people out.
