
Latest news with #NVIDIADGXSuperPOD

With Cloud Revenues Sliding, Can Hosting Sustain APLD's Growth?

Yahoo

15-07-2025


Applied Digital APLD delivered 22% year-over-year revenue growth in the third quarter of fiscal 2025, reaching $52.9 million. However, underlying this top-line expansion is a shifting business model under pressure. The company's Cloud Services segment revenues declined sequentially to $17.8 million, primarily due to a transition to on-demand capacity and technical issues with a multi-tenant configuration, now resolved. In contrast, the Data Center Hosting segment generated $35.2 million, benefiting from full utilization at two North Dakota facilities and supportive Bitcoin pricing.

The real story, however, lies in APLD's pivot to high-performance computing (HPC) hosting, where it is betting big on AI-focused infrastructure. Construction of the Ellendale campus, a 400MW flagship data center, is progressing on schedule, with the first 100MW liquid-cooled facility set to go live in fourth-quarter 2025. Backed by $5 billion from Macquarie and $375 million from SMBC, the company is well-capitalized to execute this pivot. CEO Wes Cummins emphasized strong inbound leasing interest, citing discussions with multiple hyperscalers and expectations that a single anchor customer could occupy the entire initial build.

However, APLD's long-term thesis relies on execution. CapEx remains aggressive at $30-$50 million per month, and the company carries $689 million in debt. Its plan to divest the Cloud Services business, driven partly by friction with leasing prospects and its REIT transition goals, introduces uncertainty around monetization and competitive positioning. While management believes the Cloud segment could rebound to $110-$120 million in annual revenues, its future under APLD is uncertain. Hosting, particularly AI-focused HPC infrastructure, has become the cornerstone of APLD's growth strategy.
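A quick arithmetic cross-check of the segment figures reported above. This is a sketch using only numbers quoted in the article; it adds no new disclosure.

```python
# Cross-checking Applied Digital's fiscal Q3 2025 figures as reported
# in the article. All inputs come from the text above.

cloud_rev = 17.8        # Cloud Services revenue, $M
hosting_rev = 35.2      # Data Center Hosting revenue, $M
total_rev = 52.9        # reported total revenue, $M

# The two named segments account for essentially all reported revenue.
segment_sum = cloud_rev + hosting_rev       # 53.0 vs. 52.9 reported
hosting_share = hosting_rev / total_rev     # hosting's share of the top line

# CapEx of $30-$50M per month annualizes to $360-$600M,
# against $689M of debt on the balance sheet.
capex_low, capex_high = 30 * 12, 50 * 12

print(f"segment sum: ${segment_sum:.1f}M (reported ${total_rev:.1f}M)")
print(f"hosting share of revenue: {hosting_share:.1%}")
print(f"annualized capex range: ${capex_low}M-${capex_high}M")
```

The hosting share (roughly two-thirds of revenue) is why the article frames hosting, not cloud, as the growth engine.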
With the Cloud segment in flux, successful lease-up of the Ellendale campus and timely execution on hyperscale contracts will be critical to validating this pivot and sustaining growth in a capital-intensive landscape.

Digital Realty DLR is aggressively scaling its AI infrastructure footprint, supported by a newly formed $10 billion U.S. Hyperscale Data Center Fund targeting top-tier metros like Northern Virginia, Dallas and Silicon Valley. In the first quarter, Digital Realty signed a record backlog of $919 million in booked-not-billed leases and expanded its development pipeline by 170MW to a total of 814MW, 63% of which is pre-leased. AI-driven demand contributed to record pricing of $244 per kW per month. Notably, a leading AI inference firm joined PlatformDIGITAL, leveraging its high-density and liquid-cooled infrastructure, including the new FRA18 center in Frankfurt. Digital Realty's strategy emphasizes scale, sustainability and contiguous capacity for AI use cases.

Equinix EQIX is capitalizing on strong AI-driven demand, with notable AI wins in the first quarter, including deployments for Block's NVIDIA DGX SuperPOD and Grok's inference platform. The company's high-performance infrastructure and interconnection ecosystem make it an ideal platform for scalable AI workloads. EQIX is actively building larger IBX facilities, with 56 major projects underway across 33 metros, including 12 xScale sites; 85% of xScale projects are pre-leased. Its 'Build Bolder' strategy emphasizes fewer, higher-density builds with fast cabinet rollout, while global AI momentum is supporting recurring revenue step-ups and margin expansion. Equinix expects AI-related growth to accelerate throughout 2025.

APLD's Price Performance, Valuation and Estimates

Shares of APLD have gained 24.6% in the year-to-date period compared with the industry's growth of 4.5%.
Image Source: Zacks Investment Research

From a valuation standpoint, Applied Digital trades at a forward price-to-sales of 7.97X, above the industry average as well as its five-year median of 1.47. APLD carries a Value Score of F.

Image Source: Zacks Investment Research

The Zacks Consensus Estimate for Applied Digital's fiscal 2026 earnings implies a 67.7% rise year over year.

Image Source: Zacks Investment Research

The stock currently carries a Zacks Rank #2 (Buy). This article originally published on Zacks Investment Research.
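For readers unfamiliar with the multiple, forward price-to-sales is market value divided by forecast revenue. The sketch below illustrates the formula; the market-cap and revenue inputs are hypothetical placeholders, not APLD's actual figures. Only the 7.97X and 1.47X multiples come from the article.

```python
# How a forward price-to-sales multiple is computed. The inputs in the
# example are hypothetical placeholders chosen to land near the 7.97X
# multiple cited in the article; they are not APLD's reported numbers.

def forward_price_to_sales(market_cap_m: float, forward_revenue_m: float) -> float:
    """Market capitalization divided by forecast next-fiscal-year revenue."""
    return market_cap_m / forward_revenue_m

# Hypothetical: a $2,000M market cap against $251M of forecast revenue
# yields a multiple close to the 7.97X cited above.
multiple = forward_price_to_sales(2000.0, 251.0)
print(f"forward P/S: {multiple:.2f}X")

# The article's comparison: 7.97X today vs. a five-year median of 1.47X,
# i.e. the stock trades at roughly 5.4 times its historical median multiple.
premium = 7.97 / 1.47
print(f"multiple vs. five-year median: {premium:.1f}x")
```

The point of the comparison is that even strong forecast growth (the 67.7% consensus earnings rise) has to be weighed against a multiple far above the company's own history.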

DeepL first to deploy NVIDIA DGX SuperPOD with DGX GB200 systems in Europe, advancing Language AI with powerful generative features and enhanced user experience

Mid East Info

12-06-2025


DeepL's Language AI platform can now translate the entire web into another language in just 18.5 days, down from 194 days.

Paris, France, 11th June 2025: DeepL, a leading global Language AI company, today announced the deployment of an NVIDIA DGX SuperPOD with DGX GB200 systems. This transformative computing platform will enable DeepL to translate the entirety of the internet, which currently takes 194 days of nonstop processing, in just over 18 days, marking a significant leap forward in DeepL's research and product capabilities. This is the first deployment of DGX SuperPOD with DGX GB200 systems in Europe, and it is now operational at DeepL's partner EcoDataCenter in Sweden.

Driving Research and Innovation

The new cluster will enhance DeepL's research capabilities, unlocking powerful generative features that will allow the Language AI platform to expand its product offerings significantly. With this advanced infrastructure, DeepL will approach model training in an entirely new way, paving the path for a more interactive experience for its users. In the short term, users can expect immediate enhancements, including increased quality, speed and nuance in translations, along with greater interactivity and the introduction of more generative AI features. Looking to the future, multi-modal models will become the standard at DeepL. The long-term vision includes further exploration of generative capabilities and an increased focus on personalization options, ensuring that every user's experience is tailored and unique.

Technical Advancements

This deployment will provide the additional computing power necessary to train new models and develop innovative features for DeepL's Language AI platform. NVIDIA DGX SuperPOD with DGX GB200 systems, with its liquid-cooled, rack-scale design and scalability to tens of thousands of GPUs, will enable DeepL to run high-performance AI models essential for advanced generative applications.
This marks DeepL's third deployment of an NVIDIA DGX SuperPOD, now surpassing the capabilities of DeepL Mercury, its previous flagship supercomputer.

'At DeepL, we take pride in our unwavering commitment to research and development, which has consistently allowed us to deliver solutions that outshine our competitors. This latest deployment further cements our position as a leader in the Language AI space,' said Jarek Kutylowski, CEO and Founder of DeepL. 'By equipping our research infrastructure with the latest technology, we not only enhance our existing offering but also explore exciting new products. The pace of innovation in AI is faster than ever, and integrating these advancements into our tech stack is essential for our continued growth.'

Delivering Extraordinary Translation Quality

The deployment of these new racks will make DeepL faster and more accurate than ever before. Standout capabilities of the new clusters include:

  • Translating the entire web into another language, which currently takes 194 days of non-stop processing, will now be achievable in just 18.5 days.
  • The time required to translate the Oxford English Dictionary into another language will drop from 39 seconds to two seconds.
  • Translating Marcel Proust's In Search of Lost Time, the world's longest book, will be reduced from 0.95 seconds to just 0.09 seconds.
  • Overall, the new clusters will deliver 30 times the text output compared to previous capabilities.

'Europe needs robust AI deployments to maintain its competitive edge, drive innovation, and address complex challenges across industries,' said Charlie Boyle, Vice President of DGX systems at NVIDIA. 'By harnessing the performance and efficiency of our latest AI infrastructure, DeepL is poised to accelerate breakthroughs in language AI and deliver transformative new experiences for users across the continent and beyond.'

About DeepL

DeepL is on a mission to break down language barriers for businesses everywhere.
Over 200,000 businesses and governments and millions of individuals across 228 global markets trust DeepL's Language AI platform for human-like translation in both written and spoken formats, as well as natural, improved writing. Designed with enterprise security in mind, companies around the world leverage DeepL's AI solutions that are specifically tuned for language to transform business communications, expand markets and improve productivity. Founded in 2017 by CEO Jaroslaw (Jarek) Kutylowski, DeepL today has over 1,000 passionate employees and is supported by world-renowned investors including Benchmark, IVP and Index Ventures.
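The speedup claims in the announcement are simple before/after ratios. A quick sketch, using only the times quoted in the article, makes the arithmetic explicit:

```python
# Implied speedups from the figures quoted in the article. These are
# plain before/after ratios of the stated times, not DeepL benchmarks.

tasks = {
    "entire web":                (194.0, 18.5),   # days
    "Oxford English Dictionary": (39.0,  2.0),    # seconds
    "In Search of Lost Time":    (0.95,  0.09),   # seconds
}

for name, (before, after) in tasks.items():
    print(f"{name}: {before / after:.1f}x faster")
```

Note that these wall-clock ratios (roughly 10x to 20x) differ from the quoted "30 times the text output," which describes aggregate throughput rather than elapsed time on a single fixed task.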

DeepL first to deploy NVIDIA DGX SuperPOD with DGX GB200 systems in Europe, advancing Language AI with powerful generative features and enhanced user experience

Korea Herald

11-06-2025



iGenius, Vertiv And Nvidia Partner On AI Supercomputer Colosseum

Channel Post MEA

23-04-2025


Vertiv has announced a groundbreaking collaboration with NVIDIA and renowned AI pioneer iGenius to deploy Colosseum, one of the world's largest NVIDIA DGX AI supercomputers with NVIDIA Grace Blackwell Superchips. Set to deploy in 2025 in Italy, Colosseum will redefine the digital landscape through a first-of-its-kind sovereign AI data center for regulated workloads. Designed to address the demands of highly regulated industries such as finance, healthcare, and public administration, Colosseum will embody a fusion of transformative computational power, energy efficiency, and data sovereignty, while balancing stringent data security requirements.

Colosseum, an NVIDIA DGX SuperPOD, is the latest advancement in a long-standing collaboration between Vertiv and NVIDIA. It is strategically positioned in southern Italy to address regional government requirements, marking a significant milestone in Europe's AI landscape. 'Harnessing the power of NVIDIA's cutting-edge accelerated computing and Vertiv's innovative infrastructure expertise, Colosseum stands as a testament to the transformative potential of sovereign AI,' said Uljan Sharka, CEO of iGenius. 'We're demonstrating how modular systems and software-specific infrastructure enable a new era of mission-critical AI.'

Modular by Design. Engineered for Efficiency.

Colosseum combines Vertiv's infrastructure management expertise, NVIDIA accelerated computing, and the NVIDIA Omniverse Blueprint for AI factory design and operations. The deployment will leverage Vertiv's 360AI reference architecture infrastructure platform for data center power and cooling, which is designed for the NVIDIA GB200 NVL72 and was co-developed with NVIDIA and released in late 2024. This modular and scalable system positions iGenius to deploy one of the fastest hyperscale AI supercomputers, and one of the largest to support sovereign AI.
Vertiv has also extended the reference design library on its AI Hub with a co-developed data center power and cooling design for the NVIDIA GB300 NVL72. By staying one GPU generation ahead, Vertiv enables customers to plan infrastructure before silicon lands, with deployment-ready designs that anticipate increased rack power densities and repeatable templates for AI factories at scale.

'The unit of compute is no longer the chip — it's the system, the AI Factory,' said Karsten Winther, president of Vertiv, EMEA. 'Through our collaboration with NVIDIA and visionary AI player iGenius, we are proving the efficiency and system-level maturity of delivering the data center as a unit of compute, unlocking rapid adoption of AI-native power and cooling infrastructure as a catalyst for AI at scale.'

Simulate with NVIDIA Omniverse. Deliver with Speed.

'AI is reshaping the data center landscape, demanding new levels of scale, efficiency and adaptability for global AI factories,' said Charlie Boyle, vice president of DGX platforms at NVIDIA. 'With physically based digital twins enabled by NVIDIA Omniverse technologies and Vertiv's modular design for the iGenius DGX SuperPOD data center, Colosseum sets a new standard for building supercomputers for the era of AI.'

Colosseum was co-designed as a physically accurate digital twin developed with NVIDIA Omniverse technologies, enabling real-time collaboration between Vertiv, iGenius and NVIDIA to accelerate system-level decisions and compress the design-to-deploy cycle. The Omniverse Blueprint enables real-time simulations, allowing engineers to test and refine designs instantly rather than waiting for lengthy simulation processes, reducing simulation times from months to hours. Vertiv manufacturing and factory integration processes reduce deployment time by up to 50% compared to traditional data center builds.
This collaborative 3D design process validated the entire infrastructure stack, enabling predictive modeling of thermal load, electrical flow, and site layout, from 132kW liquid-cooled racks to modular power systems, before a single module was built.

Designed with Intelligence. Unified by Software.

Vertiv's AI-ready prefabricated modular data center solution is designed, manufactured, delivered, installed and commissioned by Vertiv. It includes power, cooling, management, monitoring, service and maintenance offerings, with power and cooling capacity supporting up to 132kW per rack initially and the ability to scale up as required for future designs. The building shell integrates prefabricated white space inside while deploying full modular grey space outside. This approach offers exceptional scalability and energy efficiency, transforming the way data centers are built and deployed.

Colosseum will leverage NVIDIA Mission Control for data center operations and orchestration, and Vertiv Unify to simplify and synchronize building management for AI factories. Vertiv Unify provides:

  • Real-time orchestration across power, cooling, and compute
  • Digital twin synchronization for closed-loop optimization
  • AI-ready capabilities that support autonomous decision-making

Through its integration of NVIDIA Omniverse technologies, Vertiv Unify enables real-time updates between physical systems and digital models, allowing predictive maintenance, what-if simulations, and scenario testing before operational risk occurs.

The Blueprint for AI Factories Globally

Colosseum is more than a data center. It's the template for scalable, repeatable, sovereign AI factories. By combining cloud-scale density, local data control, and modular deployment, it signals the next phase of AI: where inference must be secure, fast, compliant, and distributed. This is not a one-off project; it's a reference point.
iGenius is building a blueprint with Colosseum designed to be repeated globally, with Vertiv and NVIDIA aligned on future platform support, including DGX GB300 systems and beyond. The future of sovereign AI is no longer theoretical; it's being built now.
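As a rough sense of scale for the 132kW-per-rack figure above, the sketch below converts rack counts into total IT load. The rack count used is a hypothetical placeholder; the article does not disclose how many racks Colosseum deploys.

```python
# Rough scale implied by the 132 kW/rack capacity stated in the article.
# The rack count below is a hypothetical placeholder, not a disclosed
# figure for Colosseum.

KW_PER_RACK = 132  # stated initial power/cooling capacity per rack

def it_load_mw(racks: int, kw_per_rack: float = KW_PER_RACK) -> float:
    """Total IT load in megawatts for a given rack count."""
    return racks * kw_per_rack / 1000.0

# Hypothetical: 100 such racks would draw about 13.2 MW of IT load,
# which illustrates why liquid cooling and modular power blocks are
# central to the design.
print(f"{it_load_mw(100):.1f} MW")
```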

iGenius launches one of the World's Largest Sovereign AI Data Centers leveraging Vertiv infrastructure, NVIDIA accelerated Computing and Omniverse

Mid East Info

23-04-2025


