
DeepL first to deploy NVIDIA DGX SuperPOD with DGX GB200 systems in Europe, advancing Language AI with powerful generative features and enhanced user experience
DeepL's Language AI platform can now translate the entire web into another language in just 18.5 days – down from 194 days
Paris, France – 11th June 2025 – DeepL, a leading global Language AI company, today announced the deployment of an NVIDIA DGX SuperPOD with DGX GB200 systems. This transformative computing platform will enable DeepL to translate the entirety of the internet – which currently takes 194 days of nonstop processing – in just over 18 days, marking a significant leap forward in DeepL's research and product capabilities. It is the first deployment of DGX SuperPOD with DGX GB200 systems in Europe and is now operational at DeepL's partner EcoDataCenter in Sweden.
Driving Research and Innovation
The new cluster will enhance DeepL's research capabilities, unlocking powerful generative features that will allow the Language AI platform to expand its product offerings significantly. With this advanced infrastructure, DeepL will approach model training in an entirely new way, paving the way for a more interactive experience for its users.
In the short term, users can expect immediate enhancements, including increased quality, speed and nuance in translations, along with greater interactivity and the introduction of more generative AI features.
Looking to the future, multi-modal models will become the standard at DeepL. The long-term vision includes further exploration of generative capabilities and an increased focus on personalization options, ensuring that every user's experience is tailored and unique.
Technical Advancements
This deployment will provide the additional computing power necessary to train new models and develop innovative features for DeepL's Language AI platform. NVIDIA DGX SuperPOD with DGX GB200 systems, with its liquid-cooled, rack-scale design and scalability for tens of thousands of GPUs, will enable DeepL to run high-performance AI models essential for advanced generative applications. This marks DeepL's third deployment of an NVIDIA DGX SuperPOD, now surpassing the capabilities of DeepL Mercury, its previous flagship supercomputer.
'At DeepL, we take pride in our unwavering commitment to research and development, which has consistently allowed us to deliver solutions that outshine our competitors. This latest deployment further cements our position as a leader in the Language AI space,' said Jarek Kutylowski, CEO and Founder of DeepL. 'By equipping our research infrastructure with the latest technology, we not only enhance our existing offering but also explore exciting new products. The pace of innovation in AI is faster than ever, and integrating these advancements into our tech stack is essential for our continued growth.'
Delivering Extraordinary Translation Quality
The deployment of these new racks will make DeepL faster and more accurate than ever before. Standout capabilities of the new clusters include:
Translating the entire web into another language, which currently takes 194 days of non-stop processing, will now be achievable in just 18.5 days.
The time required to translate the Oxford English Dictionary into another language will drop from 39 seconds to two seconds.
Translating Marcel Proust's In Search of Lost Time, the world's longest book, will be reduced from 0.95 seconds to just 0.09 seconds.
Overall, the new clusters will deliver 30 times the text output compared to previous capabilities.
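As a quick sanity check of the quoted figures, the per-task speedups implied by simple division are:

```latex
\frac{194\ \text{days}}{18.5\ \text{days}} \approx 10.5\times \qquad
\frac{39\ \text{s}}{2\ \text{s}} = 19.5\times \qquad
\frac{0.95\ \text{s}}{0.09\ \text{s}} \approx 10.6\times
```

Note that these ratios describe how much faster individual translation tasks complete; the 30x figure above refers to aggregate text output (throughput), which is a different measure and is not derivable from the per-task timings alone.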
'Europe needs robust AI deployments to maintain its competitive edge, drive innovation, and address complex challenges across industries,' said Charlie Boyle, Vice President of DGX systems at NVIDIA. 'By harnessing the performance and efficiency of our latest AI infrastructure, DeepL is poised to accelerate breakthroughs in language AI and deliver transformative new experiences for users across the continent and beyond.'
About DeepL
DeepL is on a mission to break down language barriers for businesses everywhere. Over 200,000 businesses and governments and millions of individuals across 228 global markets trust DeepL's Language AI platform for human-like translation in both written and spoken formats, as well as natural, improved writing. Designed with enterprise security in mind, companies around the world leverage DeepL's AI solutions that are specifically tuned for language to transform business communications, expand markets and improve productivity. Founded in 2017 by CEO Jaroslaw (Jarek) Kutylowski, DeepL today has over 1,000 passionate employees and is supported by world-renowned investors including Benchmark, IVP and Index Ventures.
Related Articles




Mid East Info – 30-04-2025
DeepL launches Arabic Document Translation tool to support businesses across MENA
DeepL, a leading global Language AI company, has introduced its Arabic Document Translation tool, designed to simplify document translation for businesses engaging with Arabic-speaking markets across MENA. This latest update helps bridge language barriers and boost efficiency by enabling accurate, context-aware document translations to and from Arabic across DeepL's supported languages, including English, French, Turkish, Chinese, and Russian.
What sets DeepL apart from its competitors is the ability to review and directly edit translations before downloading, giving users increased control over their documents. The tool supports popular file formats like Word, PDF, Outlook, and PowerPoint, while maintaining original layouts and fonts. Trusted by millions of users in 228 markets including MENA, DeepL continues to empower businesses to break down language barriers, expand into new markets, and foster unparalleled cross-border collaboration.


Mid East Info – 23-04-2025
iGenius launches one of the world's largest sovereign AI data centers leveraging Vertiv infrastructure, NVIDIA accelerated computing and Omniverse
Supports rollout of enterprise-grade deployment of sovereign AI compute
Embodies a fusion of exceptional computational power, energy efficiency, and data sovereignty while balancing stringent data security requirements.
Vertiv (NYSE: VRT), a global provider of critical digital infrastructure, today announced a groundbreaking collaboration with NVIDIA and renowned AI pioneer iGenius to deploy Colosseum, one of the world's largest NVIDIA DGX AI supercomputers with NVIDIA Grace Blackwell Superchips. Set to deploy in 2025 in Italy, Colosseum will redefine the digital landscape through a first-of-its-kind sovereign AI data center for regulated workloads. Designed to address the demands of highly regulated industries such as finance, healthcare, and public administration, Colosseum will embody a fusion of transformative computational power, energy efficiency, and data sovereignty, while balancing stringent data security requirements.
Colosseum, an NVIDIA DGX SuperPOD, is the latest advancement in a long-standing collaboration between Vertiv and NVIDIA. It is strategically positioned in southern Italy to address regional government requirements, marking a significant milestone in Europe's AI landscape.
'Harnessing the power of NVIDIA's cutting-edge accelerated computing and Vertiv's innovative infrastructure expertise, Colosseum stands as a testament to the transformative potential of sovereign AI,' said Uljan Sharka, CEO of iGenius. 'We're demonstrating how modular systems and software-specific infrastructure enable a new era of mission-critical AI.'
Modular by Design. Engineered for Efficiency.
Colosseum combines Vertiv's infrastructure management expertise, NVIDIA accelerated computing, and the NVIDIA Omniverse Blueprint for AI factory design and operations.
The deployment will leverage Vertiv's 360AI reference architecture infrastructure platform for data center power and cooling that is designed for the NVIDIA GB200 NVL72, which was co-developed with NVIDIA and released in late 2024. This modular and scalable system positions iGenius to deploy one of the fastest hyperscale AI supercomputers, and one of the largest to support sovereign AI. Vertiv has also extended its reference design library on its AI Hub with the co-developed data center power and cooling design for the NVIDIA GB300 NVL72. By staying one GPU generation ahead, Vertiv enables customers to plan infrastructure before silicon lands, with deployment-ready designs that anticipate increased rack power densities and repeatable templates for AI factories at scale.
'The unit of compute is no longer the chip — it's the system, the AI Factory,' said Karsten Winther, president of Vertiv, EMEA. 'Through our collaboration with NVIDIA and visionary AI player iGenius, we are proving the efficiency and system-level maturity of delivering the data center as a unit of compute, unlocking rapid adoption of AI-native power and cooling infrastructure as a catalyst for AI at scale.'
Simulate with NVIDIA Omniverse. Deliver with Speed.
'AI is reshaping the data center landscape, demanding new levels of scale, efficiency and adaptability for global AI factories,' said Charlie Boyle, vice president of DGX platforms at NVIDIA. 'With physically-based digital twins enabled by NVIDIA Omniverse technologies and Vertiv's modular design for the iGenius DGX SuperPOD data center, Colosseum sets a new standard for building supercomputers for the era of AI.'
Colosseum was co-designed as a physically accurate digital twin developed with NVIDIA Omniverse technologies, enabling real-time collaboration between Vertiv, iGenius and NVIDIA to accelerate system-level decisions and compress the design-to-deploy cycle.
The Omniverse Blueprint enables real-time simulations, allowing engineers to test and refine designs instantly rather than waiting for lengthy simulation processes, reducing simulation times from months to hours. Vertiv manufacturing and factory integration processes reduce deployment time by up to 50% compared to traditional data center builds. This collaborative 3D design process validated the entire infrastructure stack, enabling predictive modeling of thermal load, electrical flow, and site layout, from 132kW liquid-cooled racks to modular power systems, before a single module was built.
Designed with Intelligence. Unified by Software.
Vertiv's AI-ready prefabricated modular data center solution is designed, manufactured, delivered, installed and commissioned by Vertiv. It includes power, cooling, management, monitoring, service and maintenance offerings, with power and cooling capacity supporting up to 132kW/rack initially and the ability to scale up as required for future designs. The building shell integrates prefabricated white space inside while deploying full modular grey space outside. This approach offers exceptional scalability and energy efficiency, transforming the way data centers are built and deployed. Colosseum will leverage NVIDIA Mission Control for data center operations and orchestration, and Vertiv™ Unify to simplify and synchronize building management for AI factories. Vertiv Unify provides:
Real-time orchestration across power, cooling, and compute
Digital twin synchronization for closed-loop optimization
AI-ready capabilities that support autonomous decision-making
Through its integration of NVIDIA Omniverse technologies, Vertiv Unify enables real-time updates between physical systems and digital models, allowing predictive maintenance, what-if simulations, and scenario testing before operational risk occurs.
The Blueprint for AI Factories Globally
Colosseum is more than a data center.
It's the template for scalable, repeatable, sovereign AI factories. By combining cloud-scale density, local data control, and modular deployment, it signals the next phase of AI, where inference must be secure, fast, compliant, and distributed. This is not a one-off project; it's a reference point. iGenius is building a blueprint with Colosseum designed to be repeated globally, with Vertiv and NVIDIA aligned on future platform support, including DGX GB300 systems and beyond. The future of sovereign AI is no longer theoretical; it's being built now.