Latest news with #DGXGB200


Mid East Info
12-06-2025
- Business
- Mid East Info
DeepL first to deploy NVIDIA DGX SuperPOD with DGX GB200 systems in Europe, advancing Language AI with powerful generative features and enhanced user experience
DeepL's Language AI platform can now translate the entire web into another language in just 18.5 days – down from 194 days

Paris, France – 11th June 2025 – DeepL, a leading global Language AI company, today announced the deployment of an NVIDIA DGX SuperPOD with DGX GB200 systems. This transformative computing platform will enable DeepL to translate the entirety of the internet – a task that currently takes 194 days of nonstop processing – in just over 18 days, marking a significant leap forward in DeepL's research and product capabilities. This is the first deployment of DGX SuperPOD with DGX GB200 systems in Europe, and the system is now operational at DeepL's partner EcoDataCenter in Sweden.

Driving Research and Innovation

The new cluster will enhance DeepL's research capabilities, unlocking powerful generative features that will allow the Language AI platform to expand its product offerings significantly. With this advanced infrastructure, DeepL will approach model training in an entirely new way, paving the way for a more interactive experience for its users. In the short term, users can expect immediate enhancements, including increased quality, speed and nuance in translations, along with greater interactivity and the introduction of more generative AI features. Looking to the future, multimodal models will become the standard at DeepL. The long-term vision includes further exploration of generative capabilities and an increased focus on personalization options, ensuring that every user's experience is tailored and unique.

Technical Advancements

This deployment will provide the additional computing power needed to train new models and develop innovative features for DeepL's Language AI platform. NVIDIA DGX SuperPOD with DGX GB200 systems, with its liquid-cooled, rack-scale design and scalability to tens of thousands of GPUs, will enable DeepL to run the high-performance AI models essential for advanced generative applications. This is DeepL's third deployment of an NVIDIA DGX SuperPOD, and it surpasses the capabilities of DeepL Mercury, the company's previous flagship supercomputer.

'At DeepL, we take pride in our unwavering commitment to research and development, which has consistently allowed us to deliver solutions that outshine our competitors. This latest deployment further cements our position as a leader in the Language AI space,' said Jarek Kutylowski, CEO and Founder of DeepL. 'By equipping our research infrastructure with the latest technology, we not only enhance our existing offering but also explore exciting new products. The pace of innovation in AI is faster than ever, and integrating these advancements into our tech stack is essential for our continued growth.'

Delivering Extraordinary Translation Quality

The deployment of these new racks will make DeepL faster and more accurate than ever before. Standout capabilities of the new clusters include:
- Translating the entire web into another language, which currently takes 194 days of non-stop processing, will now be achievable in just 18.5 days.
- The time required to translate the Oxford English Dictionary into another language will drop from 39 seconds to two seconds.
- Translating Marcel Proust's In Search of Lost Time, the world's longest book, will be reduced from 0.95 seconds to just 0.09 seconds.
- Overall, the new clusters will deliver 30 times the text output compared to previous capabilities.
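For a rough sense of scale, the per-task speedups implied by the figures above can be computed as simple ratios. The short Python sketch below does only that: the numbers are the ones quoted in this release, the ratios are back-of-the-envelope arithmetic rather than DeepL benchmarks, and the separate "30 times" figure refers to aggregate text output rather than single-task latency.

    # Back-of-the-envelope speedups implied by the figures quoted in the release.
    # Task names and before/after times come from the press release; within each
    # task the two times share a unit, so the ratio is unit-independent.
    tasks = {
        "entire web (days)": (194.0, 18.5),
        "Oxford English Dictionary (s)": (39.0, 2.0),
        "In Search of Lost Time (s)": (0.95, 0.09),
    }

    for task, (before, after) in tasks.items():
        print(f"{task:>32}: about {before / after:.1f}x faster")

    # Prints roughly 10.5x, 19.5x and 10.6x: per-task latency gains, distinct
    # from the "30 times the text output" throughput figure quoted above.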
'Europe needs robust AI deployments to maintain its competitive edge, drive innovation, and address complex challenges across industries,' said Charlie Boyle, Vice President of DGX systems at NVIDIA. 'By harnessing the performance and efficiency of our latest AI infrastructure, DeepL is poised to accelerate breakthroughs in language AI and deliver transformative new experiences for users across the continent and beyond.'

About DeepL

DeepL is on a mission to break down language barriers for businesses everywhere. Over 200,000 businesses and governments and millions of individuals across 228 global markets trust DeepL's Language AI platform for human-like translation in both written and spoken formats, as well as natural, improved writing. Designed with enterprise security in mind, companies around the world leverage DeepL's AI solutions, which are specifically tuned for language, to transform business communications, expand markets and improve productivity. Founded in 2017 by CEO Jaroslaw (Jarek) Kutylowski, DeepL today has over 1,000 passionate employees and is supported by world-renowned investors including Benchmark, IVP and Index Ventures.


Korea Herald
11-06-2025
- Business
- Korea Herald
DeepL first to deploy NVIDIA DGX SuperPOD with DGX GB200 systems in Europe, advancing Language AI with powerful generative features and enhanced user experience


Associated Press
12-03-2025
- Business
- Associated Press
Block Becomes First Company in North America to Deploy the Latest NVIDIA GB200 Systems for Frontier Models
OAKLAND, Calif.--(BUSINESS WIRE)--Mar 12, 2025-- Today, Block, Inc. (NYSE: XYZ) announced it will be the first company in North America to deploy the NVIDIA DGX SuperPOD with DGX GB200 systems. Upon deployment at an Equinix data center, this new infrastructure will be used in the research and training of open source generative AI models with novel capabilities in underexplored areas.

'The industry, and the world, is undergoing a seismic shift with adopting AI tools. At Block, we think it's essential not only to apply AI to existing problems, but also to explore, learn, and build in the open so that we can advance the frontier of AI in a way that truly levels the playing field for our customers and community,' said Dhanji R. Prasanna, CTO of Block. 'We're excited to deploy an NVIDIA Grace Blackwell DGX SuperPOD and start exploring novel solutions for our customers. We're committed to an open source approach, sharing our learnings and results along the way.'

Block's AI research has previously worked on deepfake detection and attracted widespread attention for one of the first examples in the world of hyper-realistic generated audio. 'Block is a technology company first, and engineering is at the heart of everything we do. Our past work on generative speech models means Block is well-positioned for innovation surrounding Gen AI models, and all possible applications that come with it,' said Prasanna.

The Grace Blackwell-powered NVIDIA DGX SuperPOD is purpose-built for state-of-the-art generative AI models, bringing advanced capabilities for training and inference. These systems address key AI challenges, enabling larger and more advanced models to be trained, with faster innovation and greater capabilities for AI as a result.

'As AI models grow in complexity and scale, businesses need powerful infrastructure that can match the pace of innovation,' said Charlie Boyle, Vice President, DGX platforms, NVIDIA. 'With NVIDIA DGX GB200 systems, Block engineering and research teams can develop frontier open source AI models that can tackle complex, real-world challenges with state-of-the-art AI supercomputing.'

A key factor in selecting the right AI cloud partner for Block, and in proving out hypotheses before scaling, was the ability to access hundreds of interconnected NVIDIA GPUs for a short period of time, supported by ML engineers: Lambda 1-Click Clusters. These GPU clusters are now available with NVIDIA Blackwell.

The new DGX SuperPOD will be deployed at one of Equinix's AI-ready data centers. These purpose-built, globally interconnected facilities provide a unique environment of data privacy and sovereignty compliance, high performance, flexibility and low-latency edge connectivity to thousands of ecosystem partners and clouds.

'Frontier models represent the cutting edge of artificial intelligence technology, pushing the boundaries of what AI can achieve, and they require the latest in AI chips – like NVIDIA's new DGX SuperPOD,' said Jon Lin, Chief Business Officer at Equinix. 'By deploying at Equinix's neutral, cloud-adjacent platform, companies like Block can unlock expanded compute scale and flexibility. This enables the customization of AI solutions with a choice of infrastructure, cloud, models and cooling at our neutral exchange.'

Adopting NVIDIA's latest AI infrastructure is a fast follow from Block's launch of codename goose, an open source, interoperable AI agent framework that enables users to connect large language models (LLMs) to real-world actions.
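As a rough illustration of the general pattern such an agent framework implements (the model proposes an action, the framework runs it against a registered tool, and the observation is fed back into the conversation), here is a minimal, generic sketch in Python. It is not goose's actual API: the propose_action stub and the single shell tool are hypothetical placeholders standing in for a real LLM call and a real tool registry.

    # Generic LLM agent loop: the model proposes an action, the framework executes
    # it against a real tool, and the observation is appended to the history.
    # NOT goose's actual API; propose_action and the tool set are placeholders.
    import subprocess

    def run_shell(command: str) -> str:
        """Example 'real-world action': run a shell command and capture its output."""
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        return result.stdout + result.stderr

    TOOLS = {"shell": run_shell}  # a real framework registers many interoperable tools

    def propose_action(history: list[str]) -> dict:
        """Stand-in for an LLM call: a real agent would send the history to a model
        and parse a structured tool call from its reply."""
        if len(history) == 1:  # only the task so far, so take one action
            return {"tool": "shell", "input": "echo hello from the agent loop"}
        return {"tool": None, "input": None}  # one observation is enough here

    def agent_loop(task: str, max_steps: int = 5) -> list[str]:
        history = [f"task: {task}"]
        for _ in range(max_steps):
            action = propose_action(history)
            if action["tool"] is None:
                break
            observation = TOOLS[action["tool"]](action["input"])
            history.append(f"{action['tool']} -> {observation.strip()}")
        return history

    if __name__ == "__main__":
        for line in agent_loop("say hello"):
            print(line)

Production agent frameworks layer structured tool schemas, permissioning and multi-step planning on top of this skeleton, but the underlying control flow is essentially the same.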
The framework's first use cases are related to software engineering, but developers both within Block and the broader open source community are exploring non-engineering use cases as well.

About Block

Block, Inc. (NYSE: XYZ) builds technology to increase access to the global economy. Each of our brands unlocks different aspects of the economy for more people. Square makes commerce and financial services accessible to sellers. Cash App is the easy way to spend, send, and store money. Afterpay is transforming the way customers manage their spending over time. TIDAL is a music platform that empowers artists to thrive as entrepreneurs. Bitkey is a simple self-custody wallet built for bitcoin. Proto is a suite of bitcoin mining products and services. Together, we're helping build a financial system that is open to everyone.

SOURCE: Block, Inc.