Latest news with #VoltagePark

Vultr joins ACTIVATE AI ecosystem to boost global GPU access

Techday NZ

30-07-2025

Vultr has joined the ACTIVATE AI Ecosystem Partner Program developed by Parallel Works, providing flexible and vendor-neutral access to GPU resources for AI workloads across hybrid and multi-cloud environments. The announcement is part of the launch of Parallel Works' ACTIVATE AI Partner Ecosystem, which aims to simplify access to next-generation AI infrastructure and facilitate the deployment of AI at scale. Through the integration of Vultr's high-performance GPUs into the ACTIVATE AI control plane, users can access GPU resources on demand, avoiding vendor lock-in and reducing operational overhead traditionally associated with managing multi-cloud AI infrastructure.

Meeting growing demand for AI resources

As AI applications become more widespread, research teams and enterprises are consistently seeking ways to obtain the flexible computing needed to support training and deployment of large-scale models. Matthew Shaxted, CEO at Parallel Works, highlighted the increasing demand for integrated compute resources: "The global surge in AI adoption – from large language models to domain-specific applications – is driving demand for more than just GPU access. Organisations need flexible, integrated ecosystems to build, train and deploy AI at scale. Traditional infrastructure is too rigid, and managing fragmented tools across clouds slows innovation. Our Partner Ecosystem, combined with ACTIVATE AI, provides unified access to specialized compute – enabling an open, scalable AI ecosystem that puts control back in the hands of the user."

The ACTIVATE AI Partner Ecosystem enables orchestration of AI workloads through a unified control plane that connects to a range of neocloud GPU providers. This connectivity supports container-based workflows, continuous integration and deployment pipelines, and distributed object storage. The service allows dynamic allocation and intelligent workload placement, supporting the collaborative needs of organisations with varying requirements.

Industry partners and technical integration

Alongside Vultr, the ecosystem brings onboard other GPU-as-a-service and aggregator partners such as Voltage Park, Canopy Wave, VALDI (a division of Storj), GPU Trader, and Shadeform. These collaborations expand GPU capacity and enable hybrid cloud bursting with distributed, vendor-neutral storage solutions.

Kevin Cochrane, Chief Marketing Officer at Vultr, commented on the collaboration, stating, "Our partnership with Parallel Works reflects a joint commitment to driving AI innovation through an open ecosystem - making it easier, faster, and more cost-effective for customers to deploy AI at scale worldwide."

Brandon Peccoralo, Vice President of Sales & Partnerships at Voltage Park, noted, "We are thrilled to be a part of the ACTIVATE AI Ecosystem Partner Program as it truly is connecting users to high-performance GPUs, giving teams access to the right compute for the right task. Together with the Parallel Works software platform and the Voltage Park scalable GPU cloud, customers can have the control and monitoring they want and moreover need when provisioning very expensive cloud resources. This will result in streamlined provisioning and a lower overall OPEX when leveraged properly."

Hai Vo-Dinh, Senior Director of Product at Canopy Wave, added, "We are built to overcome the challenges of deploying AI at scale. Our partnership with Parallel Works does that and more – seamlessly and on-demand."
Aggregators and infrastructure specialists

Jacob Willoughby, Chief Technology Officer at VALDI, a division of Storj, said, "Valdi simplifies access to high-performance compute across clouds and platforms. With Parallel Works, users can orchestrate AI workloads faster and with less friction. Together, we're enabling a more open, scalable AI infrastructure."

Ben Moore, Co-Founder and CEO at GPU Trader, stated, "We are dedicated to helping businesses unlock the power of HPC to drive their next breakthrough. Teaming with the ACTIVATE AI Ecosystem Partner Program helps organizations take advantage of the best of streamlined GPU and technology deployment."

Ed Goode, Co-Founder and CEO at Shadeform, commented, "Shadeform was founded on the belief that developers should be able to freely choose where they run their compute workloads. We are pleased to be part of the Parallel Works ACTIVATE AI ecosystem, working together to enable seamless, vendor-neutral access to the world's GPU supply chain."

Technology and storage partners

Jason Tuschen, CEO at QLAD, said, "Researchers and engineers running sensitive workloads need end-to-end trust in their Kubernetes environments. ACTIVATE is making that possible, and QLAD is proud to contribute workload-level protections that extend that trust to every mission-critical workflow."

Dean Beeler, Co-Founder and CTO at Juice, remarked, "Juice's solution provides fine-grained control over GPU resources, enabling dynamic allocation, improved job scheduling, and higher overall utilization. In combination with ACTIVATE AI, HPC teams can increase session density per GPU and accelerate compute-intensive workflows, leading to faster innovation and stronger ROI on existing infrastructure."

Jacob Willoughby, CTO at Storj, noted, "Storj provides high-performance, globally distributed storage that's both resilient and cost-efficient. Through ACTIVATE AI, teams gain seamless, vendor-neutral access to data across environments. It's storage built for modern AI - fast, scalable, and ready for what's next."

Channel and reseller involvement

Jim Kovach, Director of Business Development for HPC/AI at Pier Group, said, "It's an honor to be part of the Parallel Works ACTIVATE AI Partner Program. Driving innovative AI technologies such as ACTIVATE AI into the education and research market helps our clients meet and exceed their technology goals."

Michael Fedele, President and CEO at The Pinnacle Group, stated, "We take pride in providing our customers with technologies such as Parallel Works ACTIVATE AI that can not only seamlessly integrate with their environment but can efficiently scale as well."

Shozo Takahashi, President and CEO at Core Micro Systems, Japan, commented, "Our partnership with Parallel Works marks a pivotal step towards expanding access to powerful AI and HPC platforms in Japan. By combining Parallel Works ACTIVATE with our AI/HPC Appliance-based advanced micro and modular data center technologies, we are enabling organizations to harness the full potential of hybrid, edge, and multi-cloud computing with simplicity and speed."

The ACTIVATE AI Ecosystem also includes global providers such as AWS, Google Cloud, and Azure, and storage companies like Hammerspace, broadening the available infrastructure options for users needing a flexible, scalable, and vendor-neutral approach to AI and HPC deployments.
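
The "dynamic allocation and intelligent workload placement" described above can be pictured as a scheduler matching each job to one of the connected GPU providers. The Python sketch below is a conceptual illustration only, not part of the ACTIVATE AI platform or any partner API: the provider names, prices, availability figures, and the place_job helper are all hypothetical.

```python
# Conceptual sketch of cross-provider workload placement. All provider names,
# prices, and availability figures are hypothetical; this is not the ACTIVATE AI
# API or any partner's API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class GpuOffer:
    provider: str      # a neocloud or aggregator connected to the control plane
    gpu_type: str      # GPU model exposed by the provider
    hourly_usd: float  # assumed on-demand price per GPU-hour
    available: int     # GPUs currently free in this pool


@dataclass
class Job:
    name: str
    gpu_type: str
    gpus_needed: int


def place_job(job: Job, offers: list[GpuOffer]) -> Optional[GpuOffer]:
    """Pick the cheapest offer that satisfies the job's GPU type and count."""
    candidates = [o for o in offers
                  if o.gpu_type == job.gpu_type and o.available >= job.gpus_needed]
    return min(candidates, key=lambda o: o.hourly_usd, default=None)


if __name__ == "__main__":
    offers = [
        GpuOffer("provider-a", "H100", 2.95, 64),   # hypothetical figures
        GpuOffer("provider-b", "H100", 2.50, 16),
        GpuOffer("provider-c", "A100", 1.40, 128),
    ]
    job = Job("llm-finetune", gpu_type="H100", gpus_needed=8)
    chosen = place_job(job, offers)
    if chosen is not None:
        print(f"Placing {job.name} on {chosen.provider} at ${chosen.hourly_usd}/GPU-hr")
    else:
        print("No connected provider can satisfy this job right now")
```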

Voltage Park Addresses Kubernetes Complexity for AI Developers with New Managed Offering

Yahoo

04-06-2025

Managed K8s feature lets AI engineers and researchers focus on model training and deployment by abstracting Kubernetes complexity on high-performance GPU infrastructure.

SAN FRANCISCO, June 04, 2025--(BUSINESS WIRE)--Voltage Park, the company building the future of AI factories with world-class performance and service, today announced the launch of its managed Kubernetes service. This fully managed Kubernetes control plane solution is specifically designed to simplify and accelerate the deployment of containerized AI and machine learning workloads on Voltage Park's high-performance bare metal GPU clusters.

Workloads running on Voltage Park's high-performance bare metal GPU clusters benefit from a fully managed Kubernetes infrastructure. By offloading the operational overhead—including setup, security, patching, and monitoring—Voltage Park enables customers to focus their resources on building, training, and deploying cutting-edge models, rather than managing complex infrastructure. This launch marks a significant step in Voltage Park's mission to create a seamless AI factory, integrating optimized hardware with intelligent software to provide accessible, high-performance AI infrastructure.

This managed Kubernetes offering was developed in direct response to feedback from AI pioneers and ML engineers who require robust, production-ready environments without the steep learning curve or operational overhead of managing Kubernetes themselves. Saurabh Giri, CPTO at Voltage Park, shares, "Across the spectrum of AI infrastructure I've worked with – from vast, general-purpose clouds to bespoke, specialized systems – the challenge isn't just accessing compute, but unlocking its full potential with agility. The Voltage Park AI factory is our blueprint for this. Our managed Kubernetes service, a key pillar of the Voltage Park AI factory, is engineered to do just that. We streamline the complex orchestration of bare metal GPUs, so that AI teams can focus on rapidly building and deploying their workloads."

While Voltage Park handles the provisioning, updates, and health monitoring of the Kubernetes control plane, seamlessly integrated with bare metal clusters, AI/ML teams are able to:

  • Bypass the complexities of Kubernetes control plane setup, security patching, and ongoing maintenance.
  • Dedicate their expertise to developing, training, and deploying cutting-edge models.
  • Leverage the full power of Kubernetes for their GPU-accelerated applications without the prerequisite of deep Kubernetes expertise, fostering faster innovation cycles.

To accelerate readiness for AI workloads, Voltage Park's managed Kubernetes includes pre-configured, yet customizable, essential components on worker nodes:

  • NVIDIA GPU Operator: Ensures seamless NVIDIA driver management and device plugin operation for optimal GPU utilization.
  • Prometheus and Grafana: Provides a robust, out-of-the-box monitoring stack for real-time insights into cluster and application performance.
  • SentinelOne: Delivers enhanced security observability and threat detection for containerized environments.

These defaults are fully customizable, allowing teams to tailor the environment to their specific workflow and tooling preferences. The service is engineered to empower research institutions, AI startups, and enterprise AI labs working on demanding deep learning, model training, and high-performance computing workloads.
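
Once the managed control plane and the pre-installed NVIDIA GPU Operator are in place, running a GPU workload is ordinary Kubernetes usage. The minimal sketch below uses the official kubernetes Python client to submit a pod that requests one NVIDIA GPU; the container image, pod name, namespace, and kubeconfig handling are illustrative assumptions, not details of Voltage Park's service.

```python
# Minimal sketch: submit a one-GPU pod to a Kubernetes cluster whose worker nodes
# run the NVIDIA device plugin (as the GPU Operator provides). The image name,
# namespace, and pod name are placeholders, not Voltage Park specifics.
from kubernetes import client, config


def launch_gpu_pod(namespace: str = "default") -> None:
    # Reads the kubeconfig issued for the cluster (assumed to be at the default path).
    config.load_kube_config()

    container = client.V1Container(
        name="trainer",
        image="nvcr.io/nvidia/pytorch:24.05-py3",  # example CUDA-enabled image
        command=["python", "-c", "import torch; print(torch.cuda.device_count())"],
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1"},  # GPUs are requested via resource limits
        ),
    )
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )
    client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)
    print("Submitted pod gpu-smoke-test")


if __name__ == "__main__":
    launch_gpu_pod()
```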
Managed Kubernetes is currently tailored for optimal performance on bare metal GPU clusters, and Voltage Park is actively working to extend support to virtual machine environments in future iterations, offering even greater flexibility.

About Voltage Park

Voltage Park is your enterprise AI factory. We offer scalable compute power, on-demand and reserved bare metal AI infrastructure using NVIDIA GPUs, with world-class service, performance and value. Whether you need on-demand bursts or long-term reserve AI compute, we offer virtual machines and bare metal access with transparent pricing, leveraging the latest NVIDIA GPUs for high-performance, secure and reliable computing. With our top-tier support, we help power everyone from builders to enterprises to unlock AI's full potential — quickly, flexibly and without hidden costs. For more information visit or follow us on LinkedIn and X.

Press contact: Sammy Totah, Voltage Park, press@

Voltage Park Addresses Kubernetes Complexity for AI Developers with New Managed Offering

Business Wire

04-06-2025

SAN FRANCISCO--(BUSINESS WIRE)-- Voltage Park, the company building the future of AI factories with world-class performance and service, today announced the launch of its managed Kubernetes service. This fully managed Kubernetes control plane solution is specifically designed to simplify and accelerate the deployment of containerized AI and machine learning workloads on Voltage Park's high-performance bare metal GPU clusters.

Workloads running on Voltage Park's high-performance bare metal GPU clusters benefit from a fully managed Kubernetes infrastructure. By offloading the operational overhead—including setup, security, patching, and monitoring—Voltage Park enables customers to focus their resources on building, training, and deploying cutting-edge models, rather than managing complex infrastructure. This launch marks a significant step in Voltage Park's mission to create a seamless AI factory, integrating optimized hardware with intelligent software to provide accessible, high-performance AI infrastructure.

This managed Kubernetes offering was developed in direct response to feedback from AI pioneers and ML engineers who require robust, production-ready environments without the steep learning curve or operational overhead of managing Kubernetes themselves. Saurabh Giri, CPTO at Voltage Park, shares, 'Across the spectrum of AI infrastructure I've worked with – from vast, general-purpose clouds to bespoke, specialized systems – the challenge isn't just accessing compute, but unlocking its full potential with agility. The Voltage Park AI factory is our blueprint for this. Our managed Kubernetes service, a key pillar of the Voltage Park AI factory, is engineered to do just that. We streamline the complex orchestration of bare metal GPUs, so that AI teams can focus on rapidly building and deploying their workloads.'

While Voltage Park handles the provisioning, updates, and health monitoring of the Kubernetes control plane, seamlessly integrated with bare metal clusters, AI/ML teams are able to:

  • Bypass the complexities of Kubernetes control plane setup, security patching, and ongoing maintenance.
  • Dedicate their expertise to developing, training, and deploying cutting-edge models.
  • Leverage the full power of Kubernetes for their GPU-accelerated applications without the prerequisite of deep Kubernetes expertise, fostering faster innovation cycles.

To accelerate readiness for AI workloads, Voltage Park's managed Kubernetes includes pre-configured, yet customizable, essential components on worker nodes:

  • NVIDIA GPU Operator: Ensures seamless NVIDIA driver management and device plugin operation for optimal GPU utilization.
  • Prometheus and Grafana: Provides a robust, out-of-the-box monitoring stack for real-time insights into cluster and application performance.
  • SentinelOne: Delivers enhanced security observability and threat detection for containerized environments.

These defaults are fully customizable, allowing teams to tailor the environment to their specific workflow and tooling preferences. The service is engineered to empower research institutions, AI startups, and enterprise AI labs working on demanding deep learning, model training, and high-performance computing workloads.
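
The bundled Prometheus and Grafana stack can also be queried programmatically through Prometheus' standard HTTP API. The sketch below assumes a typical GPU Operator setup in which the DCGM exporter publishes the DCGM_FI_DEV_GPU_UTIL metric; the endpoint URL is a placeholder, and the exact metrics available depend on how the monitoring stack is configured.

```python
# Sketch: read per-GPU utilization from a Prometheus instance over its standard
# HTTP query API. The endpoint URL and the DCGM metric name are assumptions about
# a typical GPU Operator + dcgm-exporter setup, not Voltage Park specifics.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # placeholder endpoint


def print_gpu_utilization() -> None:
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": "avg by (gpu) (DCGM_FI_DEV_GPU_UTIL)"},
        timeout=10,
    )
    resp.raise_for_status()
    for series in resp.json()["data"]["result"]:
        gpu = series["metric"].get("gpu", "unknown")
        _, value = series["value"]  # Prometheus returns [timestamp, "value"]
        print(f"GPU {gpu}: {value}% utilization")


if __name__ == "__main__":
    print_gpu_utilization()
```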
Managed Kubernetes is currently tailored for optimal performance on bare metal GPU clusters, and Voltage Park is actively working to extend support to virtual machine environments in future iterations, offering even greater flexibility.

About Voltage Park

Voltage Park is your enterprise AI factory. We offer scalable compute power, on-demand and reserved bare metal AI infrastructure using NVIDIA GPUs, with world-class service, performance and value. Whether you need on-demand bursts or long-term reserve AI compute, we offer virtual machines and bare metal access with transparent pricing, leveraging the latest NVIDIA GPUs for high-performance, secure and reliable computing. With our top-tier support, we help power everyone from builders to enterprises to unlock AI's full potential — quickly, flexibly and without hidden costs. For more information visit or follow us on LinkedIn and X.

Voltage Park Appoints Former AWS Leader Saurabh Giri as Chief Product and Technology Officer

Business Wire

15-05-2025

SAN FRANCISCO--(BUSINESS WIRE)-- Voltage Park, the enterprise-grade AI factory company known for delivering world-class performance, value, and service, today announced the appointment of Saurabh Giri as its Chief Product and Technology Officer (CPTO). In this role, Saurabh will leverage his extensive experience in artificial intelligence to shape the company's strategic vision, define the product roadmap, and lead engineering. He will also serve on Voltage Park's board as an executive director.

To further strengthen its executive leadership team, Voltage Park also welcomed Jesica Church as Vice President of Marketing and Cameron Huang as Vice President of Finance. Together, these new leaders bring added momentum to Voltage Park's mission to make AI infrastructure accessible to all.

Saurabh brings decades of experience across business, product, and engineering roles, along with a proven track record of building and scaling high-performing teams. Previously, he led teams at Amazon Web Services (AWS) that built, launched, and operated Amazon Bedrock, Amazon's flagship platform for generative AI. Earlier in his career, Saurabh co-founded and served as CEO of a next-generation payments platform; co-founded an algorithmic trading firm specializing in market making and curve arbitrage in the futures markets; and advised boards and management teams on strategy, operations, and financial matters.

'The demand for AI infrastructure is accelerating rapidly, and Voltage Park is at the forefront as the AI factory for builders, startups, and enterprises,' said Saurabh. 'Joining this team is an exciting opportunity to innovate on the core of AI infrastructure — delivering the scale, flexibility, and performance that cutting-edge AI workloads require. I'm thrilled to help shape the future of how AI is developed and deployed.'

As Vice President of Marketing at Voltage Park, Jesica Church draws from her deep experience at high-growth, pre-IPO, and public companies to build and mentor diverse and high-performing global teams. In this dynamic market, she is focused on advancing Voltage Park's market presence and product leadership. Previously, she served in senior marketing roles at Nginx, F5, and LogicMonitor, where she helped scale marketing teams and operations in high-growth environments spanning application and security delivery to hybrid observability. Her extensive marketing and communications expertise will be integral to growing Voltage Park's brand and customer experience.

With more than a decade of experience across capital planning, go-to-market strategy, and financial diligence, Cameron Huang was selected to lead strategic finance initiatives spanning FP&A, infrastructure investments, fundraising, and M&A at Voltage Park. Before entering the realm of high-growth startups, Cameron began his career in investment banking at J.P. Morgan and Centerview Partners, where he advised clients in the consumer, retail, and technology sectors. Cameron was the first strategic finance hire at Chime, where he helped lead the finance function from Series C onward — supporting the company's fundraising through Series G and scaling operations toward IPO readiness. Most recently, Cameron led Finance at Eppo, guiding the company through its acquisition by Datadog.
'The additions of Saurabh, Jesica, and Cameron mark a pivotal moment in building out our executive bench,' said Ozan Kaya, CEO of Voltage Park. 'Their leadership strengthens our ability to scale Voltage Park into an enterprise-grade AI factory provider while staying true to our core mission — making advanced AI infrastructure accessible to everyone. With this expanded team, we're better positioned than ever to deliver comprehensive, high-performance AI cloud solutions and an unmatched customer experience.'

About Voltage Park

Voltage Park is your enterprise AI factory. We offer scalable compute power, on-demand and reserved bare metal AI infrastructure using NVIDIA GPUs, with world-class service, performance and value. Whether you need on-demand bursts or long-term reserve AI compute, we offer virtual machines and bare metal access with transparent pricing, leveraging the latest NVIDIA GPUs for high-performance, secure and reliable computing. With our top-tier support, we help power everyone from builders to enterprises to unlock AI's full potential — quickly, flexibly and without hidden costs. For more information visit or follow us on LinkedIn and X.
