
Latest news with #KAYTUS

KAYTUS Unveils Upgraded MotusAI to Accelerate LLM Deployment

Associated Press

2 days ago


Streamlined inference performance, tool compatibility, resource scheduling, and system stability to fast-track large AI model deployment.

SINGAPORE--(BUSINESS WIRE)--Jun 12, 2025--KAYTUS, a leading provider of end-to-end AI and liquid cooling solutions, today announced the release of the latest version of its MotusAI AI DevOps Platform at ISC High Performance 2025. The upgraded MotusAI delivers significant enhancements in large-model inference performance and offers broad compatibility with multiple open-source tools covering the full lifecycle of large models. Engineered for unified, dynamic resource scheduling, it markedly improves resource utilization and operational efficiency in large-scale AI model development and deployment. The release is set to further accelerate AI adoption and fuel business innovation across key sectors such as education, finance, energy, automotive, and manufacturing.

(Image: MotusAI Dashboard)

As large AI models become increasingly embedded in real-world applications, enterprises are deploying them at scale to generate tangible value across a wide range of sectors. Yet many organizations still face critical challenges in AI adoption, including prolonged deployment cycles, stringent stability requirements, fragmented open-source tool management, and low compute resource utilization. To address these pain points, KAYTUS has introduced the latest version of its MotusAI AI DevOps Platform, purpose-built to streamline AI deployment, enhance system stability, and optimize AI infrastructure efficiency for large-scale model operations.

Enhanced Inference Performance to Ensure Service Quality

Deploying AI inference services is a complex undertaking that spans service deployment, management, and continuous health monitoring. These tasks require stringent standards in model and service governance, performance tuning via acceleration frameworks, and long-term service stability, all of which typically demand substantial investments in manpower, time, and technical expertise.
The upgraded MotusAI delivers robust large-model deployment capabilities that pair full-stack visibility with high performance. By integrating optimized inference frameworks such as SGLang and vLLM, MotusAI provides high-performance distributed inference services that enterprises can deploy quickly and with confidence. Designed for large-parameter models, it uses intelligent resource and network-affinity scheduling to shorten time-to-launch while maximizing hardware utilization. Its built-in monitoring spans the full stack, from hardware and platforms to pods and services, offering automated fault diagnosis and rapid service recovery. MotusAI also scales inference workloads dynamically based on real-time usage and resource monitoring, further improving service stability.

Comprehensive Tool Support to Accelerate AI Adoption

As AI model technologies evolve rapidly, the supporting ecosystem of development tools continues to grow in complexity. Developers need a streamlined, universal platform to efficiently select, deploy, and operate these tools. The upgraded MotusAI supports a wide range of leading open-source tools, enabling enterprise users to configure and manage their model development environments on demand. With built-in tools such as LabelStudio, MotusAI accelerates data annotation and synchronization across diverse categories, improving data processing efficiency and expediting model development cycles.

MotusAI also offers an integrated toolchain for the entire AI model lifecycle: LabelStudio and OpenRefine for data annotation and governance, LLaMA-Factory for fine-tuning large models, Dify and Confluence for large-model application development, and Stable Diffusion for text-to-image generation. Together, these tools enable users to adopt large models quickly and boost development productivity at scale.
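MotusAI's own deployment manifests are not public, but the kind of vLLM-backed, dynamically scaled inference service described above is typically expressed as a Kubernetes Deployment. The sketch below is a hedged illustration under that assumption; the image tag, model name, service name, and GPU counts are examples, not MotusAI defaults.

```yaml
# Hypothetical sketch of a vLLM inference service; names and
# resource counts are illustrative assumptions, not MotusAI defaults.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-inference            # assumed service name
spec:
  replicas: 2                    # dynamic scaling would adjust this count
  selector:
    matchLabels: {app: llm-inference}
  template:
    metadata:
      labels: {app: llm-inference}
    spec:
      containers:
      - name: vllm
        image: vllm/vllm-openai:latest
        args: ["--model", "Qwen/Qwen2.5-7B-Instruct",   # example model
               "--tensor-parallel-size", "2"]           # shard across 2 GPUs
        ports:
        - containerPort: 8000    # OpenAI-compatible HTTP endpoint
        resources:
          limits:
            nvidia.com/gpu: 2    # requires the NVIDIA device plugin
```

Usage-driven autoscaling of the kind the release describes is commonly layered on top of such a Deployment with a HorizontalPodAutoscaler keyed to request-rate or GPU-utilization metrics.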
Hybrid Training-Inference Scheduling on the Same Node to Maximize Resource Efficiency

Efficient utilization of computing resources remains a critical priority for AI startups and small to mid-sized enterprises in the early stages of AI adoption. Traditional AI clusters typically allocate compute nodes separately for training and inference, limiting the flexibility and efficiency of resource scheduling across the two workload types. The upgraded MotusAI removes this limitation by enabling hybrid scheduling of training and inference workloads on a single node, allowing seamless integration and dynamic orchestration of diverse task types. Equipped with advanced GPU scheduling capabilities, MotusAI supports on-demand resource allocation, so users can manage GPU resources efficiently according to workload requirements. It also features multi-dimensional GPU scheduling, including fine-grained partitioning and support for Multi-Instance GPU (MIG), addressing a wide range of use cases across model development, debugging, and inference.

MotusAI's enhanced scheduler significantly outperforms community versions, delivering a 5× improvement in task throughput and a 5× reduction in latency for large-scale pod deployments. It enables rapid startup and environment readiness for hundreds of pods while supporting dynamic workload scaling and tidal scheduling for both training and inference. These capabilities enable seamless task orchestration across a wide range of real-world AI scenarios.

About KAYTUS

KAYTUS is a leading provider of end-to-end AI and liquid cooling solutions, delivering a diverse range of innovative, open, and eco-friendly products for cloud, AI, edge computing, and other emerging applications. With a customer-centric approach, KAYTUS is agile and responsive to user needs through its adaptable business model. Discover more at and follow us on LinkedIn and X.
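The fine-grained MIG partitioning mentioned in the scheduling section above is usually surfaced to a Kubernetes scheduler as named resource slices via the NVIDIA device plugin ("mixed" strategy). As a hedged sketch, a small debugging task might request one slice like this; the 1g.5gb profile is an A100 example, and the pod and image names are made up, since MotusAI's internal scheduler interface is not public.

```yaml
# Hypothetical sketch: requesting one MIG slice through the NVIDIA
# Kubernetes device plugin. Pod/image names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: debug-notebook                        # e.g. a development/debugging task
spec:
  containers:
  - name: work
    image: nvcr.io/nvidia/pytorch:24.01-py3   # example CUDA image
    resources:
      limits:
        nvidia.com/mig-1g.5gb: 1              # one fine-grained GPU partition
```

Because each slice is a distinct schedulable resource, small debugging pods and large training pods can land on the same physical GPU node, which is the resource-efficiency point the release is making.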
View source version on

CONTACT: Media Contacts [email protected]

KEYWORD: EUROPE SINGAPORE SOUTHEAST ASIA ASIA PACIFIC

INDUSTRY KEYWORD: APPS/APPLICATIONS TECHNOLOGY OTHER TECHNOLOGY SOFTWARE NETWORKS INTERNET HARDWARE DATA MANAGEMENT ARTIFICIAL INTELLIGENCE

SOURCE: KAYTUS

Copyright Business Wire 2025. PUB: 06/12/2025 07:11 AM / DISC: 06/12/2025 07:10 AM


KAYTUS Unveils KSManage V2.0, Quadrupling Data Center O&M Efficiency

Yahoo

14-04-2025


KSManage V2.0 delivers a one-stop intelligent data center solution, featuring centralized management of 5,000+ IT device models with one-click, fully automated batch configuration.

SINGAPORE, April 14, 2025--(BUSINESS WIRE)--KAYTUS, a leading provider of end-to-end AI server and liquid cooling solutions, has announced the release of KSManage V2.0, its next-generation data center management platform. The upgraded platform offers broad compatibility with over 5,000 mainstream IT device models, enabling seamless integration across diverse environments. With one-click, fully automated batch configuration, KSManage V2.0 boosts management efficiency by up to four times. Leveraging advanced AIOps capabilities, the platform achieves a fault diagnosis accuracy rate of over 98% and reduces energy consumption by 20%. These enhancements significantly optimize operations and maintenance (O&M) for scaled data centers, empowering sustainable and intelligent infrastructure management.

With the rapid advancement of cloud computing and AI applications, data centers have scaled at an unprecedented pace, from just over a hundred devices to tens of thousands. This explosive growth presents significant O&M challenges, particularly in managing vast arrays of heterogeneous servers, storage systems, and network equipment. KSManage is purpose-built to address these complexities, delivering intelligent and efficient data center O&M. It tackles key pain points such as the difficulty of managing diverse hardware, low operational efficiency, and inconsistent infrastructure performance. By ensuring reliable, streamlined, and intelligent infrastructure operations, KSManage enables enterprises to focus entirely on driving their core business innovation.
Centralized Management, All-in-One Integrated Platform

A major challenge in scaled data centers is managing heterogeneous devices across multiple vendors and models, each with its own management interfaces and protocols. While open-source tools offer basic functionality, their decentralized approach often leads to fragmented resource allocation and increased operational complexity. KSManage V2.0 addresses this with a unified, enterprise-grade platform designed to streamline O&M. It supports a wide range of IT devices from different vendors, offering compatibility with over 5,000 models of servers, storage systems, and network devices. Through standardized interfaces and protocols, KSManage V2.0 enables centralized, out-of-band management of heterogeneous infrastructure at scale, greatly simplifying operations while enhancing efficiency and control.

KSManage V2.0 also delivers significant upgrades in data center monitoring and management, with enhanced capabilities across health monitoring, performance tracking, inspection management, and network testing tools. These improvements enable granular, component-level health monitoring, a comprehensive view of performance metrics, and customizable inspection workflows, offering a more precise and intelligent monitoring experience. The platform supports both 2D and 3D global visualization, allowing users to monitor key resource metrics such as power consumption, temperature, and capacity in real time. This enhanced visibility empowers operators to proactively track infrastructure status and optimize management efficiency. In addition, KSManage V2.0 can generate customized visual analytics reports within minutes, simplifying data analysis and accelerating data-driven decision-making.

Fully Automated Batch Upgrades, Quadrupling Operational Efficiency

Low server configuration efficiency remains a major challenge in scaled data centers, where manual firmware upgrades are time-consuming, complex, and prone to human error.
To address this, KSManage V2.0 introduces one-click automated batch upgrades, significantly simplifying workflows and boosting O&M efficiency. Complementing this capability, KAYTUS has launched the KSManage Repo, a centralized firmware repository that hosts the latest updates for KAYTUS servers. After registering and entering device serial numbers (SNs), customers can connect to the official image repository to automatically detect and retrieve the most up-to-date firmware versions in real time. Leveraging both in-band and out-of-band communication channels, KAYTUS servers support full-stack firmware batch upgrades and automated configuration—including BMC, BIOS, CPLD, FRU, NICs, drives, and more—either online or via batch downloads. This automation ensures optimal device performance and delivers up to a 400% increase in maintenance efficiency.

AIOps for Enhanced Reliability and Energy Efficiency

Scaled data centers often face challenges related to infrastructure stability and excessive energy consumption. Manual monitoring lacks the responsiveness needed for real-time device analysis, while open-source management tools are frequently plagued by security risks, instability, and limited functionality, resulting in delayed fault detection, slow incident resolution, and potential business disruptions. Additionally, insufficient visibility into energy usage contributes to elevated power usage effectiveness (PUE). KSManage V2.0 addresses these issues through built-in AIOps capabilities, integrating intelligent operations throughout the entire lifecycle of fault prediction, alarm reporting, and diagnostics. The platform not only improves fault response times and system stability but also provides real-time energy consumption tracking, including carbon emissions monitoring. This enables data centers to optimize energy efficiency and supports sustainable, eco-friendly operations aligned with green IT initiatives.

Intelligent Prediction and Rapid Diagnosis
KSManage V2.0 takes predictive maintenance and fault diagnosis to the next level with advanced AI-powered capabilities. It supports drive failure prediction up to 15 days in advance, while its memory failure prediction accuracy has improved by 30%. In the event of a fault, AI algorithms are leveraged for both performance and capacity prediction, enabling proactive and informed decision-making. Designed for scaled data centers, KSManage V2.0 can process billions of real-time O&M data points within seconds and respond to thousands of alarms in under five seconds. It employs an innovative ETF (Event-Trigger-Free) threshold-free alarm algorithm, achieving an alarm accuracy rate of 95.26%. For diagnostics, KSManage V2.0 actively and passively monitors metric data and collects logs to quickly detect and accurately pinpoint faults, delivering a diagnostic accuracy rate exceeding 98%. These capabilities significantly improve operational resilience and reduce downtime across complex IT environments.

Comprehensive Energy Management for Sustainable Operations

KSManage V2.0 delivers robust energy consumption management across a wide range of data center infrastructure, including AI and general-purpose servers, storage systems, network equipment, cooling units, lighting, and power supply devices. Tailored to diverse business needs, it offers a variety of power consumption control strategies, enabling dynamic workload-based energy adjustments and visual tracking of carbon emissions. By intelligently managing workloads to maintain peak efficiency and avoid no-load or overload conditions, the platform reduces overall energy consumption by 15% to 20%. In addition, KSManage V2.0 provides predictive energy analytics based on historical data trends, enabling data centers to proactively plan operational and energy strategies.
This minimizes the risk of under- or over-supply of energy and supports sustainable, eco-friendly operations aligned with long-term carbon reduction goals.

KSManage has been successfully deployed across a wide range of industries, including cloud service providers (CSPs), finance, and telecommunications. In one notable case involving a leading e-commerce platform in Turkey, KSManage effectively addressed critical operational challenges such as inefficient firmware upgrades, error-prone configurations, and slow OS deployments. By automating the management of over 3,000 servers, KSManage reduced firmware upgrade time by 70%, increased configuration accuracy to 99.8%, and enabled the daily deployment of up to 500 servers. These improvements translated into an 80% boost in overall O&M efficiency and a 40% reduction in hardware failure rates.

About KAYTUS

KAYTUS is a leading provider of end-to-end AI and liquid cooling solutions, delivering a diverse range of innovative, open, and eco-friendly products for cloud, AI, edge computing, and other emerging applications. With a customer-centric approach, KAYTUS is agile and responsive to user needs through its adaptable business model. Discover more at and follow us on LinkedIn and X.

View source version on

Contacts: Media contact media@
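KSManage's protocol stack is not published, but vendor-neutral, out-of-band firmware management of the kind described in this release is commonly built on the DMTF Redfish API, which standardizes BMC endpoints across server vendors. As a hedged sketch under that assumption, a batch upgrade amounts to fanning one Redfish SimpleUpdate request out to every BMC; the hosts and image URL below are made up for illustration.

```python
import json

# Standard Redfish action path for firmware updates (DMTF spec).
# Everything else here (BMC addresses, image URI) is hypothetical.
SIMPLE_UPDATE = "/redfish/v1/UpdateService/Actions/UpdateService.SimpleUpdate"

def build_update_request(bmc_host: str, image_uri: str) -> tuple[str, str]:
    """Return (url, json_body) for one out-of-band firmware-update call."""
    url = f"https://{bmc_host}{SIMPLE_UPDATE}"
    body = json.dumps({"ImageURI": image_uri})  # firmware image to flash
    return url, body

# A batch upgrade is this request fanned out over every managed BMC;
# a real controller would POST `body` to `url` with an HTTP client.
for host in ("10.0.0.11", "10.0.0.12"):  # made-up BMC addresses
    url, body = build_update_request(host, "http://repo.local/bmc-fw.bin")
    print(url)
```

Because the request goes to the BMC rather than the host OS, it works even when the server is powered off, which is what makes out-of-band batch configuration of thousands of machines practical.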
