Prediction: This Monster Artificial Intelligence (AI) Software Stock Could Be Wall Street's Next Stock-Split Candidate (Hint: It's Not Palantir)

Yahoo · a day ago

While shares of Palantir continue to crush the market, another AI software stock, ServiceNow, has underperformed so far this year.
Shares of ServiceNow have climbed considerably over the last year, and its $1,000 stock price may be turning investors away -- making now an interesting time for management to consider a stock split.
ServiceNow is also benefiting from strong AI tailwinds, and the company's premium valuation appears warranted.
As of the closing bell on June 6, shares of enterprise software darling Palantir Technologies have gained 69% on the year, making it the top-performing stock in the Nasdaq-100 index.
While Palantir appears to be on an unstoppable run, smart investors understand that there are other opportunities at the intersection of artificial intelligence (AI) and software.
One AI software company that has become overshadowed by Palantir's rise is ServiceNow (NYSE: NOW), a provider of various cloud-based workplace management solutions and solutions for information technology (IT) professionals.
With a share price of over $1,000 as of this writing, ServiceNow is a potential candidate for a stock split in the near future.
Let's talk about why that is, and whether the stock is a good buy right now.
From time to time, a company may choose to split its stock, increasing its share count while decreasing its share price in proportion. In a 10-for-1 stock split, the company's share price decreases by a factor of 10 while its outstanding shares rise tenfold. Therefore the market cap remains unchanged.
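To make the math concrete, here's a quick Python sketch with hypothetical numbers (the share count is illustrative, not ServiceNow's actual figure):

```python
# Illustrative arithmetic only: a ratio-for-1 split divides the share price and
# multiplies the share count, leaving market capitalization unchanged.
def apply_split(price: float, shares: float, ratio: int) -> tuple[float, float]:
    """Return (new_price, new_shares) after a ratio-for-1 stock split."""
    return price / ratio, shares * ratio

price, shares = 1_000.00, 200_000_000  # hypothetical pre-split values
new_price, new_shares = apply_split(price, shares, ratio=10)

print(f"Before: ${price * shares:,.0f}")          # Before: $200,000,000,000
print(f"After:  ${new_price * new_shares:,.0f}")  # After:  $200,000,000,000
```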
Over the last 12 months, ServiceNow's stock price has climbed by 46%, dwarfing the 12% and 14% gains of the S&P 500 and Nasdaq Composite indexes, respectively.
Unfortunately, 2025 has been a different story so far. As of the closing bell on June 6, shares of ServiceNow are down by 3% on the year. Admittedly, this is a little peculiar, as software-as-a-service (SaaS) businesses are relatively insulated from tariffs, which have been the biggest drain on the stock market this year.
ServiceNow may have underperformed this year in part because of analyst suspicions that the company's public sector business could be at risk from the budget-cutting government project known as the Department of Government Efficiency (DOGE).
If investors have been panic-selling ServiceNow over DOGE concerns, it's ironic, because Palantir -- which derives more than half of its revenue from government contracts -- hasn't witnessed a similar dynamic.
My thinking is as follows: With a share price north of $1,000, ServiceNow appears more expensive than other enterprise software stocks such as Palantir, Salesforce, and SAP.
Oftentimes, a company will decide to split its stock in an effort to reignite interest from investors. The lower share price can sometimes lead to a resurgence in buying.
Considering ServiceNow has never done a stock split and the share price has underwhelmed compared to its peers this year, now could be an interesting time for management to consider a split.
Smart investors understand that the share price alone doesn't dictate the value of a company. The chart below compares ServiceNow's price-to-sales ratio (P/S) to those of some of its leading peers. (I excluded Palantir from this cohort because the company's P/S of 102 makes it a clear outlier.)
I think it's reasonable to say that ServiceNow is indeed a pricey stock, as its P/S of 18.8 is the highest in this group and far ahead of most.
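If you want to sanity-check a multiple like that yourself, price-to-sales is simply market capitalization divided by trailing-12-month revenue. A quick sketch with hypothetical inputs, chosen as round placeholders that land near the 18.8 cited above rather than as reported figures:

```python
# P/S = market cap / trailing-12-month revenue. Inputs below are hypothetical
# placeholders, not ServiceNow's reported numbers.
def price_to_sales(share_price: float, shares: float, ttm_revenue: float) -> float:
    return (share_price * shares) / ttm_revenue

ps = price_to_sales(share_price=1_000.0,
                    shares=207e6,       # hypothetical share count
                    ttm_revenue=11e9)   # hypothetical trailing revenue
print(f"P/S = {ps:.1f}")  # P/S = 18.8
```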
The question investors should consider is whether ServiceNow deserves its premium. I think the answer is yes.
Per the company's first-quarter earnings, ServiceNow's AI-powered services continue to drive demand for its software, as shown by healthy increases in remaining performance obligations. In addition, I see management's current guidance as robust overall, given the macroeconomic uncertainty and the competitive market for AI-powered software solutions.
Whether the company performs a stock split or not, I see ServiceNow as a solid long-term buy-and-hold opportunity.
Before you buy stock in ServiceNow, consider this:
The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and ServiceNow wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years.
Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you'd have $660,341!* Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you'd have $874,192!*
Now, it's worth noting Stock Advisor's total average return is 999% — a market-crushing outperformance compared to 173% for the S&P 500. Don't miss out on the latest top 10 list, available when you join Stock Advisor.
See the 10 stocks »
*Stock Advisor returns as of June 9, 2025
Adam Spatacco has positions in Palantir Technologies. The Motley Fool has positions in and recommends Adobe, Atlassian, Palantir Technologies, Salesforce, ServiceNow, Snowflake, and Workday. The Motley Fool has a disclosure policy.
Prediction: This Monster Artificial Intelligence (AI) Software Stock Could Be Wall Street's Next Stock-Split Candidate (Hint: It's Not Palantir) was originally published by The Motley Fool


Related Articles

KAYTUS Unveils Upgraded MotusAI to Accelerate LLM Deployment
Associated Press · 12 minutes ago

Streamlined inference performance, tool compatibility, resource scheduling, and system stability to fast-track large AI model deployment.

SINGAPORE--(BUSINESS WIRE)--June 12, 2025--KAYTUS, a leading provider of end-to-end AI and liquid cooling solutions, today announced the release of the latest version of its MotusAI AI DevOps Platform at ISC High Performance 2025. The upgraded MotusAI delivers significant enhancements in large-model inference performance and offers broad compatibility with multiple open-source tools covering the full lifecycle of large models. Engineered for unified and dynamic resource scheduling, it dramatically improves resource utilization and operational efficiency in large-scale AI model development and deployment. This release is set to further accelerate AI adoption and fuel business innovation across key sectors such as education, finance, energy, automotive, and manufacturing.

(Image: MotusAI Dashboard)

As large AI models become increasingly embedded in real-world applications, enterprises are deploying them at scale to generate tangible value across a wide range of sectors. Yet many organizations still face critical challenges in AI adoption, including prolonged deployment cycles, stringent stability requirements, fragmented open-source tool management, and low compute resource utilization. To address these pain points, KAYTUS has introduced the latest version of its MotusAI AI DevOps Platform, purpose-built to streamline AI deployment, enhance system stability, and optimize AI infrastructure efficiency for large-scale model operations.

Enhanced Inference Performance to Ensure Service Quality

Deploying AI inference services is a complex undertaking that involves service deployment, management, and continuous health monitoring. These tasks demand stringent standards in model and service governance, performance tuning via acceleration frameworks, and long-term service stability, all of which typically require substantial investments in manpower, time, and technical expertise.

The upgraded MotusAI delivers robust large-model deployment capabilities. By integrating optimized inference frameworks such as SGLang and vLLM, it provides high-performance, distributed inference services that enterprises can deploy quickly and with confidence. Designed to support large-parameter models, MotusAI leverages intelligent resource- and network-affinity scheduling to accelerate time-to-launch while maximizing hardware utilization. Its built-in monitoring spans the full stack, from hardware and platforms to pods and services, offering automated fault diagnosis and rapid service recovery. MotusAI also scales inference workloads dynamically based on real-time usage and resource monitoring, further improving service stability.
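MotusAI's own interfaces aren't public, but the inference frameworks it integrates, vLLM and SGLang, are open source. As a rough illustration of the kind of service the release describes, here is a minimal batched-inference sketch using vLLM's offline Python API; the model name and sampling settings are arbitrary assumptions, not MotusAI configuration:

```python
# A minimal vLLM sketch (not MotusAI's API): batched offline inference with an
# open-weight model. Model id and sampling settings are illustrative assumptions.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")  # any Hugging Face model id works here
params = SamplingParams(temperature=0.7, max_tokens=128)

prompts = ["Explain GPU-affinity scheduling in one sentence."]
for request_output in llm.generate(prompts, params):
    print(request_output.outputs[0].text)
```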
Comprehensive Tool Support to Accelerate AI Adoption

As AI model technologies evolve rapidly, the supporting ecosystem of development tools continues to grow in complexity, and developers need a streamlined, universal platform to select, deploy, and operate those tools efficiently. The upgraded MotusAI supports a wide range of leading open-source tools, enabling enterprise users to configure and manage model development environments on demand. With built-in tools such as LabelStudio, MotusAI accelerates data annotation and synchronization across diverse categories, improving data processing efficiency and shortening model development cycles.

MotusAI also offers an integrated toolchain for the entire AI model lifecycle: LabelStudio and OpenRefine for data annotation and governance, LLaMA-Factory for fine-tuning large models, Dify and Confluence for large-model application development, and Stable Diffusion for text-to-image generation. Together, these tools help users adopt large models quickly and boost development productivity at scale.

Hybrid Training-Inference Scheduling on the Same Node to Maximize Resource Efficiency

Efficient utilization of computing resources remains a critical priority for AI startups and small to mid-sized enterprises in the early stages of AI adoption. Traditional AI clusters allocate compute nodes separately for training and inference tasks, limiting the flexibility and efficiency of resource scheduling across the two workload types. The upgraded MotusAI removes that limitation by enabling hybrid scheduling of training and inference workloads on a single node, allowing seamless integration and dynamic orchestration of diverse task types.

Equipped with advanced GPU scheduling capabilities, MotusAI supports on-demand resource allocation, letting users manage GPU resources efficiently based on workload requirements. It also features multi-dimensional GPU scheduling, including fine-grained partitioning and support for Multi-Instance GPU (MIG), addressing use cases across model development, debugging, and inference. According to KAYTUS, the enhanced scheduler significantly outperforms community versions, delivering a 5× improvement in task throughput and a 5× reduction in latency for large-scale pod deployments, while enabling rapid startup and environment readiness for hundreds of pods and supporting dynamic workload scaling and tidal scheduling for both training and inference. These capabilities enable seamless task orchestration across a wide range of real-world AI scenarios.
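The fine-grained partitioning mentioned above refers to NVIDIA's Multi-Instance GPU feature, which carves one physical GPU into isolated slices. MotusAI's scheduler itself is proprietary, but the underlying idea can be sketched on a plain Kubernetes cluster with the NVIDIA device plugin; the MIG profile name, container image, and namespace below are assumptions for illustration, not MotusAI configuration:

```python
# A generic Kubernetes sketch (not MotusAI's scheduler): request a single MIG
# slice for an inference pod. Assumes the NVIDIA device plugin advertises MIG
# profiles as extended resources such as "nvidia.com/mig-1g.5gb" (the actual
# names depend on the GPU model and how MIG is configured).
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="mig-inference-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="inference",
                image="nvcr.io/nvidia/pytorch:24.05-py3",  # assumed CUDA-enabled image
                command=["python", "-c",
                         "import torch; print(torch.cuda.get_device_name(0))"],
                resources=client.V1ResourceRequirements(
                    # One MIG slice instead of a whole GPU: this is what lets
                    # several small workloads share a single physical device.
                    limits={"nvidia.com/mig-1g.5gb": "1"}
                ),
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```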

About KAYTUS

KAYTUS is a leading provider of end-to-end AI and liquid cooling solutions, delivering a diverse range of innovative, open, and eco-friendly products for cloud, AI, edge computing, and other emerging applications. With a customer-centric approach, KAYTUS is agile and responsive to user needs through its adaptable business model. Discover more at and follow us on LinkedIn and X.
