Latest news with #RedHat
Yahoo
a day ago
- Business
- Yahoo
Meta Platforms (NasdaqGS:META) Announces US$0.53 Dividend for Shareholders
Meta Platforms recently declared a quarterly cash dividend of $0.53 per share, reflecting its continued commitment to shareholder value. Over the last month, the company's stock saw a notable price increase of 16%, possibly influenced by this announcement. Despite broader market uncertainties, Meta's robust earnings growth and positive quarterly revenue guidance likely added weight to this upward trend, and the company's collaboration with Red Hat on generative AI technology might have further improved investor sentiment. While the market, including the Nasdaq, experienced gains, Meta's stronger rise indicates specific investor confidence in its strategic moves.

The dividend announcement may reinforce investor confidence in the company's ongoing commitment to shareholder value. Meanwhile, the focus on AI and improved ad targeting holds the potential to bolster revenue by enhancing digital engagement; AI and business messaging, in particular, could significantly shape Meta's growth prospects.

Reflecting on longer-term performance, Meta's total return of over 225.9% in the past three years exemplifies its significant growth trajectory, far outpacing the US market and industry, both of which saw lower returns over the past year. As Meta expands its commerce capabilities through WhatsApp and Messenger, these developments might catalyze further revenue growth, in line with analysts' forecasts of revenue rising at an annual rate of 12.7% over the next few years. However, increased AI investments and operational challenges in Europe pose potential risks to current profit margins.

With Meta trading at US$587.31, the share price sits below the consensus analyst price target of US$703.89, suggesting further upside potential. That valuation, however, hinges on achieving projected earnings of US$86 billion by 2028. Investors should weigh these factors critically, considering the persistent challenges faced by Reality Labs and the potential impact of regulatory adjustments in Europe. As these elements unfold, they will determine the accuracy of future earnings forecasts and the alignment with the consensus price target.

This article by Simply Wall St is general in nature. We provide commentary based on historical data and analyst forecasts only, using an unbiased methodology, and our articles are not intended to be financial advice. It does not constitute a recommendation to buy or sell any stock, and does not take account of your objectives or financial situation. We aim to bring you long-term focused analysis driven by fundamental data. Note that our analysis may not factor in the latest price-sensitive company announcements or qualitative material. Simply Wall St has no position in any stocks mentioned. Companies discussed in this article include NasdaqGS:META. This article was originally published by Simply Wall St.


Business Insider
2 days ago
- Business
- Business Insider
‘Don't Be Afraid to Buy IBM Stock Right Now,' Says Five-Star Analyst
Five-star Bank of America analyst Wamsi Mohan kept a Buy rating on IBM (IBM) and set a price target of $270, saying the tech company is a safe investment with expected growth ahead. Indeed, he believes that IBM will grow its revenue and cash flow, which could help fund more mergers and acquisitions in the future. In his note, Mohan focused on transaction processing (TP) software, a key part of IBM's mainframe systems, which has not only stabilized but is now growing again, even in today's cloud-focused tech world.

Mohan explained that TP software makes up about one-third of IBM's total software revenue, earning between $5 billion and $7 billion per year, with most of this coming from IBM Z mainframe systems. Growth in this area is being driven by new pricing models, technological improvements, and high demand for reliable systems used in critical business operations. Mohan also pointed out that many IBM Z customers are increasing their usage to handle more digital transactions, which has boosted IBM's recurring software revenue.

Interestingly, while Red Hat has been a major driver of growth for IBM, TP software is also becoming more important. In fact, IBM expects this part of the business to grow by mid-single digits, while the total software segment is expected to grow by about 10%. It is worth noting that IBM has added features like watsonx AI and switched to more flexible subscription pricing, which has helped more customers sign on and stay longer. As a result, these strategies are helping IBM grow both usage and revenue by making the software a deeper part of customers' businesses, according to Mohan.

What Is the Target Price for IBM?

Turning to Wall Street, analysts have a Moderate Buy consensus rating on IBM stock based on eight Buys, five Holds, and two Sells assigned in the past three months. The average IBM price target of $260.62 per share implies 0.7% upside potential.


Globe and Mail
3 days ago
- Business
- Globe and Mail
AMD's AI Stack Gets a Boost with Red Hat — Can It Finally Challenge Nvidia's Dominance?
Advanced Micro Devices (AMD) is no longer just chasing Nvidia (NVDA). Instead, it's quietly building a full-stack AI ecosystem that could change the game. In that regard, TipRanks' AI Analyst seems to like what it sees, giving AMD an 'Outperform' rating and a score of 81, backed by strong growth signals in its data center and AI business.

AMD and Red Hat Join Forces

The latest boost comes from a major new partnership with Red Hat, announced at the Red Hat Summit on May 20. AMD's Instinct MI300X GPUs are now fully supported on Red Hat OpenShift AI, giving enterprise customers a powerful, open-source alternative to Nvidia's AI dominance. OpenShift AI is a plug-and-play cloud platform built for AI and machine learning, giving data scientists the freedom to build, test, and deploy models fast, while still giving IT teams the control and security they need to sleep at night.

And this isn't just surface-level integration. The two companies are working deep in the code, optimizing performance on AMD hardware for vLLM, an AI inference engine built for speed and scalability. That means AI workloads can now run more efficiently on AMD chips, with fewer virtual machines, less energy usage, and lower overall cost. Even Microsoft Azure (MSFT) is getting in on it: Red Hat and AMD demonstrated large language model inference across multiple GPUs on a single Azure virtual machine, something that slashes infrastructure complexity and makes AMD a very real option for cloud-scale AI. (A sketch of what such multi-GPU inference looks like with vLLM appears at the end of this article.)

AMD Offers Full-Stack Experience

But this goes beyond GPUs. AMD's EPYC CPUs are also tightly woven into the story. Red Hat OpenShift Virtualization now runs on EPYC processors, helping businesses modernize legacy applications while laying the groundwork for AI. It's an end-to-end stack: virtual machines, containers, inference engines, and high-performance hardware.

AMD's AI ecosystem now spans cloud providers, open-source platforms, and enterprise infrastructure, and the market is starting to respond. In Q1 2025, AMD reported $7.4 billion in revenue, up 36% year-over-year. Data center revenue soared 57%, and the AI business posted a strong double-digit increase. The company also announced a $6 billion share buyback, signaling confidence in its future performance.

Yes, there are risks. Export controls on China are expected to shave $1.5 billion off this year's revenue. And some fund managers are backing away amid valuation concerns. But the big picture looks different: AMD is no longer just a chip supplier; it's becoming an AI platform company.

What Spark Sees in AMD

That's what TipRanks' AI Analyst is picking up on. The model isn't just reading earnings; it's analyzing product launches, strategic partnerships, and macro trends. And it sees AMD executing well in the right places. With an 'Outperform' rating and a 14%+ upside based on a $126 price target, AMD's AI ambitions are gaining credibility. Investors aren't just betting on better chips anymore; they're betting on an ecosystem. And that's a bet AMD looks increasingly ready to deliver on.

Is AMD a Buy or Sell?

The Street's analysts rate Advanced Micro Devices as a Moderate Buy, with an average AMD stock price target of $126.55, implying a 14.72% upside.
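As referenced above, here is a minimal sketch of the multi-GPU inference pattern the article describes, using the open-source vLLM engine. The model identifier and GPU count are illustrative assumptions, not details from the announcement.

```python
# Minimal sketch: tensor-parallel LLM inference with the open-source vLLM engine.
# The model ID and GPU count below are illustrative assumptions.
from vllm import LLM, SamplingParams

# tensor_parallel_size shards the model's weights across multiple GPUs on one
# machine, the same pattern as the multi-GPU, single-VM demo described above.
llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",  # hypothetical model choice
    tensor_parallel_size=4,                     # assumes four GPUs are visible
)

outputs = llm.generate(
    ["Explain tensor parallelism in one paragraph."],
    SamplingParams(temperature=0.2, max_tokens=200),
)
print(outputs[0].outputs[0].text)
```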


Mid East Info
6 days ago
- Business
- Mid East Info
Red Hat Unlocks Generative AI for Any Model and Any Accelerator Across the Hybrid Cloud with Red Hat AI Inference Server
Red Hat AI Inference Server, powered by vLLM and enhanced with Neural Magic technologies, delivers faster, higher-performing and more cost-efficient AI inference across the hybrid cloud.

BOSTON – RED HAT SUMMIT – MAY 2025 – Red Hat, the world's leading provider of open source solutions, announced Red Hat AI Inference Server, a significant step towards democratizing generative AI (gen AI) across the hybrid cloud. A new offering within Red Hat AI, the enterprise-grade inference server is born from the powerful vLLM community project and enhanced by Red Hat's integration of Neural Magic technologies, offering greater speed, accelerator efficiency and cost-effectiveness in support of Red Hat's vision of running any gen AI model on any AI accelerator in any cloud environment. Whether deployed standalone or as an integrated component of Red Hat Enterprise Linux AI (RHEL AI) and Red Hat OpenShift AI, the platform empowers organizations to deploy and scale gen AI in production with greater confidence.

Inference is the critical execution engine of AI, where pre-trained models translate data into real-world impact. It is the pivotal point of user interaction, demanding swift and accurate responses. As gen AI models explode in complexity and production deployments scale, inference can become a significant bottleneck, devouring hardware resources and threatening to cripple responsiveness and inflate operational costs. Robust inference servers are no longer a luxury but a necessity for unlocking the true potential of AI at scale.

Red Hat directly addresses these challenges with Red Hat AI Inference Server, an open inference solution engineered for high performance and equipped with leading model compression and optimization tools. It empowers organizations to tap into the transformative power of gen AI by delivering dramatically more responsive user experiences and broad freedom in their choice of AI accelerators, models and IT environments.

vLLM: Extending inference innovation

Red Hat AI Inference Server builds on the industry-leading vLLM project, started at the University of California, Berkeley in mid-2023. The community project delivers high-throughput gen AI inference, support for large input context, multi-GPU model acceleration, continuous batching and more. vLLM's broad support for publicly available models, coupled with its day-zero integration of leading frontier models (including DeepSeek, Gemma, Llama, Mistral and Phi) as well as open, enterprise-grade reasoning models like Llama Nemotron, positions it as a de facto standard for future AI inference innovation. Leading frontier model providers are increasingly embracing vLLM, solidifying its critical role in shaping gen AI's future.
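To make the vLLM workflow described above concrete, here is a minimal sketch of offline batch inference with the community vLLM library. The model identifier and sampling settings are illustrative assumptions, not details from the announcement.

```python
# Minimal offline batch inference with the community vLLM library.
# The model ID and sampling settings are illustrative assumptions.
from vllm import LLM, SamplingParams

prompts = [
    "Summarize what an inference server does in one sentence.",
    "List two benefits of continuous batching for LLM serving.",
]

# vLLM schedules these prompts with continuous batching under the hood,
# so a single engine instance keeps the accelerator busy across requests.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.3")  # hypothetical choice
params = SamplingParams(temperature=0.7, max_tokens=128)

for output in llm.generate(prompts, params):
    print(output.prompt)
    print(output.outputs[0].text)
```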
Introducing Red Hat AI Inference Server

Red Hat AI Inference Server packages the leading innovation of vLLM into an enterprise-grade offering. It is available as a standalone containerized product or as part of both RHEL AI and Red Hat OpenShift AI. Across any deployment environment, Red Hat AI Inference Server provides users with a hardened, supported distribution of vLLM, along with:

- Intelligent LLM compression tools for dramatically reducing the size of both foundational and fine-tuned AI models, minimizing compute consumption while preserving, and potentially enhancing, model accuracy.
- An optimized model repository, hosted in the Red Hat AI organization on Hugging Face, offering instant access to a validated and optimized collection of leading AI models ready for inference deployment and helping to accelerate efficiency by 2-4x without compromising model accuracy.
- Red Hat's enterprise support and decades of expertise in bringing community projects to production environments.
- Third-party support for even greater deployment flexibility, enabling Red Hat AI Inference Server to be deployed on non-Red Hat Linux and Kubernetes platforms pursuant to Red Hat's third-party support policy.

(A sketch of client code querying such a server appears just before the supporting quotes below.)

Red Hat's vision: Any model, any accelerator, any cloud

The future of AI must be defined by limitless opportunity, not constrained by infrastructure silos. Red Hat sees a horizon where organizations can deploy any model, on any accelerator, across any cloud, delivering an exceptional, more consistent user experience without exorbitant costs. To unlock the true potential of gen AI investments, enterprises require a universal inference platform: a standard for more seamless, high-performance AI innovation, both today and in the years to come. Just as Red Hat pioneered the open enterprise by transforming Linux into the bedrock of modern IT, the company is now poised to architect the future of AI inference. vLLM has the potential to become a linchpin for standardized gen AI inference, and Red Hat is committed to building a thriving ecosystem around not just the vLLM community but also llm-d for distributed inference at scale. The vision is clear: regardless of the AI model, the underlying accelerator or the deployment environment, Red Hat intends to make vLLM the definitive open standard for inference across the new hybrid cloud.

Red Hat Summit

Join the Red Hat Summit keynotes to hear the latest from Red Hat executives, customers and partners:

- Modernized infrastructure meets enterprise-ready AI – Tuesday, May 20, 8-10 a.m. EDT (YouTube)
- Hybrid cloud evolves to deliver enterprise innovation – Wednesday, May 21, 8-9:30 a.m. EDT (YouTube)
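As noted above, here is a minimal sketch of a client querying a vLLM-based inference server through its OpenAI-compatible HTTP API. The endpoint URL and model name are assumptions for illustration, not details from the announcement.

```python
# Querying a vLLM-based inference server via its OpenAI-compatible API.
# The base_url and model name are illustrative assumptions; a real
# deployment would use the address and model the server was started with.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local endpoint
    api_key="EMPTY",  # placeholder; vLLM requires no key unless configured
)

response = client.chat.completions.create(
    model="RedHatAI/granite-3.1-8b-instruct",  # hypothetical repository model
    messages=[{"role": "user", "content": "What does an inference server do?"}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```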
Supporting Quotes

Joe Fernandes, vice president and general manager, AI Business Unit, Red Hat:
'Inference is where the real promise of gen AI is delivered, where user interactions are met with fast, accurate responses delivered by a given model, but it must be delivered in an effective and cost-efficient way. Red Hat AI Inference Server is intended to meet the demand for high-performing, responsive inference at scale while keeping resource demands low, providing a common inference layer that supports any model, running on any accelerator in any environment.'

Ramine Roane, corporate vice president, AI Product Management, AMD:
'In collaboration with Red Hat, AMD delivers out-of-the-box solutions to drive efficient generative AI in the enterprise. Red Hat AI Inference Server enabled on AMD Instinct™ GPUs equips organizations with enterprise-grade, community-driven AI inference capabilities backed by fully validated hardware accelerators.'

Jeremy Foster, senior vice president and general manager, Cisco:
'AI workloads need speed, consistency, and flexibility, which is exactly what the Red Hat AI Inference Server is designed to deliver. This innovation offers Cisco and Red Hat opportunities to continue to collaborate on new ways to make AI deployments more accessible, efficient and scalable, helping organizations prepare for what's next.'

Bill Pearson, vice president, Data Center & AI Software Solutions and Ecosystem, Intel:
'Intel is excited to collaborate with Red Hat to enable Red Hat AI Inference Server on Intel® Gaudi® accelerators. This integration will provide our customers with an optimized solution to streamline and scale AI inference, delivering advanced performance and efficiency for a wide range of enterprise AI applications.'

John Fanelli, vice president, Enterprise Software, NVIDIA:
'High-performance inference enables models and AI agents not just to answer, but to reason and adapt in real time. With open, full-stack NVIDIA accelerated computing and Red Hat AI Inference Server, developers can run efficient reasoning at scale across hybrid clouds, and deploy with confidence using Red Hat AI Inference Server with the new NVIDIA Enterprise AI validated design.'

About Red Hat

Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.

Forward-Looking Statements

Except for the historical information and discussions contained herein, statements contained in this press release may constitute forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are based on the company's current assumptions regarding future business and financial performance. These statements involve a number of risks, uncertainties and other factors that could cause actual results to differ materially. Any forward-looking statement in this press release speaks only as of the date on which it is made. Except as required by law, the company assumes no obligation to update or revise any forward-looking statements.

