Arista 2.0 Strategy Gaining Traction: Is it the Next Key Growth Driver?
Cisco Systems, Inc. CSCO is the largest player in the networking space. The company has a strong presence in the router and switch market. It has retained a leadership position in WLAN and Ethernet switching and is rapidly expanding its presence in the network security domain. Cisco provides Next-Generation Network Routers that transport data, voice and video information from one IP network to another. It is expanding its Agentic AI footprint with the introduction of Webex AI Agent, Cisco AI Assistant for Webex Contact Center, Workflow Automation in Cisco AI Assistant for Webex, AI Capabilities in Webex Control Hub and Webex Calling Customer Assist. Its strategy of integrating AI across security and collaboration platforms and developing agentic capabilities across the portfolio is a key catalyst for improving customer experience.

Juniper Networks Inc. JNPR is taking significant steps to enhance the adoption of its AI-Native Networking Platform through the introduction of its Blueprint for AI-Native Acceleration. This comprehensive framework is designed to simplify and accelerate the deployment and utilization of AI-driven networking solutions, benefiting enterprises across various sectors. By leveraging Juniper's AI-Native Networking Platform, organizations can expect up to an 85% reduction in operational expenses and a 90% decrease in network trouble tickets. The platform's industry-leading AIOps technology is key to delivering these results, providing reliable, measurable and secure connections across all devices and applications. The Blueprint for AI-Native Acceleration further enhances these capabilities, enabling customers to achieve up to nine times faster deployments.

ANET's Price Performance, Valuation and Estimates

Arista has surged 27.1% over the past year, compared with the industry's growth of 36.8%.
Image Source: Zacks Investment Research
From a valuation standpoint, Arista trades at a forward price-to-sales ratio of 13.63, above the industry average.
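For readers unfamiliar with the metric, forward price-to-sales is a simple ratio of market value to expected sales. A minimal sketch of the definition, using purely illustrative numbers (not Arista's actual market cap or revenue):

```python
def forward_price_to_sales(market_cap: float, forward_revenue: float) -> float:
    """Forward P/S: dollars of market value per dollar of expected
    next-twelve-month revenue. Higher means a richer valuation."""
    return market_cap / forward_revenue

# Hypothetical example: a $136.3B market cap against $10B in expected
# forward revenue implies a forward P/S of 13.63.
ratio = forward_price_to_sales(136.3e9, 10e9)
print(f"Forward P/S: {ratio:.2f}")
```

The same arithmetic applies to any forward multiple; only the denominator (sales, earnings, EBITDA) changes.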
Image Source: Zacks Investment Research
The Zacks Consensus Estimate for Arista's earnings for 2025 has increased over the past 60 days.
Image Source: Zacks Investment Research
Arista currently carries a Zacks Rank #2 (Buy). You can see the complete list of today's Zacks #1 Rank (Strong Buy) stocks here.
Want the latest recommendations from Zacks Investment Research? Today, you can download 7 Best Stocks for the Next 30 Days. Click to get this free report
Cisco Systems, Inc. (CSCO) : Free Stock Analysis Report
Juniper Networks, Inc. (JNPR) : Free Stock Analysis Report
Arista Networks, Inc. (ANET) : Free Stock Analysis Report
This article originally published on Zacks Investment Research (zacks.com).
Related Articles


Tom's Guide
19 minutes ago
ChatGPT-5 just got 4 new personalities — here's how to use them (and why you should)
With the launch of OpenAI's newest model, the company has introduced four distinct personality modes for ChatGPT-5. As the company's most advanced large language model to date, it delivers major upgrades in reasoning, memory and multimodal capabilities. It's better at sustaining complex conversations, understanding context across chats and producing more accurate and creative responses. Now, during text-based conversations, users can customize how the chatbot sounds and responds. It's similar to custom GPTs, but these modes were created by OpenAI to give users a new way to tailor the assistant's tone and demeanor with a single click. Users can turn the chatbot into a dry-witted Cynic, an efficient Robot, a warm Listener or an enthusiastic Nerd. These personality styles are available to users running GPT-5 in text mode and can be changed at any time during a chat. Each one is distinct yet helpful; here's how each mode behaves:

Cynic: Dry, sarcastic and unfiltered. Offers practical advice with a side of side-eye. Great for humor or cutting through fluff.

Robot: Emotionless, direct and to the point. No extra words, just clean, accurate responses.

Listener: Calm, warm and empathetic. Reflects your thoughts gently and adds emotional nuance to its replies.

Nerd: Curious, excitable and deeply knowledgeable. Explains things thoroughly and loves sharing fun facts.

Each personality mode brings a different tone to your conversations and can even shift how you brainstorm, write or learn with AI. Activating these modes is easy; the personality applies to your current chat, but you can switch or turn off the personality mode at any time. Here's how each personality mode answers the same prompt: 'Why is the sky blue?'

Cynic: Oh, you mean besides the fact that Mother Nature likes to mess with us? Fine.
Sunlight hits the atmosphere, the shorter blue wavelengths scatter more than the other colors, so the sky looks blue to us simple humans. You'll get your nice orange or red at sunset when the light's taking the scenic route. Happy now?

Robot: The sky appears blue due to Rayleigh scattering. When sunlight enters Earth's atmosphere, shorter wavelengths (blue) scatter more efficiently than longer wavelengths (red). This selective scattering causes the human eye to perceive the sky as blue during daylight hours.

Listener: That's such a great question! It's actually because sunlight is made up of many colors, and when it passes through our atmosphere, the tiny air particles scatter the shorter blue wavelengths more than the others. So when we look up, our eyes catch mostly that soft, blue light. It's kind of peaceful when you think about it, right?

Nerd: Oh, you've unlocked one of my favorite topics! The blue sky is thanks to Rayleigh scattering, which is inversely proportional to the fourth power of wavelength. That means shorter wavelengths like blue and violet scatter much more than red or yellow. Our eyes are more sensitive to blue than violet, and some violet is absorbed by the upper atmosphere, so we see a rich blue. Fun fact: the same physics makes Mars' sunsets blue and ours red!

Switching between ChatGPT's personalities is fun, but it can also seriously boost your productivity and creativity. Each mode offers a different lens for thinking and communicating: Use Cynic when you want to cut through the noise. It's good for brainstorming hot takes or injecting humor into dry topics. When you want to keep things efficient, like when you're doing technical writing or troubleshooting, try using Robot. It's also a useful companion when coding. Listener adds empathy, which can be helpful when you're doing some personal writing or doing mental check-ins. It could also be useful for writing to customers if you run a business.
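Incidentally, the Nerd's fourth-power claim is easy to check numerically. A minimal sketch, using illustrative wavelengths of roughly 450 nm for blue and 650 nm for red:

```python
# Rayleigh scattering intensity is proportional to 1 / wavelength**4,
# so shorter (blue) wavelengths scatter far more than longer (red) ones.

def rayleigh_ratio(lambda_short_nm: float, lambda_long_nm: float) -> float:
    """Relative scattering intensity of the shorter vs. the longer wavelength."""
    return (lambda_long_nm / lambda_short_nm) ** 4

# Illustrative values: ~450 nm blue light vs. ~650 nm red light.
ratio = rayleigh_ratio(450.0, 650.0)
print(f"Blue light scatters about {ratio:.1f}x more than red light")
# → Blue light scatters about 4.4x more than red light
```

That roughly 4x difference is why the scattered daytime sky looks blue, while direct sunset light, stripped of its blue component along a long atmospheric path, looks red.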
Nerd is a useful personality when you want to make learning fun. The Nerd makes complex topics much more engaging, which also makes this mode a good fit for kids. Whether you're writing an email, stuck on a project or just want to hear something explained with personality, these modes can shift the vibe and help you unlock new creative angles, all without switching tools. These new personality styles give ChatGPT-5 a more human-like edge and give you more control. As in the example above, you'll see that they all respond differently. This is an opportunity to choose how your AI sounds, thinks and helps, instead of the one-size-fits-all assistant that we got with GPT-4. Try them all. You might be surprised which one becomes your favorite.


Forbes
20 minutes ago
ChatGPT As Your Bedside Companion: Can It Deliver Compassion, Commitment, And Care?
During the GPT-5 launch this week, Sam Altman, CEO of OpenAI, invited a cancer patient and her husband to the stage. She shared how, after receiving her biopsy report, she turned to ChatGPT for help. The AI instantly decoded the dense medical terminology, interpreted the findings, and outlined possible next steps. That moment of clarity gave her a renewed sense of control over her care. Altman noted that health is one of the top reasons consumers use ChatGPT, saying it 'empowers you to be more in control of your healthcare journey.' Around the world, patients are turning to AI chatbots like ChatGPT and Claude to better understand their diagnoses and take a more active role in managing their health. In hospitals, both patients and clinicians sometimes use these AI tools informally to verify information. At medical conferences, some healthcare professionals admit to carrying a 'second phone' dedicated solely to AI queries. Without accessing any private patient data, they use it to validate their assessments, much like patients seeking a digital 'second opinion' alongside their physician's advice. Even during leisure activities like hiking or camping, parents often rely on AI chatbots like ChatGPT or Claude for quick guidance on everyday concerns such as treating insect bites or skin reactions in their children. This raises an important question:

Can AI Companions Like ChatGPT, Claude, and Others Offer the Same Promise, Comfort, Commitment, and Care as Some Humans?

As AI tools become more integrated into patient management, their potential to provide emotional support alongside clinical care is rapidly evolving. These chatbots can be especially helpful in alleviating anxiety caused by uncertainty, whether it's about a diagnosis, prognosis, or simply reassurance regarding potential next steps in medical or personal decisions.
Given the ongoing stress that disease management places on patients, advanced AI companions like ChatGPT and Claude can play an important role by providing timely, 24/7 reassurance, clear guidance, and emotional support. Notably, some studies suggest that AI responses can be perceived as even more compassionate and reassuring than those from humans. Loneliness is another pervasive issue in healthcare. Emerging research suggests that social chatbots can reduce loneliness and social anxiety, underscoring their potential as complementary tools in mental health care. These advanced AI models help bridge gaps in information access, emotional reassurance, and patient engagement, offering clear answers, confidence, comfort, and a digital second opinion, particularly valuable when human resources are limited. Mustafa Suleyman, CEO of Microsoft AI, has articulated a vision for AI companions that evolve over time and transform our lives by providing calm and comfort. He describes an AI 'companion that sees what you see online and hears what you hear, personalized to you. Imagine the overload you carry quietly, subtly diminishing. Imagine clarity. Imagine calm.' While there are many reasons AI is increasingly used in healthcare, a key question remains:

Why Are Healthcare Stakeholders Increasingly Turning to AI?

Healthcare providers are increasingly adopting AI companions because they fill critical gaps in care delivery. Their constant availability and scalability enhance patient experience and outcomes by offering emotional support, cognitive clarity, and trusted advice whenever patients need it most. While AI companions are not new, today's technology delivers measurable benefits in patient care. For example, Woebot, an AI mental health chatbot, demonstrated reductions in anxiety and depression symptoms within just two weeks.
OpenAI's current investment in HealthBench to promote health and well-being further demonstrates its promise, commitment, and potential to help even more patients. These advances illustrate how AI tools can effectively complement traditional healthcare by improving patient well-being through consistent reassurance and engagement. So, what's holding back wider reliance on chatbots?

The Hindrance: Why We Can't Fully Rely on AI Chatbot Companions

Despite rapid advancements, AI companions are far from flawless, especially in healthcare, where the margin for error is razor thin. Large language models (LLMs) like ChatGPT and Claude are trained on vast datasets that may harbor hidden biases, potentially misleading vulnerable patient populations. Even with impressive capabilities, ChatGPT can still hallucinate or provide factually incorrect information, posing real risks if patients substitute AI guidance for professional medical advice. While future versions may improve reliability, current models are not suited for unsupervised clinical use. Sometimes, AI-generated recommendations may conflict with physicians' advice, which can undermine trust and disrupt the patient-clinician relationship. There is also a risk of patients forming deep emotional bonds with AI, leading to over-dependence and blurred boundaries between digital and human interaction. As LinkedIn cofounder Reid Hoffman put it in Business Insider, 'I don't think any AI tool today is capable of being a friend,' adding, 'I think if it's pretending to be a friend, you're actually harming the person in so doing.' For now, AI companions should be regarded as valuable complements to human expertise, empathy, and accountability, not replacements.

A Balanced, Safe Framework: Maximizing Benefit, Minimizing Risk

To harness AI companions' full potential while minimizing risks, a robust framework is essential.
This begins with data transparency and governance: models must be trained on inclusive, high-quality datasets designed to reduce demographic bias and errors. Clinical alignment is critical; AI systems should be trained on evidence-based protocols and guidelines, with a clear distinction between educational information and personalized medical advice. Reliability and ethical safeguards are vital, including break prompts during extended interactions, guidance directing users to seek human support when needed, and transparent communication about AI's limitations. Above all, AI should complement human clinicians, acting as a navigator or translator to encourage and facilitate open dialogue between patients and their healthcare providers.

Executive Call to Action

In today's digital age, patients inevitably turn to the internet, and increasingly to AI chatbots like ChatGPT and Claude, for answers and reassurance. Attempts to restrict this behavior are neither practical nor beneficial. Executive physician advisors and healthcare leaders are therefore responsible for embracing this reality by providing structured, transparent, and integrated pathways that guide patients in using these powerful tools wisely. It is critical that healthcare systems are equipped with frameworks ensuring AI complements clinical care rather than confuses or replaces it. Where AI capabilities fall short, these gaps must be bridged with human expertise and ethical oversight. Innovation should never come at the expense of patient safety, trust, or quality of care. By proactively shaping AI deployment in healthcare, stakeholders can empower patients with reliable information, foster meaningful clinician-patient dialogue, and ultimately improve outcomes in this new era of AI-driven medicine.
Yahoo
2 hours ago
Realtors Are Using AI Images of Homes They're Selling. Comparing Them to the Real Thing Will Make You Mad as Hell
As if suspiciously AI-generated descriptions of real estate listings weren't enough, agents are starting to use AI-generated images of houses that don't exist to sell expensive properties. The Register spotted a dubious listing of a fully renovated 3-bedroom in the UK with a photo — which has since been deleted, but can still be viewed in an archived version — that shows the tell-tale signs of generative AI, including awnings that don't line up, hedges that inexplicably turn into walls, and a flowerbed obstructing a neighboring door. The real — or at least less-edited — version of the image tells a dramatically different story, with a tiled awning, no flowerbed obstructing the path to the neighboring door, and an adjoining hair salon that was seemingly deleted from the tampered-with version entirely. In some ways, the haphazard use of AI shouldn't come as much of a surprise. The real estate world has long made use of CGI tools to virtually "stage" homes, a cost-cutting measure designed to save agencies from having to physically move furniture around. But given the damning evidence — alongside society's current obsession with shoehorning generative AI into virtually every aspect of modern life — the trend has seemingly found its next logical application. And we're not just talking about photoshopping a couch into the photo of an empty living room. As The Register points out, the original listing of the house pictured above confused entire structural elements, down to the placement of the toilet in the bathroom. Reached by The Register, a spokesperson for the agency responsible for the AI carnage responded with a "sharp intake of breath and a declaration that 'we're a little busy at the moment.'" It didn't reply to followup questions. The trend paints a troubled picture of an industry that already has a tarnished reputation for twisting the truth. 
It's a worrying reality, given that entire livelihoods are being put on the line in the pursuit of buying one's dream home. Worse, experts say this kind of use of AI could even run afoul of the law. "For me, the use of AI for imagery in property listings is a major red flag aligned to what was previously covered by the Property Misdescriptions Act," University of Reading associate professor of building surveying Adrian Tagg told The Register. Tagg explained that building surveyors are "bound by regulations to deliver evidence-based opinion and hold a duty of care to deliver correct, appropriate advice." "Estate agency has never really had this professional duty, and ultimately it's all about sales and doing 'the deal,'" he explained. "Therefore I'm not surprised that there appears an openness to accept AI when ultimately it's an industry with little obligation to be accountable for their actions." Despite very real concerns about misleading home buyers, the real estate market has welcomed the age of generative AI with open arms. Consultancy firm McKinsey & Company predicted in 2023 that the tech could "generate $110 billion to $180 billion or more in value for the real estate industry." The consultancy said that generative AI could help in "customer engagement" and generating "new creative content," among other use cases. An entire cottage industry has cropped up, offering cheap, "AI-powered virtual staging" and real estate "photo editing" services. But prospective buyers are understandably furious. Earlier this year, one user in the FirstTimeHomeBuyer subreddit came across a listing on Zillow, which included a (since-deleted) picture of a small bungalow in Newport, Kentucky. The heavily edited image didn't even get the house number right, showing up as a mangled "418" instead of 1026. "Using AI in listing photos should be illegal," the user wrote. More on generative AI: Microsoft Releases List of Jobs Most and Least Likely to Be Replaced by AI