Wells Fargo Upgrades Klaviyo (KVYO) Stock, Lifts PT

Yahoo · 4 hours ago

Klaviyo, Inc. (NYSE:KVYO) is one of the 10 software stocks analysts are upgrading. On June 13, Wells Fargo upgraded the company's shares to 'Overweight' from 'Equal Weight' and raised its price target to $40 from the prior $32. The firm has meaningfully shifted its view on the stock, believing that several of its earlier concerns have been addressed.
Klaviyo, Inc. (NYSE:KVYO) is in the early stages of unlocking international markets, with growth stable at 45% over the past four quarters, and it has a strategic emphasis on capturing more e-commerce wallet share alongside its entry into the services segment. These factors, together with improved competitive positioning, lead the firm to believe that Klaviyo has a favorable path toward durable and outsized growth.
Klaviyo, Inc. (NYSE:KVYO) ended Q1 2025 with 3,030 customers generating more than $50,000 of annual recurring revenue (ARR), compared with 2,157 at the end of Q1 2024, an increase of 40% year over year. The company added that more than 169,000 customers were using Klaviyo to fuel their revenue growth as of March 31, 2025, up from more than 146,000 as of March 31, 2024.
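For reference, the 40% growth figure follows directly from the two reported customer counts; a quick check in Python, using only the numbers above:

```python
# Sanity check: YoY growth in customers generating over $50,000 of ARR,
# from the counts reported for Q1 2024 and Q1 2025.
q1_2024 = 2_157
q1_2025 = 3_030
growth = (q1_2025 - q1_2024) / q1_2024
print(f"{growth:.1%}")  # 40.5%, reported as 40% YoY
```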
Sands Capital, an investment management company, released its Q1 2025 investor letter. Here is what the fund said:

'Klaviyo, Inc. (NYSE:KVYO) is an innovative provider of business-to-consumer (B2C) marketing technology. The business reported a strong 2024 fourth quarter result, with key metrics exceeding both consensus and our expectations. While shares declined—due to high pre-earnings expectations, a slightly softer guide, and the broader tech selloff—we view the business as executing well. Management continues to improve communication and operational delivery, with strong momentum across upmarket adoption, international expansion, and product innovation. The standout announcement was the launch of a full CRM platform that integrates marketing, customer service, and analytics. This shift positions Klaviyo as a true platform business for B2C brands, in our view, and expands its AI opportunity.'
While we acknowledge the potential of KVYO to grow, our conviction lies in the belief that some AI stocks hold greater promise for delivering higher returns with limited downside risk. If you are looking for an AI stock that is more promising than KVYO and has 100x upside potential, check out our report about the cheapest AI stock.
READ NEXT: 13 Cheap AI Stocks to Buy According to Analysts and 11 Unstoppable Growth Stocks to Invest in Now
Disclosure: None.


Related Articles

Watchdog finds 'rampant abuse' of remote work among federal employees during Biden administration

Yahoo · 28 minutes ago

A U.S. government watchdog found 'rampant abuse' of work-from-home policies by federal workers, according to a new report released on Friday. The Inspector General of the Office of Personnel Management (OPM), which oversees the federal workforce, identified 'compliance failures and weak internal oversight' as the root cause of the problem. The report focused on the procedures that allowed employees to work remotely, not on whether they were performing their jobs effectively.

The report sampled badging data, timesheets, and remote-work agreements of dozens of federal employees in 2024, during President Joe Biden's administration. It followed a 2023 request from Republican Senator Joni Ernst of Iowa, who took issue with telework policies.

'Under the previous administration, OPM's telework and remote work policies were mismanaged and oversight was virtually nonexistent,' OPM Acting Director Chuck Ezell said in a statement. 'That era of telework abuse is over,' Ezell declared. 'At President Trump's direction, OPM has restored in-person operations to ensure federal employees are working for the taxpayers.'

On the first day of his second term, President Donald Trump signed an executive order directing federal agencies and departments to 'take all necessary steps to terminate remote work arrangements' and require employees to return to the office full time. Federal employees were required to return on March 3; as a result, the report's findings and recommendations, which called for written procedures detailing internal controls over remote work, are now considered closed, according to the executive summary.

OPM is the chief human resources agency and personnel policy manager for the federal government's 2.8 million employees. President Trump has claimed that many federal workers took on second jobs while still being paid by the federal government, or were not fulfilling their duties when working remotely. Working from home increased dramatically during the Covid-19 pandemic, during the first Trump administration.

Based on a small sample of timesheets, the report found that 58.1 percent of the sampled employees failed to meet the minimum requirements for in-office work in 2024. According to OPM's inspector general, roughly three in ten (29.7 percent) telework agreements had lapsed, 21 percent of those sampled had discrepancies in their paperwork, and 15 percent had no approved agreements on file. The report did not investigate why, but suggested possible reasons including 'weak or missing management controls,' 'negligence or carelessness,' and 'intentional fraud or abuse.'

Under the order signed by President Trump mandating a return to in-office work, limited exemptions are allowed as determined by department heads, and new internal controls and compliance reviews have been established for employees who continue to telework.

When workers were summoned back to their offices five days per week in March, many were met with less-than-desirable conditions, from cramped workspaces to dirty bathrooms. Alongside the return to the office, the Trump administration also sought to cut costs by reducing space and staff. Multiple federal employees across various agencies and departments told news outlets at the time that they found themselves working elbow-to-elbow as staff consolidated into smaller workspaces, and understaffed cleaning crews reportedly struggled to keep up with the demand for tidy spaces, leaving dirty bathrooms with no paper towels.
Some staff were asked to bring their own toilet paper or help out by taking their trash home, a federal employee told USA Today.

Mark Cuban Corrects Elizabeth Warren As She Slams JNJ, PFE For Higher Prices And Zero Taxes On Big Pharma: 'PBMs Corrupt Healthcare'

Yahoo · 30 minutes ago

A heated debate over the U.S. healthcare system erupted on X, pitting billionaire entrepreneur Mark Cuban against Sen. Elizabeth Warren (D-Mass.) over the root causes of soaring drug prices.

What Happened: The clash, which unfolded on Friday, has reignited discussion of pharmacy benefit managers (PBMs) and Big Pharma's tax practices, with the Senate poised to address reform amid rising public scrutiny.

Cuban, the Dallas Mavericks owner and founder of Cost Plus Drugs, fired back with a pointed correction. In a post quoting Warren, he argued that PBMs, not Big Pharma, are the true culprits behind inflated drug prices. 'PBMs corrupt healthcare,' he wrote, explaining that these intermediaries control formularies and manipulate prices to maximize rebate revenue, with three major PBMs negotiating over 90% of rebates for commercial insurance plans.

Warren, in her post, had blamed pharmaceutical giants Johnson & Johnson (NYSE:JNJ) and Pfizer Inc. (NYSE:PFE) for charging Americans the 'highest drug prices in the world' while paying 'zero dollars in federal taxes.' Citing data from her Senate Finance Committee role, Warren highlighted how these companies, alongside AbbVie Inc. (NYSE:ABBV), Amgen Inc. (NASDAQ:AMGN), and Merck & Co Inc. (NYSE:MRK), have exploited tax loopholes from the 2017 Tax Cuts and Jobs Act to shield billions in profits. 'Republicans want to make it worse. I'm fighting back,' she declared, sharing a CNBC headline and urging action against what she calls a 'rigged tax system.'

Why It Matters: Cuban has long been a critic of the healthcare system, saying that it should be simple. According to him, it has become overly complicated by big insurance companies and PBMs. These groups, he says, act as middlemen, controlling not just the costs but also the accessibility of care. He blames PBMs for a lack of transparency, inflated specialty drug prices, rebate distortion, formulary restrictions, and 'sh–ing on' independent pharmacies.

Some significant U.S.-listed firms engaged in the PBM business include:

Stock | YTD Performance | One-Year Performance
CVS Health Corp. (NYSE:CVS) | 51.15% | 9.57%
Cigna Group (NYSE:CI) | 14.54% | -6.95%
UnitedHealth Group Inc. (NYSE:UNH) | -39.11% | -36.60%

Meanwhile, here is how some pharmaceutical sector ETFs have performed:

Pharma ETF | YTD Performance | One-Year Performance
VanEck Pharmaceutical ETF (NASDAQ:PPH) | 1.69% | -4.48%
iShares US Pharmaceuticals ETF (NYSE:IHE) | -0.49% | -1.85%
Invesco Pharmaceuticals ETF (NYSE:PJP) | -3.23% | -1.64%
SPDR S&P Pharmaceuticals ETF (NYSE:XPH) | -4.06% | 2.61%
KraneShares MSCI All China Health Care Index ETF (NYSE:KURE) | 20.55% | 22.94%
First Trust Nasdaq Pharmaceuticals ETF (NASDAQ:FTXH) | -5.44% | -6.27%
Direxion Daily Pharmaceutical & Medical (NYSE:PILL) | -23.49% | -14.63%

The SPDR S&P 500 ETF Trust (NYSE:SPY) and Invesco QQQ Trust ETF (NASDAQ:QQQ), which track the S&P 500 and Nasdaq 100 indexes, respectively, were mixed in premarket trading on Friday. The SPY was down 0.30% at $595.67, while the QQQ was up 0.015% at $529.07, according to Benzinga Pro data.

Why is AI hallucinating more frequently, and how can we stop it?

Yahoo · 31 minutes ago

The more advanced artificial intelligence (AI) gets, the more it "hallucinates," producing incorrect and inaccurate information.

Research conducted by OpenAI found that its latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time, respectively, when tested on OpenAI's PersonQA benchmark. That is more than double the rate of the older o1 model. While o3 delivers more accurate information than its predecessor, the improvement appears to come at the cost of more frequent hallucinations.

This raises concerns over the accuracy and reliability of large language models (LLMs) such as AI chatbots, said Eleanor Watson, an Institute of Electrical and Electronics Engineers (IEEE) member and AI ethics engineer at Singularity University. "When a system outputs fabricated information — such as invented facts, citations or events — with the same fluency and coherence it uses for accurate content, it risks misleading users in subtle and consequential ways," Watson told Live Science.

Related: Cutting-edge AI models from OpenAI and DeepSeek undergo 'complete collapse' when problems get too difficult, study reveals

The issue of hallucination highlights the need to carefully assess and supervise the information AI systems produce when using LLMs and reasoning models, experts say.

The crux of a reasoning model is that it can handle complex tasks by breaking them down into individual components and coming up with solutions to tackle each one. Rather than simply producing answers based on statistical probability, reasoning models devise strategies to solve a problem, much like how humans think. In order to develop creative, and potentially novel, solutions to problems, AI needs to hallucinate; otherwise it is limited to the rigid data its LLM has ingested.

"It's important to note that hallucination is a feature, not a bug, of AI," Sohrob Kazerounian, an AI researcher at Vectra AI, told Live Science. "To paraphrase a colleague of mine, 'Everything an LLM outputs is a hallucination. It's just that some of those hallucinations are true.' If an AI only generated verbatim outputs that it had seen during training, all of AI would reduce to a massive search problem."

"You would only be able to generate computer code that had been written before, find proteins and molecules whose properties had already been studied and described, and answer homework questions that had already been asked before. You would not, however, be able to ask the LLM to write the lyrics for a concept album focused on the AI singularity, blending the lyrical stylings of Snoop Dogg and Bob Dylan."

In effect, LLMs and the AI systems they power need to hallucinate in order to create, rather than simply serve up existing information. Conceptually, this is similar to the way humans dream or imagine scenarios when conjuring new ideas.

However, AI hallucinations present a problem when it comes to delivering accurate and correct information, especially if users take the information at face value without any checks or oversight. "This is especially problematic in domains where decisions depend on factual precision, like medicine, law or finance," Watson said. "While more advanced models may reduce the frequency of obvious factual mistakes, the issue persists in more subtle forms. Over time, confabulation erodes the perception of AI systems as trustworthy instruments and can produce material harms when unverified content is acted upon."

And this problem looks to be exacerbated as AI advances. "As model capabilities improve, errors often become less overt but more difficult to detect," Watson noted. "Fabricated content is increasingly embedded within plausible narratives and coherent reasoning chains. This introduces a particular risk: users may be unaware that errors are present and may treat outputs as definitive when they are not. The problem shifts from filtering out crude errors to identifying subtle distortions that may only reveal themselves under close scrutiny."

Kazerounian backed this viewpoint up. "Despite the general belief that the problem of AI hallucination can and will get better over time, it appears that the most recent generation of advanced reasoning models may have actually begun to hallucinate more than their simpler counterparts — and there are no agreed-upon explanations for why this is," he said.

The situation is further complicated because it can be very difficult to ascertain how LLMs come up with their answers; a parallel could be drawn with how we still don't fully understand how the human brain works. In a recent essay, Dario Amodei, the CEO of AI company Anthropic, highlighted the lack of understanding of how AIs come up with answers and information. "When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate," he wrote.

The problems caused by AI hallucinating inaccurate information are already very real, Kazerounian noted. "There is no universal, verifiable, way to get an LLM to correctly answer questions being asked about some corpus of data it has access to," he said. "The examples of non-existent hallucinated references, customer-facing chatbots making up company policy, and so on, are now all too common."

Both Kazerounian and Watson told Live Science that, ultimately, AI hallucinations may be difficult to eliminate. But there could be ways to mitigate the issue.

Watson suggested that "retrieval-augmented generation," which grounds a model's outputs in curated external knowledge sources, could help ensure that AI-produced information is anchored by verifiable data.
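To make that idea concrete, here is a minimal sketch of retrieval-augmented generation in Python. It is an illustration under stated assumptions, not any vendor's actual implementation: retrieve() is a toy stand-in for search over a curated knowledge base, and ask_model() is a hypothetical placeholder for an LLM API call.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# Assumptions: retrieve() is a toy ranking function, and ask_model()
# is a hypothetical placeholder for any LLM completion API.

def retrieve(question: str, knowledge_base: list[str], top_k: int = 3) -> list[str]:
    """Toy retriever: rank documents by word overlap with the question."""
    question_words = set(question.lower().split())
    ranked = sorted(
        knowledge_base,
        key=lambda doc: len(question_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM API call."""
    raise NotImplementedError("wire up an actual LLM provider here")

def answer_with_rag(question: str, knowledge_base: list[str]) -> str:
    sources = retrieve(question, knowledge_base)
    # Instructing the model to answer only from the retrieved sources
    # anchors its output in verifiable data rather than free recall.
    prompt = (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, say so.\n\n"
        + "\n".join(f"Source {i + 1}: {doc}" for i, doc in enumerate(sources))
        + f"\n\nQuestion: {question}"
    )
    return ask_model(prompt)
```

The grounding comes from the prompt contract: the model is told to answer only from the retrieved text, so its claims can be traced back to the knowledge base.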
"Another approach involves introducing structure into the model's reasoning. By prompting it to check its own outputs, compare different perspectives, or follow logical steps, scaffolded reasoning frameworks reduce the risk of unconstrained speculation and improve consistency," Watson said, noting that this could be aided by training that shapes a model to prioritize accuracy, and by reinforcement training from human or AI evaluators to encourage more disciplined, grounded responses.

"Finally, systems can be designed to recognise their own uncertainty. Rather than defaulting to confident answers, models can be taught to flag when they're unsure or to defer to human judgement when appropriate," Watson added. "While these strategies don't eliminate the risk of confabulation entirely, they offer a practical path forward to make AI outputs more reliable."

Given that AI hallucination may be nearly impossible to eliminate, especially in advanced models, Kazerounian concluded that the information LLMs produce will ultimately need to be treated with the "same skepticism we reserve for human counterparts."

RELATED STORIES
- AI benchmarking platform is helping top companies rig their model performances, study claims
- AI can handle tasks twice as complex every few months. What does this exponential growth mean for how we use it?
- What is the Turing test? How the rise of generative AI may have broken the famous imitation game

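Watson's final suggestion, flagging uncertainty rather than defaulting to a confident answer, can be sketched in the same hypothetical style. The confidence score here is an assumption (in practice it might be derived from token probabilities or a separate verifier), the 0.75 threshold is arbitrary, and ask_model_scored() is a placeholder, not a real API.

```python
# Minimal sketch of uncertainty flagging: answer only when confidence is
# high enough, otherwise defer to a human. ask_model_scored() is a
# hypothetical placeholder returning an answer plus a 0-1 confidence score.

def ask_model_scored(prompt: str) -> tuple[str, float]:
    """Hypothetical placeholder for an LLM call that also reports confidence."""
    raise NotImplementedError("wire up an actual LLM provider here")

def answer_or_defer(question: str, threshold: float = 0.75) -> str:
    answer, confidence = ask_model_scored(question)
    if confidence >= threshold:
        return answer
    # Below the threshold, flag the uncertainty instead of guessing.
    return (
        f"I'm not confident enough to answer this (confidence {confidence:.2f}). "
        "Please defer to a human expert."
    )
```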