CV Sciences Inc (CVSI) Q1 2025 Earnings Call Highlights: Navigating Challenges with Strategic ...


Yahoo · 15-05-2025

Release Date: May 14, 2025
For the complete transcript of the earnings call, please refer to the full earnings call transcript.
CV Sciences Inc (CVSI) maintained a healthy gross margin of 46% in Q1 2025, demonstrating cost efficiency.
The company continues to hold the number one position in the natural product retail channel, with increased market share.
CV Sciences Inc (CVSI) is diversifying its product offerings by including more non-cannabinoid products, which could mitigate regulatory challenges.
The acquisition of Elevated Soft Gels is expected to improve cost efficiency and gross margin through in-sourced manufacturing.
The launch of new products, such as the Lunar Fox line and fizz effervescent tablets, shows successful product innovation and adaptation to consumer demands.
Revenue for Q1 2025 was $3.6 million, a decline from $3.9 million in Q4 2024, primarily due to lower unit sales.
The company faced temporary out-of-stock issues with key products due to manufacturing delays from contract manufacturers.
Increased state regulations have negatively impacted unit sales, highlighting ongoing regulatory challenges.
Business-to-consumer (B2C) revenue declined 9% sequentially, attributed to seasonality following the holiday quarter.
Despite improvements, CV Sciences Inc (CVSI) reported a net loss of $0.1 million for Q1 2025, indicating ongoing financial challenges.
Q: Can you provide an overview of CV Sciences' financial performance for Q1 2025? A: Joseph Dowling, CEO, reported that CV Sciences generated $3.6 million in revenue, slightly down from $3.9 million in Q4 2024. The company maintained a gross margin of 46% and is moving closer to profitability and positive cash flow. The focus remains on increasing revenue and cost efficiency through M&A strategies and product innovation.
Q: What are the key strategic goals for CV Sciences' M&A strategy? A: Joseph Dowling, CEO, stated that the primary goals are to increase scale with greater revenue and cost efficiency. The company is looking for transactions that leverage its assets, including brands, people, processes, and distribution, and that allow for further insourcing of manufacturing to improve cost efficiency and gross margin.
Q: How is CV Sciences adapting its product innovation efforts? A: Joseph Dowling, CEO, explained that the company is diversifying its product offerings to include more non-cannabinoid products under the Plus Health brand. This includes the launch of new products like the Lunar Fox line of plant-based foods and the Aura collection's fizz effervescent tablets. The aim is to meet consumer demand and mitigate regulatory challenges in the hemp industry.
Q: What challenges did CV Sciences face in Q1 2025, and how are they being addressed? A: The CFO noted that the company faced lower unit sales due to increased state regulations and temporary out-of-stock situations caused by manufacturing delays. These challenges are being addressed by insourcing manufacturing to reduce dependency on contract manufacturers and improve cost efficiency.
Q: What are the expectations for CV Sciences' financial performance in the second half of 2025? A: The CFO indicated that CV Sciences expects to see improved cost efficiency and gross margin from the integration of Elevated Soft Gels and other acquisitions. The company anticipates generating positive cash flow and realizing synergies from these acquisitions in the second half of 2025.
This article first appeared on GuruFocus.


Related Articles

This Startup Is Disrupting Traditional Tech Outsourcing With AI-Powered Teams
Yahoo · 3 days ago


SAN FRANCISCO, June 23, 2025 /PRNewswire/ -- As global IT spending heads toward an estimated $5.74 trillion in 2025, one aspect of the sector remains largely unchanged: the outsourcing model. Even with rapid advancements in AI technology, many IT service providers still rely on a decades-old playbook, one that emphasizes headcount, billable hours, and one-size-fits-all delivery.

A startup founded in 2018, EMB Global, is betting that the future looks very different. The company, which has raised over $25 million to date and operates across 23 countries, has built a delivery framework that blends proprietary artificial intelligence with highly specialized human expertise. Its goal: to replace static, resource-heavy outsourcing with leaner, faster, and more adaptive teams.

At the center of EMB's model is Aura, a proprietary AI agent powered by Small Language Models (SLMs) trained on industry-specific and regionally contextual datasets. According to the company, Aura now automates up to 80% of key functions, from requirement gathering to project management workflows, dramatically reducing manual overhead and time to delivery.

Human expertise still plays a critical role. EMB Global works with a distributed network of more than 1,500 boutique agencies, each selected for domain alignment, allowing the company to bring both speed and specificity to projects across sectors like fintech, healthcare, logistics, and retail.

"We're not in this to deliver code," says Rohan Barua, Co-founder of EMB Global and a Forbes 30 Under 30 Asia honoree. He is blunt about the state of the industry. "The legacy outsourcing model was built for a different era, one where inefficiency was profitable and nuance didn't matter. That doesn't cut it anymore."

As EMB expands into the U.S. market, its positioning is clear: fewer layers, faster execution, and solutions built around real business goals, not headcount.
"I've spoken to our customers and found out that, on average, we helped them reduce delivery times by 52% and cut costs by around 35%," says Nishant Behl, Founder and CEO of EMB Global. A member of the Forbes Technology Council, he adds, "But more importantly, we're giving them tech capabilities that were once the exclusive domain of tech giants."

Of the company's total funding, $5 million has been set aside specifically to scale its AI infrastructure, part of a broader effort to support growing demand for personalized, high-impact technology partnerships. The company has also earmarked an additional $25 million over the next two years to accelerate the development and deployment of its AI capabilities.

With global enterprise AI spend projected to reach $644 billion by 2025, a 76.4% increase over 2024, EMB Global is carving out a distinct space in the market, one where automation is intelligently paired with deep, domain-specific expertise. "Our objective," Behl adds, "is to make advanced technology not just available, but usable, cost-effective, context-aware, and aligned with tangible business outcomes."

For more information, visit Media Contact: Ata Suhail | View original content to download multimedia: SOURCE EMB Global

Deepgram Launches Voice Agent API: World's Only Enterprise-Ready, Real-Time, and Cost-Effective Conversational AI API
Business Wire · 16-06-2025


SAN FRANCISCO--(BUSINESS WIRE)-- Deepgram, the leading voice AI platform for enterprise use cases, today announced the general availability (GA) of its Voice Agent API, a single, unified voice-to-voice interface that gives developers full control to build context-aware voice agents that power natural, responsive conversations.

Combining speech-to-text, text-to-speech, and large language model (LLM) orchestration with contextualized conversational logic into a unified architecture, the Voice Agent API gives developers the choice of using Deepgram's fully integrated stack (leveraging industry-leading Nova-3 STT and Aura-2 TTS models) or bringing their own LLM and TTS models. It delivers the simplicity developers love and the controllability enterprises need to deploy real-time, intelligent voice agents at scale. Today, companies like Aircall, Jack in the Box, StreamIt, and OpenPhone are building voice agents with Deepgram to save costs, reduce wait times, and increase customer loyalty.

In today's market, teams building voice agents are often forced to choose between two extremes: rigid, low-code platforms that lack customization, or DIY toolchains that require stitching together STT, TTS, and LLMs with significant engineering effort. Deepgram's Voice Agent API eliminates this tradeoff by providing a unified API that simplifies development without sacrificing control. Developers can build faster with less complexity, while enterprises retain full control over orchestration, deployment, and model behavior, without compromising on performance or reliability.

'The future of customer engagement is voice-first,' said Scott Stephenson, CEO of Deepgram. 'But most voice systems today are rigid, fragmented, or too slow.
With our Voice Agent API, we're giving developers a powerful yet simple interface to build conversational agents that feel natural, respond instantly, and scale across use cases without compromise.'

'We believe the future of customer communication is intelligent, seamless, and deeply human, and that's the vision behind Aircall's AI Voice Agent,' said Scott Chancellor, Chief Executive Officer of Aircall. 'To bring it to life, we needed a partner who could match our ambition, and Deepgram delivered. Their advanced Voice Agent API enabled us to build fast without compromising accuracy or reliability. From managing mid-sentence interruptions to enabling natural, human-like conversations, their service performed with precision. Just as importantly, their collaborative approach helped us iterate quickly and push the boundaries of what voice intelligence can deliver in modern business communications.'

'We believe that integrating AI voice agents will be one of the most impactful initiatives for our business operations over the next five years, driving unparalleled efficiency and elevating the quality of our service,' said Doug Cook, CTO of Jack in the Box. 'Deepgram is a leader in the industry and will be a strategic partner as we embark on this transformative journey.'

Developer Simplicity and Faster Time to Market

For teams taking the DIY route, the challenge isn't just connecting models but also building and operating the entire runtime layer that makes real-time conversations work. Teams must manage live audio streaming, accurately detect when a user has finished speaking, coordinate model responses, handle mid-sentence interruptions, and maintain a natural conversational cadence. While some platforms offer partial orchestration features, most APIs do not provide a fully integrated runtime. As a result, developers are often left to manage streaming, session state, and coordination logic across fragmented services, which adds complexity and delays time to production.
Deepgram's Voice Agent API removes this burden by providing a single, unified API that integrates speech-to-text, LLM reasoning, and text-to-speech with built-in support for real-time conversational dynamics. Capabilities such as barge-in handling and turn-taking prediction are model-driven and managed natively within the platform. This eliminates the need to stitch together multiple vendors or maintain custom orchestration, enabling faster prototyping, reduced complexity, and more time focused on building high-quality experiences. In addition to the Voice Agent API, organizations seeking broader integrations can leverage Deepgram's extensive partner ecosystem, including Twilio and others, to access comprehensive conversational AI solutions and services powered by Deepgram APIs.

Maximum Control and Flexibility

While the Voice Agent API streamlines development, it also gives teams deep control over performance, behavior, and scalability in production. Built on Deepgram's Enterprise Runtime and full model ownership across the entire voice AI stack, the platform enables model-level optimization at every layer of the interaction loop. This allows for precise tuning of latency, barge-in handling, turn-taking, and domain-specific behavior in ways not possible with disconnected components.

Key capabilities include:

  • Flexible Deployment: Run the complete voice stack in cloud, VPC, or on-prem environments to meet enterprise requirements for security, compliance, and performance.
  • Runtime-Level Orchestration: Deepgram's runtime supports mid-session control, real-time prompt updates, model switching, and event-driven signaling to adapt agent behavior dynamically.
  • Bring-Your-Own Models: Teams can integrate their own LLMs or TTS systems while retaining Deepgram's orchestration, streaming pipeline, and real-time responsiveness.
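The barge-in handling and turn-taking behavior described above can be illustrated with a toy state machine. This is a minimal sketch of the coordination logic such a runtime manages natively, not Deepgram's actual implementation; the state and event names are assumptions chosen for illustration.

```python
# Toy sketch of voice-agent turn-taking with barge-in handling.
# States and event names are illustrative, not Deepgram's API.

class TurnTaking:
    def __init__(self):
        self.state = "listening"  # "listening" or "agent_speaking"

    def on_event(self, event):
        if self.state == "listening" and event == "user_done_speaking":
            # End-of-turn detected: the agent takes the floor.
            self.state = "agent_speaking"
            return "start_agent_reply"
        if self.state == "agent_speaking" and event == "user_started_speaking":
            # Barge-in: the user interrupts, so cut agent audio immediately.
            self.state = "listening"
            return "stop_agent_audio"
        if self.state == "agent_speaking" and event == "agent_done_speaking":
            self.state = "listening"
            return "resume_listening"
        return None  # event is irrelevant in the current state

tt = TurnTaking()
actions = [tt.on_event(e) for e in
           ["user_done_speaking",     # agent starts replying
            "user_started_speaking",  # barge-in mid-reply
            "user_done_speaking"]]    # agent replies again
```

In a real deployment this logic is driven by model predictions over streaming audio (end-of-turn detection, interruption classification) rather than discrete events, which is precisely the runtime complexity the press release says the platform absorbs.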
'Deepgram gives us the flexibility to bring our own models, voices, and customize behavior while controlling how we build and orchestrate our voice agents,' said Harshal Jethwa, Engineering Manager at OpenPhone. 'Their system seamlessly handles the complexity of real-time voice coordination, letting us focus on creating exactly the experience we want.'

This tightly coordinated design translates directly into measurable performance gains. In recent benchmark testing using the Voice Agent Quality Index (VAQI), Deepgram achieved the highest overall score among all evaluated providers (see Figure 1). VAQI is a composite benchmark that measures the core elements of voice agent quality: latency (how quickly the agent responds), interruption rate (how often it cuts users off), and response coverage (how often it misses valid input). Deepgram outperformed OpenAI by 6.4% and ElevenLabs by 29.3%, reflecting the advantage of its integrated architecture and model-driven turn-taking. The result is smooth, responsive conversations without missed inputs, premature responses, or unnatural delays.

Cost-Effectiveness at Scale

In addition to control and performance, the Voice Agent API is built for cost efficiency across large-scale deployments. When teams run entirely on Deepgram's vertically integrated stack, pricing is fully consolidated at a flat rate of $4.50 per hour (see Figure 2). This provides predictable, all-in-one billing that simplifies planning and scales with usage. Deepgram's vertically integrated runtime also delivers unmatched compute efficiency, optimizing every stage of the speech pipeline to minimize infrastructure costs while maintaining real-time responsiveness. For teams that bring their own LLM or TTS models, Deepgram offers built-in rate reductions, enabling even lower total cost of ownership for production-scale deployments.
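A composite metric like VAQI can be sketched as a weighted combination of the three components it is said to measure. The normalization, weights, and example numbers below are assumptions for illustration only; the announcement does not disclose the actual VAQI formula.

```python
# Illustrative composite quality score in the spirit of VAQI.
# Weights, scaling, and inputs are assumptions; the real formula is not public.

def composite_score(latency_ms, interruption_rate, miss_rate,
                    max_latency_ms=2000.0, weights=(1/3, 1/3, 1/3)):
    """Return a 0-100 score; higher is better. Each term is normalized to [0, 1]."""
    latency_term = max(0.0, 1.0 - latency_ms / max_latency_ms)
    interruption_term = 1.0 - interruption_rate  # fraction of turns cut off
    coverage_term = 1.0 - miss_rate              # fraction of valid inputs missed
    w_lat, w_int, w_cov = weights
    return 100.0 * (w_lat * latency_term
                    + w_int * interruption_term
                    + w_cov * coverage_term)

# A hypothetical agent: 600 ms response latency, 5% interruptions, 2% missed inputs.
score = composite_score(600, 0.05, 0.02)
```

The point of such a composite is that the three failure modes trade off against each other: responding faster tends to raise the interruption rate, while waiting longer to be sure the user has finished raises latency, so a single blended score is needed to compare providers fairly.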
'Deepgram's Voice Agent API stands out for its technical prowess, affordability, and flexibility, making it the smart bet for customer service voice AI,' said Bill French, Senior Solutions Engineer at StreamIt.

Start Building with the Voice Agent API

Experience how fast and flexible voice agents can be with Deepgram's unified voice-to-voice API. Explore the API in our interactive playground, review documentation, or integrate in minutes using our SDK. New users receive $200 in free credits, enough to process over 40 hours of real-time voice agent usage. Start building natural, responsive conversations with infrastructure built for real-time performance and enterprise scale.

Additional Resources:

About Deepgram

Deepgram is the leading voice AI platform for enterprise use cases, offering speech-to-text (STT), text-to-speech (TTS), and full speech-to-speech (STS) capabilities, all powered by our enterprise-grade runtime. 200,000+ developers build with Deepgram's voice-native foundational models, accessed through cloud APIs or as self-hosted / on-premises APIs, due to our unmatched accuracy, low latency, and pricing. Customers include technology ISVs building voice products or platforms, co-sell partners working with large enterprises, and enterprises solving internal use cases. Having processed over 50,000 years of audio and transcribed over 1 trillion words, there is no organization in the world that understands voice better than Deepgram. To learn more, visit read our developer docs, or follow @DeepgramAI on X and LinkedIn.
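The free-credit claim is consistent with the flat rate quoted earlier in the release: at $4.50 per hour, $200 in credits covers roughly 44 hours, which matches "over 40 hours." A quick sanity check:

```python
# Sanity check: how many hours does $200 in credits buy at the quoted $4.50/hour?
credits_usd = 200.0
rate_usd_per_hour = 4.50  # flat rate for the fully integrated stack
hours = credits_usd / rate_usd_per_hour  # roughly 44.4 hours
```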

Seismic Advances AI Vision With Spring 2025 Product Release, Bringing AI to Every Stage of Enablement, From Insight to Action
Business Wire · 12-06-2025


SAN DIEGO--(BUSINESS WIRE)-- Seismic, the global leader in AI-powered enablement, today announced its Spring 2025 Product Release, advancing its vision to bring AI into every stage of enablement. The latest updates transform how revenue teams operate with AI, allowing them to prepare faster, coach smarter, and create content more efficiently. With Seismic's Aura AI agents embedded throughout the Go-to-Market tech stack, enablement professionals, marketers, and sellers are empowered to confidently own every customer interaction.

According to Gartner®, 'sellers with skills in leveraging technology to automate and augment key tasks and responsibilities are 3.7 times more likely to meet quota than sellers with less AI competency. Yet, they represent only 7% of the sales force.' As buying journeys grow more complex and expectations rise, teams need intuitive AI-powered tools embedded in their workflows to be better prepared for every customer engagement. Seismic's latest innovations meet that need by infusing automation, real-time insights, and coaching directly into the tools sellers use every day.

'We've long relied on Seismic to bridge the gap between marketing and sales, but with these new AI-driven capabilities, we're truly stepping into the future of enablement,' said Kapil Arora, SVP, Head of Digital, AllianceBernstein. 'AI in Seismic is helping our sales teams find and understand our content better. By combining that with strategic automation, engagement insights, and up-to-date assets directly into our sales team's workflows, Seismic is helping us move faster, elevate every interaction, and deepen relationships with our investors while driving growth across the business.'
Key innovations from the Spring 2025 Release include:

  • Role-Play Agent: Seismic's new Role-Play Agent uses generative AI to simulate customer conversations informed by the content within Seismic, and provides instant feedback to the rep, along with rich post-practice insights, scoring, and recommendations for the enablement and sales leadership teams to address any knowledge and skill gaps. With Role-Play Agent, reps can practice and build confidence at scale, and managers can deliver more personalized, one-to-one coaching.
  • Aura AI ecosystem updates: Aura AI is now embedded across Microsoft Teams, Microsoft Copilot, Slack and Salesforce Agentforce, bringing trusted AI-powered answers and content into every seller's workflow.
  • Aura Chat: Aura Chat enables users to engage in multi-turn conversations with Seismic, such as asking questions, getting answers, researching topics, generating summaries, and taking action, all while respecting permissions and governance policies.
  • Seismic for Meetings templates: Enablement teams can now create reusable meeting templates tailored to common selling scenarios. Sellers can apply these templates in seconds, saving time and ensuring every meeting is backed by the right agenda, content, and notes.
  • Customer Stories Orchestrator: Customer Stories Orchestrator automates the creation of case studies and logo slides, transforming a once time-consuming task into a seamless workflow in a matter of seconds.
  • Self-Service Reports: Enablement teams can now access real-time reporting without relying on analysts or admins. These intuitive, self-service tools allow users to run one-off analyses, explore engagement data, and share insights quickly, driving smarter, faster decisions across the organization.

'New data proves that AI-powered salespeople are outperforming their peers who are not effectively using AI,' said Krish Mantripragada, Chief Product Officer, Seismic.
'We also know that when seller preparation falls short, pipeline slows, win rates drop, and customer trust erodes. This release accelerates our mission to equip organizations with scalable AI-powered capabilities to drive consistent performance, maximize seller productivity, and deliver business impact at every stage of the revenue cycle.'

To explore all the innovations included in the Spring 2025 Product Release, visit the Seismic Blog.

About Seismic

Seismic is the global leader in AI-powered enablement, empowering go-to-market leaders to drive strategic growth and deliver exceptional customer experiences at scale. The Seismic Enablement Cloud™ is the only unified AI-powered platform that prepares customer-facing teams with the skills, content, tools, and insights needed to maximize every buyer interaction and strengthen client relationships. Trusted by more than 2,000 organizations worldwide, Seismic helps businesses achieve measurable outcomes and accelerate revenue growth. Seismic is headquartered in San Diego with offices across North America, Europe, Asia and Australia. Learn more at
