SESTEK Boosts Its Virtual Agents with Agentic AI

With customer expectations rising and digital transformation accelerating across the world, SESTEK's Virtual Agent solution, now powered by Agentic AI, helps enterprises automate complex customer interactions while maintaining compliance and security.
New York, New York--(Newsfile Corp. - April 10, 2025) - SESTEK, one of the leading providers of conversational automation technologies, has announced new Agentic AI capabilities in its virtual agent solution, Knovvu. The platform enables enterprises to deliver personalized, secure, and multilingual customer support while managing costs and complexity.
'Enterprises are being asked to do more with less - fewer agents, tighter budgets, and rising customer expectations. We address this challenge directly by combining reasoning, memory, and task execution into one intelligent virtual agent, ready for scale,' said Prof. Levent Arslan, SESTEK Founder and CEO.
'Virtual assistants powered by Agentic AI features can now offer more human-like, more autonomous, goal-oriented, and hyper-personalized experiences for customers. Agentic AI technology uses large language models (LLMs) for smarter and more accurate responses, enabling more realistic, context-sensitive communication. We have strengthened our Knovvu Virtual Agent solution with this approach, and it is being used by leading companies across various sectors, from finance and insurance to telecommunications and e-commerce. Agentic AI is ushering in a new era in customer experience.'
'Thanks to this new technology, virtual agents' responses are now more accurate, precise and sensitive to past interactions, significantly elevating customer satisfaction,' he added.
Knovvu Virtual Agent by SESTEK allows companies to assign particular tasks to different assistants and define a special role for each. They can also set industry-specific regulatory rules, restrictions, and automated responses, further securing interactions, especially in regulation-heavy industries.
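The release does not describe Knovvu's actual configuration interface, so the following is an illustrative sketch only: a hypothetical in-house routing layer in which each assistant has a role, an allowed task list, regulatory restrictions, and automated responses. All names here (AssistantRole, ROLES, route) are invented for the example and are not SESTEK's API.

```python
# Illustrative sketch of role-scoped assistants with regulatory rules and
# automated responses. Hypothetical structure, not SESTEK's actual interface.
from dataclasses import dataclass, field


@dataclass
class AssistantRole:
    name: str
    tasks: list[str]                                                # tasks this assistant may handle
    restricted_topics: list[str] = field(default_factory=list)      # regulatory restrictions
    canned_responses: dict[str, str] = field(default_factory=dict)  # automated compliance answers


ROLES = [
    AssistantRole(
        name="claims_assistant",
        tasks=["file_claim", "claim_status"],
        restricted_topics=["investment_advice"],  # out of scope for an insurance claims role
        canned_responses={"investment_advice": "Please contact a licensed advisor."},
    ),
    AssistantRole(name="billing_assistant", tasks=["refund", "invoice_copy"]),
]


def route(task: str, topic: str) -> str:
    """Pick the assistant assigned to a task and apply its regulatory rules."""
    for role in ROLES:
        if task in role.tasks:
            if topic in role.restricted_topics:
                return role.canned_responses.get(topic, "This topic is restricted.")
            return f"{role.name} handles task '{task}'"
    return "No assistant is assigned to this task."


if __name__ == "__main__":
    print(route("file_claim", "auto_damage"))        # routed to claims_assistant
    print(route("file_claim", "investment_advice"))  # automated compliance response
```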
Levent Arslan stated, 'Advancements in LLMs and Agentic AI aim to improve customer service and call centers. The advancements aim to enhance customer convenience while ensuring security, privacy, and compliance.'

Related Articles

What AI Agents Are Getting Right – And Wrong

Forbes · 2 hours ago

Agentic AI promises to transform IT operations, but many platforms still fall short. Here's what's working, what's hype and how CIOs can separate fad from reality.

For many CIOs and technology executives, AI's promise was straightforward: smarter, faster and more efficient IT operations. The technology was envisioned as a game-changer, capable of reducing operational costs, automating mundane tasks, enhancing system reliability and freeing up human resources for much more important work. But ask them today, and you might hear frustration rather than enthusiasm. That's because the reality on the ground is starkly different from the optimistic projections that have headlined the news so far, primarily due to the complexities involved in effectively integrating AI into IT operations. It's a challenge that was front and center at Agentic AI Demo Day, where executives gathered to explore how autonomous agents can help streamline operations — but only if the underlying complexity is addressed first.

Despite the global operational intelligence market being valued at $3.2 billion in 2024, according to IMARC Group, and projected to reach $6.8 billion by 2033, growing at a CAGR of 8.8%, enterprises are still grappling with real-world barriers to effectively implementing AI in their IT operations. At the heart of it all is one major hurdle — untangling the operational complexity that prevents AI agents from delivering on their promise. The big question, though, is: How can they move past this complexity and harness the true power of AI?

According to Andy Thurai, industry analyst at Field CTO, a major problem for enterprise IT today is that many organizations still run their IT operations through 'manual incident management processes,' a reality he described as 'shocking.' A 2024 report from the Uptime Institute found that nearly 60% of enterprises suffered major outages and downtimes tied to escalating IT complexity. One joint report by Splunk, a Cisco company, and global research institute Oxford Economics estimated the yearly global cost of such downtimes at $400 billion. That's a huge cost when you consider the sheer numbers, and it shows why enterprises are now scrambling to simplify long-standing inefficiencies in IT.

In that scramble, many technical decision makers have bought into the AI hype and deployed AI tools that didn't fully solve their operational problems. While traditional machine learning and GenAI tools have addressed specific operational tasks — like forecasting or summarization — they still fall short when it comes to cross-domain workflow automation and real-time system orchestration. 'AI tools have tackled the easy parts,' Thurai said during Fabrix's Agentic AI Demo Day. 'But they haven't solved the fundamental workflow problems at the heart of IT operations.' Instead, many organizations have adopted fragmented point solutions that generate too much noise and too few insights. 'AI solutions promised to streamline operations, but instead, companies ended up with fragmented tools producing too much data and too few actionable insights,' Thurai explained during a recent webinar. He noted that one of the biggest pain points today is 'alert fatigue,' where IT teams are overwhelmed by excessive system alerts, diminishing their effectiveness and responsiveness.
Thurai's sentiment is rooted in facts: a report by McKinsey notes that while 92% of companies plan to grow their AI investments over the next three years, just 1% of surveyed C-suite leaders describe their organizations as 'AI mature' — meaning AI is fully embedded into their operations and driving positive business outcomes. Many organizations face data overload from numerous sources, increasing rather than reducing existing operational pressures.

As Thurai explained, this problem stems from the fact that modern enterprises rely heavily on intricate, microservices-based architectures. Systems at companies like Netflix, Uber and Amazon manage thousands of interdependent services simultaneously, dramatically increasing operational complexity. When incidents occur, traditional monitoring tools struggle to quickly pinpoint root causes, resulting in delayed resolutions that can cost millions in downtime and lost productivity.

To address these shortcomings, the industry is gradually shifting toward agentic AI — also called agentic AIOps when applied to IT environments — autonomous agents capable of independent action, reasoning and adaptive decision-making without constant human oversight. These agentic systems are particularly suited to IT operations precisely because they can operate independently, detecting and resolving incidents autonomously. While much is still being understood about how these agentic systems behave at scale, and experts continue to call for companies to prioritize safety in building or deploying AI agents, they could potentially mitigate human error, reduce response times and directly address IT departments' alert fatigue. As Thurai noted in the webinar, organizations can achieve unprecedented efficiency, resilience and proactive management across their IT environments by orchestrating intelligent agents that can analyze, predict and act autonomously.

Companies are beginning to explore how these autonomous systems can be deployed effectively. For example, Fabrix — which offers a modern intelligence platform for the agentic AI era — recently showcased practical ways businesses can deploy AI agents for operational intelligence during its Agentic AI Demo Day. The company's platform enables businesses to build customized AI agents tailored to specific operational scenarios, from anomaly detection to real-time event management. The anomaly detector agents demonstrated at the event can autonomously identify KPI deviations, automatically open trouble tickets and dynamically adjust system capacity. Event intelligence agents also showed capabilities in real-time alert correlation and executing closed-loop remediation. In practical terms, this means that AI agents — like the Fabrix solutions demonstrated — have strong potential to significantly reduce operational costs and improve overall system reliability for organizations. However, the adoption of advanced autonomous systems isn't without hurdles.

Fabrix isn't the only player in this emerging space. Cisco has also introduced AI-native observability tools that incorporate agent-like behaviors to automate root cause analysis and AI observability. Similarly, Dynatrace is layering AI agents into its Davis AI engine to enhance multi-domain remediation across cloud-native environments. These developments reflect a broader move toward intelligent automation — though each vendor is taking a different route. Still, these agentic systems remain in early phases.
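As a rough illustration of the anomaly-detection workflow described above (flag a KPI deviation, open a ticket, request extra capacity), here is a minimal Python sketch. The z-score threshold, the KPI feed, and the open_ticket/scale_up helpers are hypothetical stand-ins, not Fabrix's platform or API.

```python
# Sketch of an anomaly-detector agent step: detect a KPI deviation from its
# baseline, open a trouble ticket, and request additional capacity.
from statistics import mean, stdev


def open_ticket(summary: str) -> None:
    print(f"[ticket] {summary}")          # stand-in for an ITSM integration


def scale_up(service: str, extra_instances: int) -> None:
    print(f"[capacity] adding {extra_instances} instances to {service}")


def check_kpi(service: str, history: list[float], latest: float, z_threshold: float = 3.0) -> None:
    """Act when the latest KPI sample deviates strongly from its baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma and abs(latest - mu) / sigma > z_threshold:
        open_ticket(f"{service}: latency {latest:.0f} ms deviates from baseline {mu:.0f} ms")
        scale_up(service, extra_instances=2)


if __name__ == "__main__":
    baseline = [110, 120, 115, 118, 112, 121, 117]   # past latency samples (ms)
    check_kpi("checkout-api", baseline, latest=480)  # triggers ticket + scale-up
```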
Critics note that many so-called agentic platforms are still rule-based at their core, lacking the true autonomy and reasoning needed to adapt across diverse workflows. Even Fabrix's approach, while promising, is still evolving and may require customization for complex enterprise environments. As competition heats up, the key differentiator may not be the platform itself — but how well it balances adaptability, trust and enterprise-grade integration.

Thurai warned that without robust guardrails, autonomous AI could exhibit unpredictable, or 'stochastic,' behaviors. Companies must invest not only in agentic platforms but also in frameworks ensuring security, observability and ethical AI practices. 'Implementing guardrails and quality controls are essential,' Thurai said. 'Without proper oversight, you risk AI that doesn't just hallucinate — these systems can confidently produce inaccurate outcomes, leading to significant operational risks.' That message was echoed by multiple speakers at the Agentic AI Demo Day, including Cisco and IBM executives, who emphasized the need for enterprise-grade controls like embedded testing, persona-based access governance and auditable AI execution paths. These capabilities, they argued, are non-negotiables for agentic platforms that aim to operate autonomously at enterprise scale.

Another significant challenge enterprises face is the severe shortage of skilled IT professionals. Korn Ferry predicts a global shortage of up to 85 million tech workers by 2030. This skills gap could further worsen already challenging operational issues, forcing enterprises to rely increasingly on automation and AI-driven solutions. Autonomous agents could be helpful in this regard, providing a critical lifeline that fills talent gaps and performs routine and even complex tasks previously managed by overstretched human teams.

For now, the road to fully autonomous AI operations remains under construction. Enterprises considering this journey must prepare carefully, ensuring that their investments in agentic AI are matched with a thorough understanding of the pitfalls of rushing to deploy AI, as well as a disciplined approach to implementation. Despite these challenges, the potential rewards — reduced downtime, increased operational efficiency and substantial cost savings — make agentic AI an investment worth serious consideration.

But as Thurai noted, agentic AIOps — the application of autonomous, decision-making AI agents within AI-powered IT operations — is still in its very early stages, and only a few vendors offer it. In the next year, he added, 'we'll probably see too much vendor snake oil coming out of the market saying, "Oh, we're an AI agent platform," when they really aren't.' The big message, according to Thurai, is that as we enter a new agentic era for AI applications, choosing the right vendor could be the deciding factor between scalable automation and another failed AI deployment. 'The major difference between choosing the right vendor and wrong vendor, especially in IT ops, is not just about the platform,' he said, 'but the capabilities that can and should be expandable by agents.'
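To make the guardrail ideas above a little more concrete, here is a minimal sketch of persona-based access checks with an auditable execution path around each agent action. The policy contents and action names are assumptions for illustration only, not any vendor's implementation.

```python
# Sketch of a guardrail wrapper: every agent action is checked against a
# persona policy and recorded in an audit trail, whether allowed or blocked.
import json
import time

# Which actions each agent persona may execute autonomously (illustrative).
POLICY = {
    "event_intelligence_agent": {"correlate_alerts", "open_ticket"},
    "remediation_agent": {"restart_service"},
}

AUDIT_LOG: list[dict] = []


def execute(persona: str, action: str, params: dict) -> bool:
    """Run an agent action only if the persona is allowed; record every attempt."""
    allowed = action in POLICY.get(persona, set())
    AUDIT_LOG.append({
        "ts": time.time(),
        "persona": persona,
        "action": action,
        "params": params,
        "allowed": allowed,
    })
    if not allowed:
        return False  # blocked; the attempt remains visible in the audit trail
    # ... dispatch to the real action handler here ...
    return True


if __name__ == "__main__":
    execute("event_intelligence_agent", "open_ticket", {"severity": "P2"})
    execute("event_intelligence_agent", "restart_service", {"name": "db"})  # denied
    print(json.dumps(AUDIT_LOG, indent=2))
```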

Three Tips For The C-Suite On How To Balance Innovation And Cyber Risk

Forbes · 3 hours ago

Hed Kovetz is the CEO and Co-Founder of Silverfort, provider of the first Unified Identity Protection Platform, and a cybersecurity expert.

Since the launch of ChatGPT in the fall of 2022, AI's potential—much like the rise of the internet—has captivated the world. In 2023, it was about experimentation. In 2024, organizations across industries began to implement AI to drive efficiency, especially by answering questions and writing code, emails and content. 2025 is the year in which AI will start doing tasks and making decisions. I believe this will truly set the leaders apart from the laggards, changing industries forever.

As AI evolves from a buzzword to a practical, everyday tool, CEOs are racing against the clock to integrate AI into their workflows and products. One advancement that is gaining particular traction is agentic AI. As foundational AI models improve, AI agents are becoming the frontier for AI improvement and research. They can boost productivity, transform industries and enable a future where human-AI collaboration is essential for success. However, as CEOs are eager to reap the pros of adopting AI agents and other AI advancements, they must understand the cons. According to Gartner, agents create significant cybersecurity risks: by 2028, 25% of cyber breaches will be linked to AI agent misuse. This includes supply chain vulnerabilities, AI hallucinations, unauthorized access to data, personally identifiable information (PII) leaks and more.

Why is that? For AI agents to work effectively, they require broad read and write permissions to various systems and company resources. However, granting AI agents access to a diverse set of endpoints creates a new, previously unsecured attack surface for bad actors to take advantage of. Today, organizations are gradually adopting security solutions that monitor and alert on compromised non-human identities (NHIs), including digital identities used by machines, applications and automated scripts. This investment is highly important, but unfortunately, AI agents don't work like the other types of NHIs for which these solutions are built. Until now, it was usually simple to understand how NHIs behave because their access activity was repetitive and predictable. This is not the case with AI agents. The predictability is gone because large language model (LLM) outputs are non-deterministic by nature. Much like humans, they think, learn and act in ways that are not always foreseeable. Continuous, end-to-end monitoring across the network is important for an organization to track what AI agents are actually doing and block undesired activity. This must also include the human users who may be utilizing the AI agent.

So, how can organizations navigate this delicate balance? How can they drive innovation and adopt AI agents to stay ahead of competitors without exposing themselves to significant risk? Here are three key tips for the C-suite to consider:

As organizations race to adopt AI and stay ahead of competitors, they often turn to the chief information security officer (CISO) to ensure implementation is done with security in mind. However, AI agents are so new, complex and rapidly evolving that CISOs are not always fully prepared to manage the associated risks. Despite this, they face immense pressure from their businesses to implement quickly. This leaves CISOs in a difficult position.
They must adopt AI agents fast while navigating significant security concerns, often without adequate safeguards in place and without enough time. To avoid serious consequences, including misuse or mishandling of sensitive data, it is imperative that the C-suite closely align with the CISO to better understand the risks they face and agree on a strategy that ensures the business continues to innovate and move forward without sacrificing security. Otherwise, one breach could put the entire company and its reputation at risk.

A significant gap in AI security is the lack of focus on protection from the AI agents themselves—how they access systems, what identities they assume and what those identities are doing. The new MCP protocol specification, which governs an AI agent's ability to interact with external services, addresses authorization. However, there are still major gaps that need to be addressed as it rapidly matures. Most discussions around AI security focus on protecting data by ensuring it isn't manipulated or used to train public models, but they don't address the broader issue of securing AI agents' access to critical systems. These agents interact with various identities and systems, raising concerns about whether they could inadvertently or maliciously access sensitive data or perform unauthorized actions. To avoid this, organizations need to ensure AI agents only have access to what they need and aren't granted broad privileged access. Furthermore, given that AI agents work much like the human brain, limiting access can help organizations ensure that AI agents cannot learn and train on proprietary data, such as out-of-scope customer data or intellectual property. This is also important for protecting the organization from a legal standpoint. You don't want to open the door to a potential legal battle after inadvertently sharing sensitive customer data externally or granting your competitors access to your secret sauce.

While AI agents pose significant security risks, they also have the potential to improve an organization's security posture. CEOs should encourage CISOs to leverage AI agents proactively to identify and remediate risks and vulnerabilities while defending against adversarial AI in real time. Much like a traffic cop directs the flow of vehicles to ensure a smooth and safe environment, AI agents have the potential to oversee and coordinate the actions of other agents and security tools to achieve better security.

My intent here isn't to discourage or scare organizations from progress. I strongly believe in AI's potential. But much like any technology, it is imperative that leaders understand the challenges and risks they are up against. By clearly understanding the nature of AI agents, their potential risks and steps to mitigate those risks, C-suite leaders can better prepare for this massive change while ensuring the long-term health of the business.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
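As a minimal sketch of the least-privilege point above, the snippet below limits an agent identity to an explicit allow-list instead of broad read/write access. The resource names and the deny-by-default helper (AGENT_SCOPES, authorize) are invented for illustration, not a specific product's interface.

```python
# Sketch of least-privilege scoping for an AI agent identity: deny by default,
# allow only the specific (action, resource) pairs the agent actually needs.
AGENT_SCOPES = {
    "support_agent": {("read", "tickets"), ("write", "tickets")},
    # Note: no access to "customer_pii" or "source_code" is granted at all.
}


def authorize(agent: str, action: str, resource: str) -> bool:
    """Return True only if the agent was explicitly granted this access."""
    return (action, resource) in AGENT_SCOPES.get(agent, set())


if __name__ == "__main__":
    print(authorize("support_agent", "read", "tickets"))       # True
    print(authorize("support_agent", "read", "customer_pii"))  # False: out of scope
```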

Tony G Co-Investment Holdings Announces Proposed Name Change

Yahoo · 3 hours ago

Toronto, Ontario--(Newsfile Corp. - June 13, 2025) - Tony G Co-Investment Holdings Ltd. (CSE: TONY) (the "Company") is pleased to announce that it intends to file articles of amendment to change its corporate name from "Tony G Co-Investment Holdings Ltd." to "HYLQ Strategy Corp." (the "Name Change"). The Name Change is expected to take effect on or around June 19, 2025.

The Name Change reflects the Company's investments in the HyperLiquid ecosystem, which fall within the Company's investment policy and mandate. As the digital asset market continues to rapidly evolve and mature, so has the Company, and the rebranding reflects that evolution. Concurrently with the completion of the proposed Name Change, the Company's trading symbol on the Canadian Securities Exchange is expected to change to "HYLQ". Further details regarding the Name Change - including the effective date, new CUSIP and ISIN numbers for the Company's common shares, and the date on which trading will begin under the new ticker symbol - will be provided in a subsequent news release.

The Name Change was approved by shareholders of the Company at its annual and special meeting held on August 16, 2024. No action will be required by existing shareholders with respect to the Name Change. Share certificates representing common shares of the Company will not be affected and will not need to be exchanged.

For more information, please contact:
Matt Zahab
Chief Executive Officer
Tel: (647) 365-2867
Email: contact@

This news release contains certain "forward-looking information" within the meaning of applicable securities laws. Forward-looking information is frequently characterized by words such as "plan", "expect", "project", "intend", "believe", "anticipate", "estimate", "may", "will", "would", "potential", "proposed" and other similar words, or statements that certain events or conditions "may" or "will" occur. These statements are only predictions. Forward-looking information is based on the opinions and estimates of management at the date the information is provided, and is subject to a variety of risks and uncertainties and other factors that could cause actual events or results to differ materially from those projected in the forward-looking information. For a description of the risks and uncertainties facing the Company and its business and affairs, readers should refer to the Company's Management's Discussion and Analysis. The Company undertakes no obligation to update forward-looking information if circumstances or management's estimates or opinions should change, unless required by law. The reader is cautioned not to place undue reliance on forward-looking information.

Neither the Canadian Securities Exchange nor its Regulation Services Provider (as that term is defined in policies of the Canadian Securities Exchange) accepts responsibility for the adequacy or accuracy of this news release.
