Latest news with #AILeaders


Forbes
09-07-2025
- Business
- Forbes
The Silent Breach: Why Agentic AI Demands New Oversight
Keren Katz is an AI and security leader with 10 years in management and hands-on roles who leads Security Detection at Apex Security.

Agentic AI is moving fast, and enterprises are racing to deploy it. These agents don't just assist—they reason, make decisions and take action across systems. That shift redefines risk, not through breaches, but through success in the wrong context. A legal agent discloses draft merger and acquisition terms. A finance agent exposes forecasts. These aren't technical bugs. They're leadership blind spots.

The Rise Of Enterprise Agents

Agentic AI is reshaping enterprise software. These systems are evolving from passive tools into semi-autonomous agents that can interpret user instructions, select appropriate tools or workflows and execute tasks across integrated systems, within the limits of predefined permissions and controls.

According to Gartner, by 2028, 33% of enterprise software applications will embed agentic AI, up from less than 1% in 2024. More strikingly, Gartner projects that 15% of all business decisions will be made autonomously—a significant rise from none today. This future is arriving quickly, bringing new forms of risk that traditional security frameworks weren't designed to handle.

The Emerging Threat Surfaces Of Agentic AI

Agentic AI introduces risk in motion, arising from the way agents are prompted, how they reason and what they execute. Understanding these surfaces is key to controlling their impact. Let's break it down.

The most alarming threats from agentic AI don't always stem from external attackers. They often originate inside the organization, from employees issuing prompts that seem routine or from individuals with malicious intent who understand how to exploit the system's capabilities. In both cases, the agent's lack of contextual understanding becomes a liability. Here are three examples of prompts that could trigger high-risk actions:

• 'Transfer the remaining budget from R&D to the following bank account.'
• 'Send the latest board presentation to our external legal team.'
• 'Push the revised quarterly revenue forecast to the investor portal.'

Whether the intent is efficiency or exploitation, these prompts can trigger high-stakes actions—touching core business workflows or exposing sensitive data—and agents will carry them out without hesitation.

Even more subtly, every company has its red lines. For a bank, it might be automating regulatory reporting. For a biotech, it might be accessing patient trial data. These company-specific intentions can't be addressed with generic filters. They require granular, policy-driven definitions of risk rooted in business operations, not just security protocols.

Unlike traditional software, agentic AI doesn't follow fixed logic. It reasons across multiple steps, fills gaps and adapts dynamically to achieve its goal. This flexibility is powerful, but it introduces a second critical threat surface: non-determinism. That risk becomes clear in scenarios where seemingly reasonable prompts lead to harmful autonomous decisions, such as:

• An operations agent prematurely pushes configuration changes to production, causing system downtime and disrupting critical services.
• A legal agent updates contract templates and pushes unapproved changes live, binding the company to terms never reviewed by counsel.
• A customer success agent resolves a billing issue by granting a full-year refund instead of one month, resulting in an unexpected financial loss.

These aren't edge cases—they're the direct result of agents improvising in context-poor environments, without business policy awareness or human judgment. While the user prompt may seem safe, the execution path becomes risky as the agent makes autonomous decisions. To mitigate this, companies must monitor agent behavior as it unfolds, not just the initial prompt or the final output. Mid-task intent detection is now essential to prevent agents from escalating simple requests into strategic liabilities.
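To make the idea concrete, here is a minimal, hypothetical sketch in Python of such a mid-task checkpoint: every action the agent plans is checked against company-specific red lines before it executes, with a human approval gate on policy hits. PolicyRule, run_with_checkpoints and both example rules are invented for this sketch, not taken from the article or any specific framework.

```python
# Hypothetical sketch of mid-task intent detection, assuming an agent
# whose plan is a list of tool-call dictionaries.
from dataclasses import dataclass
from typing import Callable

APPROVED_ACCOUNTS = {"ops-payroll", "ops-vendors"}  # a company-specific red line

@dataclass
class PolicyRule:
    name: str
    matches: Callable[[dict], bool]  # does a planned action cross this line?

RULES = [
    PolicyRule(
        name="external fund transfer",
        matches=lambda a: a.get("tool") == "transfer_funds"
        and a.get("account") not in APPROVED_ACCOUNTS,
    ),
    PolicyRule(
        name="external document share",
        matches=lambda a: a.get("tool") == "send_document"
        and a.get("recipient_domain") != "ourcompany.com",
    ),
]

def run_with_checkpoints(plan: list[dict], execute, ask_human) -> None:
    """Check every planned action as the task unfolds, not just the
    initial prompt or the final output; escalate policy hits to a human."""
    for action in plan:
        hits = [rule for rule in RULES if rule.matches(action)]
        if hits and not ask_human(action, hits):
            print(f"Blocked {action['tool']} (policy: {hits[0].name})")
            continue  # skip the risky step instead of executing it
        execute(action)

# Example: the agent improvised a transfer to an unapproved account.
plan = [{"tool": "transfer_funds", "account": "unknown-external", "amount": 50000}]
run_with_checkpoints(plan,
                     execute=lambda a: print("executed", a),
                     ask_human=lambda a, hits: False)  # approver declines
```

The design point is where the check sits: between the agent's reasoning and the tool execution, so a policy hit pauses or blocks the step rather than being discovered afterward in the output.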
Even with strong guardrails, some agent actions will slip through. That's why it's critical to maintain accurate visibility into what the agent did after the fact—what it accessed, modified or communicated. This serves as your last line of defense, enabling timely alerts when risky actions are detected, incident response backed by detailed activity logs, and retrospective audits to refine policies and adjust safeguards. Without visibility into downstream actions, organizations remain blind to the full impact of agent behavior. And when autonomous systems operate without oversight, even a single unnoticed action can lead to financial loss, data exposure or operational disruption.

What Executives Can Do Now

This isn't a call to pause agentic AI adoption. It's a call to govern it with intent. Done right, agents can accelerate productivity, unlock automation and free up human creativity. But to do it safely, leaders need a new strategic playbook.

Work with business units to identify which tasks or processes pose the highest risk if automated. Build intent-detection models that go beyond keywords to understand what the user is actually trying to accomplish. This makes it possible to prevent risky workflows before they occur and to surface high-risk user profiles for long-term monitoring.

Don't just evaluate inputs and outputs—intercept the agent's chain of reasoning mid-task. Insert checkpoints, human approvals or escalation triggers in sensitive flows to halt unsafe behavior before it unfolds, and to continuously update the agent's context in line with company policy.

Treat agent behavior like system activity: log it, monitor it and investigate anomalies (a minimal logging sketch follows at the end of this article). Over time, this data helps refine what 'risky' looks like in your environment, uncovers blind spots and guides how future agent interactions are governed.

Autonomy and safety aren't opposites. By designing policies around intent—not just identity—you can preserve speed while reducing exposure. The goal isn't to slow the agent down. It's to ensure it acts within the boundaries that leadership defines.

The Bottom Line—Lead The Agents Before They Lead You

Agentic AI is reshaping enterprise operations—and it's not slowing down. The imperative isn't to halt innovation, but to ensure agents act safely, reliably and in service of the business. That means governing intent and holding AI to the same standards we expect from people: smart enough to act, but guided by integrity and clear boundaries.
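To make the logging recommendation above concrete, here is a minimal, hypothetical sketch of a structured audit record for agent tool calls. The function name, record fields and risk tags are invented for illustration; they do not come from the article or any specific product.

```python
# Hypothetical sketch: one structured audit record per agent tool call,
# with risk tags that can drive timely alerts and later audits.
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("agent.audit")

def log_agent_action(agent_id: str, tool: str, params: dict,
                     result_summary: str, risk_tags: list[str]) -> None:
    """Record what an agent accessed, modified or communicated."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,
        "tool": tool,
        "params": params,        # redact secrets before logging in practice
        "result": result_summary,
        "risk_tags": risk_tags,  # e.g. ["external_share", "finance"]
    }
    audit.info(json.dumps(record))
    if risk_tags:                # hook for timely alerts on risky actions
        audit.warning(f"ALERT {record['event_id']}: {tool} tagged {risk_tags}")

# Example: a finance agent pushing a forecast to an external portal.
log_agent_action(
    agent_id="finance-agent-7",
    tool="publish_document",
    params={"doc": "Q3-forecast.xlsx", "dest": "investor-portal"},
    result_summary="published",
    risk_tags=["external_share"],
)
```

One record per tool call supports the three uses the article names: timely alerts on risky actions, incident response backed by detailed activity logs, and retrospective audits to refine policies.
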
Yahoo
09-07-2025
- Business
- Yahoo
Tredence Launches Agentic AI Playbook for CDAOs to Scale Enterprise Modernization
Designed for enterprises moving beyond experimentation, the playbook challenges outdated leadership models in the GenAI age

SAN JOSE, Calif. and BENGALURU, India, July 9, 2025 /PRNewswire/ -- Tredence today announced the launch of the Agentic AI Playbook, a first-of-its-kind strategic guide for CDAOs and AI leaders navigating the transition from AI pilots to enterprise-scale modernization. The playbook offers a bold, practical framework for reimagining workflows, decision-making structures, and leadership models in the GenAI era.

As enterprises race to deploy AI across business functions, the Agentic AI Playbook challenges the prevailing focus on tools and models. Instead, it urges leaders to address the fundamental question: How must organizations evolve when AI agents become central to decision-making and execution?

Unlike typical AI reports focused on tools and trends, the playbook offers a contrarian perspective: the biggest risk with AI is not misuse, it's underuse due to outdated organizational design. The playbook positions AI not as a bolt-on solution but as a force reshaping workflows, decision rights, and business models.

"Many Agentic AI discussions today are still tactical—focused on use cases, models, and tools. But leaders don't scale strategy through pilots," said Sumit Mehra, Co-founder and CTO of Tredence. "This playbook is built for those designing organizations where humans and machines are peers in decision-making. That shift requires new mental models, not just new tech."

The Agentic AI Vision Playbook is anchored in five strategic lenses:

• Business Value Realization: Structuring AI initiatives to deliver measurable ROI, sustain stakeholder engagement, and maximize long-term value.
• Human + AI Agents = Co-Intelligence: Redefining the role of humans in an AI-automated world and ensuring alignment between human strategy and machine execution.
• Business Process Reengineering: Using decision intelligence and Agentic AI systems to automate and optimize end-to-end workflows.
• Technology Evolution: Adapting to emerging AI innovations such as quantum computing, brain-computer interfaces, and small, domain-specific AI models.
• Governance & Compliance: Creating agile compliance frameworks that embed responsible AI principles, integrate new regulations, and scale AI adoption across organizations and ecosystems.

Each lens is mapped across three phases of maturity:

• Now – What leaders must act on in the next 12 months
• New – How operating models and systems evolve in 2 to 3 years
• Next – What long-term leadership looks like in AI-native organizations

The playbook distills insights from Tredence's cross-industry work with Fortune 500 clients and was co-developed with perspectives from executives at Mars, Nestlé, Casey's, Databricks, Google Cloud, Snowflake, Forrester, and IDC, among others.

The playbook provides strategies to embed AI agents across enterprises—streamlining supply chains, strengthening data governance, and transforming customer experiences through real-time insights and automation.

"We've seen AI pilots fail not due to technology, but because organizations weren't ready—lacking clear decision structures, governance, and accountability for human-machine collaboration," said Soumendra Mohanty, Chief Strategy Officer at Tredence. "As AI agents take on more decisions, leaders must rethink when humans stay in, oversee, or step back from the loop. This playbook guides leaders to build the right systems, teams, and mindsets to scale GenAI successfully."
The full playbook is available for download.

About Tredence

Tredence is a global data science and AI solutions provider focused on solving the last-mile problem in AI – the gap between insight creation and value realization. Tredence leverages deep domain expertise, data platforms and accelerators, and strategic partnerships to provide targeted, impactful solutions to its clients. The company has 3,500+ employees across the San Francisco Bay Area, Chicago, London, Toronto, and Bengaluru, serving top brands in Retail, CPG, Hi-tech, Telecom, Healthcare, Travel, and Industrials. For more information, follow us on LinkedIn.

SOURCE Tredence


CBC
23-05-2025
- Politics
- CBC
Canada's AI minister, explained
Former journalist Evan Solomon is Canada's first minister of artificial intelligence. His exact mandate is still under wraps, but Canadian AI leaders say there are a few areas his department could tackle first.


CNET
16-05-2025
- Business
- CNET
You Can Get a Google AI Certification for $99. Or Just Do the Training for Free
You can't scroll through LinkedIn for more than 30 seconds without running across someone telling you that generative AI is either going to take your job or change it dramatically. Maybe the loudest refrain is that gen AI won't take your job, but someone who uses it will. So how do you demonstrate to a current or future employer that you're one of the people who use it?

Google has an idea: This week, Google Cloud unveiled a "Generative AI Leader" certification, which involves taking a multiple-choice test to demonstrate your knowledge of the technology. The exam costs $99. But if you're curious about AI and want to learn more from Google, the training course -- which runs seven to eight hours -- is totally free.

Spending the money to take the test and get a certification is one thing. Credentials can be valuable in a job hunt or in bargaining for a promotion, but the skills are probably more important. And even if gen AI isn't something you think you will or should use at work, understanding how it works and what it's capable of is perhaps more important. While some companies are going all-in on AI, others are placing more value on your human skills.

What's in Google Cloud's Generative AI Leader course?

The training path includes a few different categories of training:

• One segment of basic concepts around generative AI beyond just chatbots.
• A section on how large language models and other machine learning systems work.
• A look at the broader tech space where AI exists in the workplace (with a focus, naturally, on Google Cloud).
• A practical examination of gen AI applications and tools at work.
• An overview of AI agents -- tools that can do things on your behalf.

The course includes videos and interactive components, with exercises using Google's Gemini model and other tools.

Other ways to learn about AI

Gen AI is everywhere these days, and that means courses and trainings on how to use it are multiplying almost daily. Google isn't alone in offering something. Microsoft has a special "AI Skills Fest" promotion running through May 28 with a wide range of trainings and educational sessions for free. Just as the Google Cloud course emphasizes Google's tech, expect a focus on Copilot and other Microsoft tools here. LinkedIn last week announced it will make its 10 most popular courses on AI free to all members through the end of May.

We here at CNET have put together plenty of our own guides on gen AI to help you out if you're AI-curious. A good place to start is all the tips in our AI Essentials guide.