Why companies implementing agentic AI before putting proper governance in place will end up behind, not ahead of, the curve

Fast Company · 2 days ago

Agentic AI is the buzzword of 2025. Although still technically an "emerging technology," companies of all sizes are racing to develop and acquire AI agents to stay ahead of the curve and the competition. Just last week, OpenAI launched a research preview of Codex, the company's cloud-based software engineering agent and its "most capable AI coding agent yet." The interest and excitement are understandable.
Transforming industries
From customer service to supply chain management to the legal profession, AI agents are set to transform industries across the board. They are already proving pervasive across both consumer and enterprise environments, bringing AI fully into the mainstream. Unlike chatbots and image generators, which provide answers but require prompts, AI agents execute multistep tasks on behalf of users. In 2025, these autonomous software programs will dramatically change how people interact with technology and how businesses operate.
This aligns with Forrester's latest findings, which placed agentic AI at the top of its Top 10 Emerging Technologies for 2025 report, highlighting the power and potential of this emerging trend. However, as the report also points out, the rewards come with big risks and challenges. Let's dive into those risks, and into why companies must prioritize governance before development and implementation in order to stay ahead of, not behind, the curve and their competition.
A Governance-First Approach
In just three years, at least 15% of day-to-day work decisions will be made autonomously by AI agents, up from virtually 0% in 2024. This prediction by Gartner, while promising, sits alongside another key stat: 25% of enterprise breaches will be linked to AI agent abuse. The rapid and widespread adoption of AI agents, while exciting, comes with complex challenges, such as shadow AI, which is why companies must prioritize a governance-first approach.
So what is it about AI agents that makes them particularly challenging to control?
Short answer: their ability to operate autonomously. Long answer: this technology makes it difficult for organizations to maintain visibility into four things:
Who owns which agent
Which department oversees them
What data they have access to
What actions they can take
So how do you effectively govern them?
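One lightweight way to restore that visibility is a central agent registry that records an owner, an overseeing department, data scopes, and allowed actions for every agent before it ships. The sketch below is a hypothetical, minimal illustration; the `AgentRecord` and `AgentRegistry` names are invented for this example, not any vendor's API:

```python
from dataclasses import dataclass

# Hypothetical sketch, not a real product's API: a registry answering the
# four visibility questions above for every agent an organization runs.

@dataclass
class AgentRecord:
    name: str
    owner: str             # who owns this agent
    department: str        # which department oversees it
    data_scopes: set       # what data it may access
    allowed_actions: set   # what actions it may take

class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.name] = record

    def can(self, agent: str, action: str, scope: str) -> bool:
        """Default deny: an agent may act only on scopes and actions
        explicitly granted at registration time."""
        rec = self._agents.get(agent)
        if rec is None:
            return False  # an unregistered agent is shadow AI: deny
        return action in rec.allowed_actions and scope in rec.data_scopes
```

With a default-deny check like `can()`, an agent nobody registered, i.e. shadow AI, simply has no permissions at all.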
A comprehensive approach
This is where unified governance steps in. With a comprehensive governance framework, companies can ensure that AI agents operate responsibly and in line with organizational standards and policies. The alternative: without a governance framework, AI agents can mishandle sensitive data, violate compliance regulations, and make decisions misaligned with business objectives.
Let's use a real-world example: you are the CEO of a major organization. Your company builds and introduces an AI-powered assistant to help automate workflows and save you time. Now imagine that the assistant gains access to your confidential files. Without guidance or governance, the assistant summarizes sensitive financial projections and closed-door board discussions and shares them with third-party vendors or unauthorized employees. This is a worst-case scenario, but it highlights the importance of a solid governance framework.
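A sketch of the guardrail missing from that scenario: before the assistant shares anything, compare the document's sensitivity label against the recipient's clearance and block the action on any mismatch. The labels and function names below are illustrative assumptions, not a real product's policy model:

```python
# Illustrative guardrail with made-up labels: sharing is allowed only when
# the recipient's clearance meets or exceeds the document's sensitivity.

SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2}

def may_share(doc_label: str, clearance: str) -> bool:
    """Default deny: unknown labels or clearances block the share."""
    doc = SENSITIVITY.get(doc_label)
    rec = SENSITIVITY.get(clearance)
    if doc is None or rec is None:
        return False
    return rec >= doc

def share(doc_label: str, clearance: str, send) -> str:
    """Wrap the assistant's send action: block and flag instead of leaking."""
    if may_share(doc_label, clearance):
        return send()
    return "blocked: escalated for human review"
```

Default-deny matters here: a document with an unknown or missing label (say, mislabeled board minutes) is treated as unshareable rather than public.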
Here's a helpful governance checklist:
Establish guidelines that clearly define acceptable use and assign accountability.
Carry out regular reviews to help identify and mitigate potential risks and threats.
Appoint the right stakeholder to foster transparency and build trust in how AI agents are used internally and externally.
Blurred lines
According to Sunil Soares, founder of YDC, "Agentic AI will drive the need for new governance approaches. As more applications include embedded AI and AI agents, the line between applications and AI use cases will become increasingly blurred." I couldn't agree more.
Whether you develop AI agents internally or partner with a third-party vendor, this technology will unlock significant value. But the challenges are not one-size-fits-all, and they will not go away. And while the human element remains important, manual oversight on its own is neither sufficient nor realistic at scale.
Therefore, when you build out your governance framework, ensure that you have automated monitoring tools in place that detect and correct policy violations, record decisions for greater transparency, and escalate complex cases that require additional oversight to a human-in-the-loop. A centralized governance framework ensures accountability, risk assessment, and ethical compliance. Like everything else in life, you need to establish boundaries.
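As a concrete illustration of those three duties, detecting violations, recording decisions, and escalating the rest, here is a hypothetical monitoring hook; the policy rules and action names are invented for the example, not any real framework's API:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-governance")

# Invented policy for illustration: actions that always violate policy,
# and actions complex enough to require a human-in-the-loop.
POLICY_VIOLATIONS = {"share_externally", "delete_records"}
NEEDS_HUMAN = {"approve_payment"}

audit_trail = []  # append-only record of every decision, for transparency

def monitor(agent: str, action: str) -> str:
    """Return 'blocked', 'escalated', or 'allowed' for an attempted
    action, recording and logging the decision either way."""
    if action in POLICY_VIOLATIONS:
        decision = "blocked"      # detect and correct the violation
    elif action in NEEDS_HUMAN:
        decision = "escalated"    # hand off to a human reviewer
    else:
        decision = "allowed"
    audit_trail.append((agent, action, decision))
    log.info("%s attempted %s -> %s", agent, action, decision)
    return decision
```

In practice the `audit_trail` would live in tamper-evident storage, and the "escalated" branch would open a ticket for a human reviewer rather than just return a string.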
And don't worry: implementing a governance framework first won't slow innovation down. When you find the right balance between innovation and risk management, you stay ahead of the curve and the competition, leaving room for more cutting-edge AI agents and fewer headaches. The surest remedy is to deploy a unified governance platform for data and AI; it will be the key to ensuring AI agents don't become the next shadow IT.


Related Articles

Securonix Acquires ThreatQuotient to Deliver Industry's Broadest and Deepest Threat Detection Investigation and Response

Business Wire

34 minutes ago



PLANO, Texas--(BUSINESS WIRE)--Today, Securonix, a five-time Leader in the Gartner® Magic Quadrant™ for Security Information and Event Management (SIEM), announced the acquisition of ThreatQuotient, a four-time leader in threat intelligence based on the QKS Group Spark Matrix report and the force behind ThreatQ, the most innovative external threat intelligence platform. This combination will create a comprehensive, modular, and fully integrated AI-driven platform for threat detection, investigation, and response (TDIR), leveraging advanced analytics and insights across both internal and external threats.

This acquisition accelerates the modernization of security operations by uniting internal and external threat intelligence with real-time analytics and agentic AI. Unlike external threat intelligence bolt-on solutions with disconnected management interfaces, the integrated platform from Securonix and ThreatQuotient will deliver unified visibility, faster response, and greater operational clarity.

"Bringing threat intelligence management and SIEM together in a unified platform is a game changer. We've already seen the value of deeply enriched advanced analytics and detection in our Securonix SIEM environment—but coupling that with integrated threat curation, prioritization, and response should help customers move even faster. It means fewer swivel-chair investigations, more accurate triage, and greater confidence that security analysts are working with the most relevant threats. This kind of integration has the potential to accelerate the ability to detect, respond, and stay ahead," said Marcel Jonker, Director of Cybersecurity Operations at Cambia Health Systems.

The integration of Securonix and ThreatQuotient promises to deliver up to a 70% reduction in Mean Time to Respond (MTTR), enabling security teams to detect, investigate, and remediate threats significantly faster.
By combining curated threat intelligence with AI-driven automation, the solution will deliver exponential improvements in filtering out false positives, enriching alerts with actionable context, and automating historical threat sweeps and incident response. This reduces alert overload, speeds up root cause analysis, and minimizes manual handoffs—cutting investigation time from hours to minutes and enabling automated containment before threats escalate.

"Security teams are drowning in noise and struggling to keep up with evolving threats," said Kash Shaikh, CEO and President of Securonix. "This acquisition brings together Securonix's Agentic AI-driven Platform with ThreatQuotient's deep threat intelligence to deliver clarity, speed, and automated workflows to our customers, reducing false positives by up to 90%. Together, we're building the modern SOC Platform—proactive, intelligent, and built for what's next." Kash added, "Securonix and ThreatQuotient bring together complementary strengths—deep innovation across internal and external threat domains, and a shared commitment to innovation and customer service. Both companies serve enterprise and government customers as well as managed security service providers (MSSPs), and we're excited to welcome the talented ThreatQuotient team and their customers to Securonix."

Purpose-Built for Analysts. Proven Against Real-World Threats.

ThreatQuotient's Threat Intelligence Platform (TIP) strength lies in delivering curated, contextualized threat intelligence that drives smart, timely decisions. When combined with Securonix's EON Agentic AI-based SIEM, SOAR, UEBA and Data Pipeline Manager, customers can accelerate their migration from reactive threat hunting-based defense to proactive, real-time, behavior-driven, open-agentic security operations.
With this integration, Securonix customers and partners will enjoy the following benefits:
Gain clear visibility: Integrate deeply enriched real-time analytics from Securonix with curated external intelligence from ThreatQuotient to create a single, high-context stream of alerts. Eliminate blind spots and accelerate threat identification with confidence.
Stay ahead of risk: Auto-enrich indicators of compromise (IoCs) and preemptively respond to repeat attacks, blocking 90 percent before they start.
Act smarter: Automate repetitive tasks, reduce false positives, and streamline investigations. Teams can stay focused on high-priority threats and reduce time spent on manual triage.
Deploy your way: Continue to use ThreatQ as a standalone threat intelligence platform or as part of the fully integrated Securonix solution. Deploy on-premise or SaaS in a way that fits the current architecture and scales with needs.
Accelerated roadmap: Combined R&D synergies will accelerate upcoming roadmap innovations, including Agentic AI and ThreatQuotient's innovation priorities.

With this acquisition, ThreatQuotient customers and partners will enjoy the following benefits:
Increased scale: ThreatQ customers can take advantage of Securonix's global R&D scale and GTM reach, including access to Securonix's Threat Labs intelligence.
Deeper integrations: Gain access to an enriched roadmap and integration between Securonix's best-in-class SIEM, SOAR, and UEBA portfolio and ThreatQ, including extension of Agentic AI advancements.
Continued focus: Zero interruption of existing service, as ThreatQuotient will continue to operate as a standalone offering, with no disruption to existing roadmaps and workflows.

"Enterprises, government institutions and managed security service providers rely on ThreatQuotient to protect their mission critical businesses. Joining Securonix marks a powerful new chapter for ThreatQuotient.
By uniting our strengths, we can accelerate innovation, expand our reach, and deliver greater value to our customers. I'm proud of what we've built and excited for what's ahead," said John Czupak, CEO of ThreatQuotient.

BTIG, LLC served as exclusive financial advisor, and King & Spalding LLP served as legal advisor to ThreatQuotient. Vinson & Elkins LLP served as legal advisor to Securonix.

About Securonix

Securonix is leading the transformation of cybersecurity with the industry's first Unified Defense SIEM powered by agentic AI, purpose-built to decide and act across the threat lifecycle with a human-in-the-loop philosophy. Built for scale, precision, and speed, our cloud-native platform empowers global enterprises to shift from reactive security to proactive, autonomous operations. Recognized as a Leader in the Gartner® Magic Quadrant™ for SIEM and a Customers' Choice by Gartner Peer Insights™, Securonix is driving the next era of intelligent, autonomous security operations. Learn more at

About ThreatQuotient

ThreatQuotient improves security operations by fusing disparate data sources, tools, and teams to accelerate threat detection, investigation, and response (TDIR). ThreatQ is the first purpose-built, data-driven threat intelligence platform that helps teams prioritize, automate, and collaborate on security incidents; enables more focused decision-making; and maximizes limited resources by integrating existing processes and technologies into a unified workspace. The result is reduced noise, clear priority threats, and the ability to automate processes with high-fidelity data. ThreatQuotient's leading integration marketplace, data management, orchestration, and automation capabilities support multiple use cases including threat intelligence management and sharing, incident response, threat hunting, spear phishing, alert triage, and vulnerability management. For more information, visit

New York passes a bill to prevent AI-fueled disasters

Yahoo

2 hours ago



New York state lawmakers passed a bill on Thursday that aims to prevent frontier AI models from OpenAI, Google, and Anthropic from contributing to disaster scenarios, including the death or injury of more than 100 people, or more than $1 billion in damages. The passage of the RAISE Act represents a win for the AI safety movement, which has lost ground in recent years as Silicon Valley and the Trump Administration have prioritized speed and innovation. Safety advocates, including Nobel Prize laureate Geoffrey Hinton and AI research pioneer Yoshua Bengio, have championed the RAISE Act. Should it become law, the bill would establish America's first set of legally mandated transparency standards for frontier AI labs.

The RAISE Act has some of the same provisions and goals as California's controversial AI safety bill, SB 1047, which was ultimately vetoed. However, the bill's co-sponsor, New York state Senator Andrew Gounardes, told TechCrunch in an interview that he deliberately designed the RAISE Act so that it doesn't chill innovation among startups or academic researchers, a common criticism of SB 1047.

"The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving," said Senator Gounardes. "The people that know [AI] the best say that these risks are incredibly likely […] That's alarming."

The RAISE Act is now headed for New York Governor Kathy Hochul's desk, where she could either sign the bill into law, send it back for amendments, or veto it altogether. If signed into law, New York's AI safety bill would require the world's largest AI labs to publish thorough safety and security reports on their frontier AI models. The bill also requires AI labs to report safety incidents, such as concerning AI model behavior or bad actors stealing an AI model, should they happen. If tech companies fail to live up to these standards, the RAISE Act empowers New York's Attorney General to bring civil penalties of up to $30 million.
The RAISE Act aims to narrowly regulate the world's largest companies, whether they're based in California (like OpenAI and Google) or China (like DeepSeek and Alibaba). The bill's transparency requirements apply to companies whose AI models were trained using more than $100 million in computing resources (seemingly, more than any AI model available today) and are being made available to New York residents.

While similar to SB 1047 in some ways, the RAISE Act was designed to address criticisms of previous AI safety bills, according to Nathan Calvin, Vice President of State Affairs and General Counsel at Encode, who worked on this bill and SB 1047. Notably, the RAISE Act does not require AI model developers to include a "kill switch" on their models, nor does it hold companies that post-train frontier AI models accountable for critical harms.

Nevertheless, Silicon Valley has pushed back significantly on New York's AI safety bill, New York state Assemblymember and RAISE Act co-sponsor Alex Bores told TechCrunch. Bores called the industry resistance unsurprising but claimed that the RAISE Act would not limit tech companies' innovation in any way.

"The NY RAISE Act is yet another stupid, stupid state level AI bill that will only hurt the US at a time when our adversaries are racing ahead," said Andreessen Horowitz general partner Anjney Midha in a Friday post on X. Andreessen Horowitz, alongside the startup incubator Y Combinator, was among the fiercest opponents of SB 1047.

Anthropic, the safety-focused AI lab that called for federal transparency standards for AI companies earlier this month, has not taken an official stance on the bill, co-founder Jack Clark said in a Friday post on X. However, Clark expressed some grievances over how broad the RAISE Act is, noting that it could present a risk to "smaller companies."
When asked about Anthropic's criticism, state Senator Gounardes told TechCrunch he thought it "misses the mark," noting that he designed the bill not to apply to small companies. OpenAI, Google, and Meta did not respond to TechCrunch's request for comment.

Another common criticism of the RAISE Act is that AI model developers simply wouldn't offer their most advanced AI models in the state of New York. That criticism was also leveled at SB 1047, and it's largely what has played out in Europe thanks to the continent's tough regulations on technology. Assemblymember Bores told TechCrunch that the regulatory burden of the RAISE Act is relatively light and, therefore, shouldn't require tech companies to stop operating their products in New York. Given that New York has the third-largest GDP in the U.S., pulling out of the state is not something most companies would take lightly. "I don't want to underestimate the political pettiness that might happen, but I am very confident that there is no economic reasons for them to not make their models available in New York," said Assemblymember Bores.

Barbie maker Mattel teams with OpenAI to change future of play

Yahoo

3 hours ago



Mattel (MAT) is teaming up with OpenAI to integrate the latter's artificial intelligence tools into its toys. The Toy Book editor in chief and The Toy Insider senior editor James Zahn speaks with Asking for a Trend's Julie Hyman about the progress of AI use in the toy industry, the opportunity this partnership presents for Mattel, and regulators' concerns around children's exposure to AI. To watch more expert insights and analysis on the latest market action, check out more Asking for a Trend here. Mattel, the maker of Barbie dolls and Hot Wheels cars is signing a deal with OpenAI to use its artificial intelligence tools to design its toys. For more on what that means for the industry, we're bringing in James Zahn, editor in chief at the Toy Book and senior editor at the Toy Insider. James, good to see you. Um, and when I saw this news immediately I thought of you because I was curious to get your take on what this means and if this is going to be sort of the new normal for the toy industry. Hello, thanks for thinking of me with this. And yeah, AI has been a hot topic within the toy industry now for about three or four years. Mattel started dabbling in it a few years ago for some packaging and things. They were using the Adobe suite of products at the time, but the OpenAI deal certainly raised a few eyebrows yesterday morning. And so, you know, have we seen in your view a successful marriage in any of the toy makers, either using generative AI to develop toys, or the toys sort of successfully incorporating AI elements? So I think you've probably seen some stuff that has used AI in the creative process and it just hasn't been out there as an advertised sort of thing. As far as a successful marriage on the consumer front, uh, in 2023 or so we saw a first wave, if you will, of AI enabled toys and those did not do very well. But what I think's happened now is that the technology has advanced so quickly and it's still, it's advancing as we're having this conversation. 
It continues to move. I think that the deal with OpenAI really signals a huge leap forward for Mattel because the first major toy maker to sort of crack the code with incorporating AI not just in the development of their products, and when we think about play, not just toys and games, but the experience of play and what that could mean, it's it's all being reinvented. Whoever cracks that first, and maybe it's going to be Mattel here, they're going to be hailed as a pioneer because AI is not going away. It's here to stay. We need to learn to use it. So, you know, I don't know if our our puny human brains can can imagine, but like what are some possibilities of how we could see this play out? Like is it going to mean, you know, toy, the playing and toys are radically reinvented, does it mean new features, what should we be looking for? I have a variety of takes on this. Uh, first, think about your core toys. Can Barbie talk to He-Man? Can Skeletor go out there and have an adventure with Polly Pocket and converse with them, not only themselves, but with the child, and can they do that in a safe manner? Can you have a Magic 8-Ball and instead of those few stock answers, can it do something that's more personalized? Uh, can you take a game like Uno and incorporate an AI player into it? There's a lot of potential and I also think too beyond the typical toys, games and experiences. There's also the live entertainment element like the LBE, the location-based experience. Can that somehow merge with the physical play experiences? And I think that's what we're going to see going forward because there's a lot of technology that the consumers don't really know about yet and it's moving so quickly. So I I have a really strong feeling that OpenAI and Mattel have something up their sleeves that they're not telling us yet. Oh, interesting. I don't know if it's Skeletor talking to Polly Pocket, but I eagerly await that day. 
Um, you sort of touched on this and that is the safety element, of course, right? Where you know, a lot of different states as well as on the federal level, they're trying to figure out how to regulate all of this, but it's really sensitive when it comes to kids, right? So, um, you know, there's obviously also potential safety issues if you've got these toys interacting with kids and you've got artificial intelligence. So, you know, I'm sure there are concerns in the industry and there's awareness and sensitivity to that issue. Absolutely. There are a ton of concerns and a bunch of regulations and then of course there are new regulations that are going to have to be made because AI is sort of, as we love to say, the wild west of the moment. And a few years ago, and if you back it up, maybe even a decade ago when the first Bluetooth enabled toys and Wi-Fi enabled toys came on board, uh, there were concerns there. There were some security holes, some folks got into some trouble because of that and they had to go back to the drawing board. I expect AI is going to have that too, because some of the toys we played with in 2023, when they were really just testing the market and these weren't major toy makers. They were sort of the mid-tier, testing the waters. There were a few that seemed very GPT driven and that sort of opened up some, uh, icky possibilities, I guess you could say. But because we're talking about such a reputable company that has an 80-year legacy, which is Mattel, I really doubt that they're going to put their name on something that's not well thought out, and some of the folks that are involved with this, you know, have a really strong track record. Right. Right. And a strong motivation to not screw it up. James, it's great to see you, have a good weekend. Absolutely. You too. Thank you.
