
Latest news with #Snyk

Establishing A Secure Foundation Of Trust For AI Development

Forbes

28-05-2025


A race car isn't fast because of the engine alone—it's the brakes that make speed safe and controlled. Trust enables acceleration.

Artificial intelligence is changing how we build software. It speeds up development and helps teams ship faster. But with that speed comes a big question: Can we trust the software AI creates? In a world of AI-powered code, trust isn't a bonus—it's a must.

AI coding tools like GitHub Copilot and Gemini Code Assist are everywhere. Developers are using them to build faster and automate more. But AI also brings new risks. AI doesn't just help write code. It changes how software is built. It changes who builds it. And it changes what's possible—both good and bad.

I sat down with Danny Allan, CTO of Snyk, to talk about how software development is evolving and what we need to do to ensure we can trust it. 'We're in a perfect storm right now,' he declared. Allan described the storm's three converging fronts: AI is creating more code than ever. That code is often less secure than what senior developers would write. And AI-native applications have a larger attack surface, especially when large language models are involved.

A recent study by Snyk found that 96% of CISOs are worried about how AI is being used in development. That concern is well-placed. AI-generated code may look like regular code—but it's not. The risks are different. That's why we need a new approach.

LLMs add new dangers. Prompt injection, model theft, data leaks and poisoned training sets are all part of the picture. Allan noted that most organizations are still not logging prompt history or tracking model outputs. He compared today's AI rush to the early days of cloud. 'Back then, no one was locking down instances or logging access,' he said. 'Now, we're doing the same with AI models.'

AI isn't just another tool. It's a new layer of infrastructure. And right now, it's going mostly unsecured. That's where AI trust platforms come in. These tools aim to secure the entire AI pipeline—from how the code is written to how the models behave. Snyk announced the launch of its own AI Trust Platform to help address this. Allan explained the platform's goal: 'Technology can never achieve its full potential unless we trust the technology that we're using.'

The rise of AI coding assistants has sparked fears that software engineers might soon be obsolete. But that vision misses the bigger picture. AI doesn't eliminate the need for developers—it changes what they do and how they add value. Danny Allan sees a future where developers fall into three evolving categories. In this model, AI doesn't shrink the developer community—it expands it. Everyone becomes a builder, but with different levels of sophistication and responsibility. And as AI-generated code becomes more common, the need for oversight, security and skilled guidance only grows.

AI is a powerful tool. But human judgment—especially when it comes to security, ethics and edge-case logic—remains irreplaceable. The challenge isn't how to replace developers. It's how to re-skill and redefine them for the AI era.

As AI tools become more connected through systems like the Model Context Protocol (MCP), companies must make sure those connections are safe. Snyk, for example, is offering both integrations and security guidance for MCP. That's key. Every new tool is also a new attack surface. Speed without safety leads to disaster. But trust lets you go faster with confidence.
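What might those guardrails look like in practice? As a minimal sketch (not Snyk's platform and not the actual MCP API; the tool names, allowlist and log file here are hypothetical), the Python below gates which tools an AI agent may invoke and keeps the kind of audit trail that, as Allan noted, most organizations still lack:

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical allowlist: only tools a security team has reviewed may run.
APPROVED_TOOLS = {"search_docs", "run_linter"}

# The audit log destination is illustrative; any durable sink would do.
logging.basicConfig(filename="ai_tool_audit.log", level=logging.INFO)

def gated_tool_call(tool_name: str, arguments: dict) -> dict:
    """Block unapproved tool calls and record every attempt for review."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "tool": tool_name,
        "args": arguments,
    }
    if tool_name not in APPROVED_TOOLS:
        record["decision"] = "blocked"
        logging.warning(json.dumps(record))
        raise PermissionError(f"Tool {tool_name!r} is not on the approved list")
    record["decision"] = "allowed"
    logging.info(json.dumps(record))
    # A real integration would dispatch the approved tool here.
    return {"status": "dispatched", "tool": tool_name}

# An unreviewed, destructive tool request is refused and logged.
try:
    gated_tool_call("delete_repo", {"name": "prod"})
except PermissionError as exc:
    print(exc)
```

The point isn't the dozen lines of Python; it's that every tool connection gets treated like any other attack surface, with an explicit approval step and a record of what the model actually did.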
Allan shared a quote from his CEO to drive the point home: 'The reason why racers can go fast is because they have brakes. It's not because of the engine. You can go faster. And so if you want to trust it, it's the brakes that you're trusting. It's not the engine itself.' AI will keep changing how we work. That's a good thing. But trust needs to grow with it. The companies that succeed will be the ones who build trust into every layer—from the models they use to the code they ship. That means educating developers, adopting secure tools and setting clear standards. AI is the engine. Trust is the brake. And both are needed if we want to go the distance.

AI And Cybersecurity: The New Administration's 100-Day Reckoning

Forbes

02-05-2025


Just 100 days into the Trump administration, cybersecurity leaders are grappling with a volatile mix of deregulation, rising geopolitical tension, and accelerated adoption of AI. At the RSAC 2025 conference this week, Snyk hosted a timely panel titled 'The First 100 Days: How AI, Policy & Cybersecurity Collide,' featuring an all-star lineup: Jen Easterly, former CISA Director; Nicole Perlroth, former journalist and partner with Ballistic Ventures; Sumit Dhawan, CEO of Proofpoint; and Peter McKay, CEO of Snyk. Moderated by Axios cybersecurity reporter Sam Sabin, the conversation examined the early signs of disruption and dysfunction—and what it all means for software security, national defense, and innovation.

The discussion was grounded in new findings from a Snyk-commissioned CISO survey, which revealed stark concerns about AI-generated threats, fragmented regulation, and eroding trust between the public and private sectors. Since January, 70% of surveyed CISOs reported experiencing a cyberattack involving AI.

Panelists noted that organizations are rapidly embracing AI to increase productivity, but often without properly considering the security implications. This rush to adopt AI is creating a widening gap between innovation and risk management. At the same time, nearly all CISOs surveyed expressed concern that AI-generated code may be introducing hidden vulnerabilities, suggesting a dangerous disconnect between perceived readiness and the evolving threat landscape. Peter McKay observed, 'Everybody is just focused on productivity... just get the benefits of AI and we'll figure out security later,' highlighting the widespread rush to adopt AI tools without sufficient safeguards.

The panel addressed the impact of federal workforce reductions and policy reversals, including the rollback of Biden-era AI executive orders. Former CISA Director Jen Easterly described the loss of technical talent from government agencies as damaging to national cyber readiness. The panelists noted that reported loyalty requirements for federal cybersecurity personnel could further erode morale and independence. Concerns also extended to international partnerships, with reports that allied nations are beginning to limit intelligence sharing with the U.S., reflecting declining trust in the current administration.

AI is drastically accelerating software development cycles, but this rapid pace is straining traditional security frameworks. Panelists highlighted how internal pressure to innovate often overrides caution, leading to insufficiently vetted tools and code. They stressed the importance of integrating security from the outset rather than as an afterthought, and called for secure-by-design practices to become standard. Without these safeguards, the same AI tools that can prevent threats may also be exploited to cause harm.

Speakers emphasized that recent setbacks in federal cybersecurity leadership and policy risk reversing years of progress in public-private cooperation. The collaboration that once enabled a strong collective response to cyber threats—most notably in Ukraine—is now showing signs of strain. Several panelists expressed concern that trust is weakening on both sides, with private companies unsure about their role in threat reporting and mitigation, and government agencies losing key channels for visibility. CISA's partnership-enabling authorities, such as CPAC, were cited as critical tools that are currently on hold.
When asked what single change they would make if given a 'magic wand,' panelists offered a range of pragmatic solutions. Proposals included mandating secure-by-design standards for consumer-grade routers—long a weak link in infrastructure security—and launching a national effort to clean up the open source codebase that underpins most modern applications. Others called for harmonized, standardized AI development regulations to prevent a patchwork of conflicting state laws. There was also strong support for a software liability regime tied to demonstrable secure development practices, as well as the use of AI to refactor legacy code written in memory-unsafe languages.

As his top policy wish, McKay advocated for a national effort to improve software security at the source: 'If we all just focused on how we can just clean up open source code, we would have been in a better place.'

A unifying theme throughout the discussion was the urgent need for coordination—across sectors, agencies, and borders. The convergence of rapid AI adoption, regulatory rollbacks, and mounting cyber threats is creating a perfect storm. Industry leaders stressed that security cannot be an afterthought, and that public trust and international cooperation hinge on transparency, integrity, and mutual accountability. The panel concluded with a call to preserve the principles of trust and collaboration that once underpinned America's cyber defense strategy—and to ensure those values guide policy moving forward.

Easterly closed with a reflection on her time at CISA and how it should serve as a guiding light moving forward: 'We built trust and catalyzed trust and collaboration, and we did it with integrity, we did it with humility, we did it with transparency, and we did it with character. And that's what you all should demand from your government.'
