28-07-2025
Is AI Safe For Software Development?
Arthur Hicken is chief evangelist at Parasoft, a global leader in automated software testing solutions.
It should come as no surprise that the common response to concerns about AI's safety for software development is: 'It depends.' AI is exceptional at many things, like handling common coding tasks. But when AI is forced to consider context or deal with incomplete instructions, you risk compromising software safety and security.
Software bugs are caused more often by insufficient requirements than by poor code. Add AI to the mix and you multiply the opportunities to compromise code quality, reliability, security and safety. Sure, AI makes it easy to automate software creation and testing, provided you start with clearly understood instructions.
This same problem extends to the prompts given to the system to generate code or fixes. If the prompt is ambiguous, incomplete or otherwise incorrect, AI is unlikely to produce the correct code. Unfortunately, a lot of software is developed under a move-fast-and-break-things philosophy, working from inexact specifications and evolving rules.
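As a hypothetical illustration (the prompt, field names and 30-day threshold below are invented for this sketch), consider asking an assistant to "deactivate inactive users" without ever defining inactive. The generated Python can be perfectly valid and still wrong for the business:

    # Prompt: "Write a function that deactivates inactive users."
    # The assistant has to guess what "inactive" means; here it guesses
    # 30 days, while the unstated requirement was 90 days without a login.
    from datetime import datetime, timedelta

    def deactivate_inactive_users(users):
        cutoff = datetime.now() - timedelta(days=30)  # assumption, never specified
        for user in users:
            if user.last_login < cutoff:
                user.active = False
        return users

The code runs, passes a casual review and quietly encodes a requirement nobody wrote down.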
Put AI on an imprecise path and the opportunity to veer off course grows significantly. Ambiguity is the enemy of software development, so it is reasonable to expect uncertainty to be one of the biggest obstacles to ensuring AI-produced software is safe for deployment.
AI In The Mirror
The phrase 'garbage in, garbage out' describes the results of working with vague specifications. AI exacerbates this issue because it has not yet learned how to recognize code quality.
Today, AI is a mirror of ourselves—not as individuals, but as a whole, because it looks at the sum of software code that humans have written and reflects it back to us. Since few objective measures exist for determining whether code is good or bad, AI simply looks for code that will solve the requirement it is given without taking quality into account.
While matching coding patterns can yield code that does what the developer wants, more learning is required to help AI recognize good coding practices. Major attributes must be considered, such as quality, security, performance and maintainability. Until then, early AI interactions are likely to recommend code that is not completely up to snuff. And that is not AI's fault; it simply echoes what it finds without being able to apply context to the results.
The good news is that, over time, AI will become increasingly adept at discerning between good and bad code. But like newbies on your development team, it needs to be mentored first. Beginner programmers are not given unfettered access until they earn trust by proving their code is reliable, safe, maintainable, editable and scalable. Why should AI-generated code be different?
The Capricious Prodigy
Think of AI as a capricious prodigy who has the potential to be brilliant but occasionally does something idiotic without warning. Most days, this wunderkind quickly produces helpful code suggestions. While exceedingly clever, however, AI-driven solutions can also project a false sense of security, appearing to know what they're doing all the time even when that is not true.
Unfortunately, new whiz kids become easily distracted and careless unless given explicit directions. AI needs around-the-clock oversight from experienced programmers to improve accuracy. Skilled experts are ideally suited to discover early opportunities for AI to become a force multiplier. Armed with knowledge and experience, senior-level developers are poised to accomplish greater feats of creative problem solving through measured applications of AI.
I believe that when AI receives detailed guidance from development experts, the results can lift the entire team's productivity. Conversely, providing AI tools to everyone regardless of skill level is more likely to cause a precipitous drop in productivity along with a spike in software errors.
Not everyone agrees with this thinking, especially developers of AI coding assistants and Meta CEO Mark Zuckerberg, who told Joe Rogan earlier this year that AI will soon replace mid-level engineers.
While that is a great aspirational goal, I see it as unrealistic. What is most needed right now are vital protective barriers that safeguard development teams and give the new generation of capricious prodigies the best shot at success.
It's All About The Guardrails
Guardrails along highways keep our cars from veering off the road or into oncoming traffic. AI guardrails for software development perform similar functions by protecting teams from inadvertently introducing software bugs or security risks. That is why it is critical to share rules, best practices and automated controls that guide AI to move decisively within safe and efficient boundaries.
I cannot overstress the importance of peer review to ensure your capricious AI prodigy operates within specific parameters, encompassing everything from basic naming conventions to acceptable behaviors. Remember, constant coaching and immediate feedback play equally pivotal roles in training AI and in mentoring the newest member of your development team.
Establishing effective guardrails is not about relying on a single tool, but about creating a strategic ecosystem of safeguards that reinforce one another. For example, static code analysis provides an extra measure of protection by examining code structure, syntax and patterns to find potential bugs, vulnerabilities and other violations—but this must be complemented by other foundational practices to ensure reliability and maintainability at every stage of the development lifecycle.
Approaches like automated static code analysis can function like a pair programmer or 'buddy system'—flagging bugs, vulnerabilities or deviations from coding standards in real time. When complemented by other approaches such as integrated code coverage or unit testing, these safeguards can help provide critical visibility across the development lifecycle.
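To make that concrete, here is a minimal sketch (not the output of any particular tool; the function and query are invented for illustration) of the kind of pattern these guardrails catch: an AI-suggested lookup that concatenates user input into SQL, which a static analyzer would flag as an injection risk, next to the parameterized version it would steer the developer toward:

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # Flagged by static analysis: user input concatenated into a SQL string
        query = "SELECT * FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # The guardrail-approved pattern: a parameterized query
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (username,)
        ).fetchall()

A unit test that passes a name containing a single quote makes the difference visible immediately, which is exactly the kind of real-time feedback the buddy system is meant to provide.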
Reaping the benefits of AI to boost quality in software development and testing requires experience and mature internal processes. Quality must be viewed as integral to a complete ecosystem of safeguards to ensure seamless collaboration between software developers, testers and quality assurance teams. Regular check-ins, ongoing training, open communication and well-documented processes are essential to keeping pace with rapid AI advancements.
To increase AI's effectiveness in software development, apply both the right mindset and rigorous testing methodologies to all code, whether generated by humans or AI. In doing so, we give AI the best opportunity to fulfill its potential to transform software development, safely and securely.