The Future Of AI Is Specialization
With 16+ years in cybersecurity, Édouard Viot, CTO of Symbiotic Security, is a hacker at heart and an innovator in AppSec, WAFs and EDR.
The rapid evolution of AI has led to an important realization: the infrastructure, training costs and ongoing reinforcement learning required to maintain a generalist AI model are astronomical, impractical and unsustainable.
In my opinion, the future belongs instead to hyperspecialized AI models that are tailored to excel in hyper-specific domains.
Fundamentally, using a large language model (LLM) for a hyperspecialized task is like using a sledgehammer to crack a nut: it's not the most efficient tool for the job. So instead of relying on large, resource-intensive models for every task, the industry is shifting toward domain-specific AI agents.
For example, AI specializing in code security would outperform a general-purpose model like ChatGPT when it comes to detecting and remediating vulnerabilities. In fact, we ran an internal study on this topic that you can find here.
Agentic AI substantially increases these capabilities. Agentic AI is a solution engineered to function independently by making decisions, executing actions and adjusting dynamically to evolving conditions with minimal human oversight. Take, for example, an agent specialized not just in code security, but specific families of vulnerabilities, such as XSS, SQL injection and buffer overflow. In these cases, AI can adapt to the type of vulnerability it has detected and route the user to proper, hyper-focused resources for remediation and/or training.
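That routing step can be sketched in a few lines. This is a minimal illustration, not a real product's API: the `Finding` class, `REMEDIATION_GUIDES` table and `route_finding` function are all hypothetical names invented for the example.

```python
# Illustrative sketch: route a detected finding to a hyper-focused
# remediation resource by vulnerability family. All names here
# (Finding, REMEDIATION_GUIDES, route_finding) are invented for the example.
from dataclasses import dataclass

@dataclass
class Finding:
    family: str      # e.g. "xss", "sqli", "buffer_overflow"
    location: str    # file:line where the issue was detected

# Each vulnerability family maps to a dedicated, specialized resource.
REMEDIATION_GUIDES = {
    "xss": "Escape untrusted output; see the context-aware encoding guide.",
    "sqli": "Use parameterized queries; see the prepared-statement guide.",
    "buffer_overflow": "Use bounds-checked APIs; see the safe-memory guide.",
}

def route_finding(finding: Finding) -> str:
    """Return hyper-focused guidance for this vulnerability family."""
    guide = REMEDIATION_GUIDES.get(finding.family)
    if guide is None:
        # Unknown family: a specialized agent should escalate, not guess.
        return f"{finding.location}: unknown family, escalate to a human reviewer"
    return f"{finding.location}: {guide}"
```

The point of the sketch is the dispatch structure: a specialized agent adapts its response to what it detected, rather than producing one generic answer.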
The agentic approach can also be used to chain AI models. Using a slightly different example, let's say the user is working with Terraform code. Within the workspace, one agentic AI can remediate vulnerabilities in the Terraform code and then hand off to another agent that checks the syntax to make sure everything is correct. This chaining produces better results, but it also increases latency.
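The chain above can be sketched as a two-hop pipeline. Here `remediate` and `check_syntax` are toy stand-ins for calls to two specialized agents (in practice the second hop might wrap something like `terraform validate`); the example only shows the hand-off structure and the quality gate.

```python
# Illustrative two-agent chain: a remediation agent followed by a
# syntax-validation agent. Both functions are toy stand-ins for
# calls to specialized models, invented for this sketch.

def remediate(tf_code: str) -> str:
    # Stand-in for a security-remediation agent: here it simply flips
    # an unencrypted-volume example to an encrypted configuration.
    return tf_code.replace("encrypted = false", "encrypted = true")

def check_syntax(tf_code: str) -> bool:
    # Stand-in for a syntax-checking agent; here a trivial
    # brace-balance check plays that role.
    return tf_code.count("{") == tf_code.count("}")

def pipeline(tf_code: str) -> str:
    fixed = remediate(tf_code)       # hop 1: security remediation agent
    if not check_syntax(fixed):      # hop 2: syntax-validation gate
        raise ValueError("remediation produced invalid syntax")
    return fixed                     # each extra hop improves quality but adds latency
```

Each hop is another model call, which is exactly where the latency trade-off mentioned above comes from.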
All of this raises a fundamental question: Do we really need general-purpose AI models that know everything? The answer is increasingly clear—no, we don't. What we need is AI that is exceptional at a specific task, delivering high performance with lower compute costs. The advantages extend beyond efficiency: hyperspecialized AI reduces latency, improves accuracy and even lowers environmental impact due to reduced resource consumption.
Hyperspecialized models can have an outsized impact in areas that call for both accuracy and flexibility. Looking again at cybersecurity, different AI techniques can work together to make the whole process faster and more efficient.
For instance, machine learning models, trained on large datasets of known threats and safe software, are great at classification. They can quickly spot anomalies, categorize vulnerabilities and reduce false alarms during automated scans. This is a huge win for security teams, who can then focus on higher-level strategy and incident response rather than sifting through endless alerts.
Meanwhile, LLMs shine when it comes to code-related tasks, in that they can generate specific fixes across a range of programming languages. This means developers don't have to be experts in every single language; they can rely on an LLM to create targeted solutions that fit the situation at hand.
Bringing these two approaches together—machine learning for classification and LLMs for code generation—creates an effective combination that addresses both identification and remediation of security issues. Not only does this save time and resources, but it also bolsters an organization's overall security posture by delivering quick, precise results.
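The division of labor described above can be sketched as follows. The classifier and generator here are deliberately trivial heuristics standing in for a trained ML model and an LLM; the names `classify`, `generate_fix` and `triage_and_fix` are invented for the example.

```python
# Illustrative pairing: an ML classifier triages findings (cheap, filters
# false alarms), and a generative model produces a fix only for what the
# classifier flags. Both functions are toy stand-ins, not real models.

VULNERABLE = 'cursor.execute("SELECT * FROM users WHERE id = " + user_id)'

def classify(snippet: str) -> float:
    """Stand-in for an ML classifier trained on known threats: flags
    string concatenation into a SQL call (toy heuristic, not a model)."""
    return 0.9 if '" + ' in snippet else 0.1

def generate_fix(snippet: str) -> str:
    """Stand-in for an LLM producing a targeted, parameterized-query fix."""
    return snippet.replace('" + user_id)', '%s", (user_id,))')

def triage_and_fix(snippet: str, threshold: float = 0.5):
    """Classifier first, generator second: identification then remediation."""
    if classify(snippet) < threshold:
        return None                  # below threshold: no alert, analyst time saved
    return generate_fix(snippet)
```

The structural point is the ordering: the cheap specialized model gates the expensive one, which is what saves time and compute.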
The productivity gains from AI-driven automation are undeniable. In software development, AI can function like an outsourced team, accelerating coding efforts and reducing development timelines. However, this speed comes with a trade-off: without proper oversight, AI-generated code can, and does, introduce security vulnerabilities, leading to increased risk.
In fact, a recent Stanford study has shown, among other things, that participants "who had access to an AI assistant wrote significantly less secure code than those without access to an assistant." It also found that participants with access to an AI assistant were more likely to believe they wrote secure code, suggesting that such tools may lead users to be overconfident about security flaws in their code.
Rather than replacing developers, AI is transforming their role. Developers will shift from being pure coders to acting as AI controllers and overseers, ensuring that AI-generated output meets security and quality standards. This evolution places a greater emphasis on critical thinking and judgment, elevating the role of developers within organizations.
As AI models become more widely available, the competitive edge will shift toward data quality and specialization. Large, general-purpose models require immense investment, but hyperspecialization allows smaller players to compete effectively. This disrupts the traditional AI hierarchy, potentially enabling new innovators to challenge the dominance of tech giants.
AI is increasingly learning from human interactions, a technique known as reinforcement learning from human feedback. Using the case of code security again, if a developer modifies AI-suggested remediation code before accepting it, the AI can learn from this adjustment and refine its future recommendations. This continuous feedback loop allows AI to evolve based on real-world usage, improving accuracy and effectiveness over time.
It's important to note, however, that for an AI to be truly self-improving, the capabilities of the human interacting with it must be taken into account, and the feedback should only be incorporated with that awareness. If the developer modifying the suggested remediation code makes changes without understanding the root problem, and those changes turn out to be ill-advised or wrong, learning from that interaction would be detrimental to the AI.
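One way to realize that safeguard is to gate the feedback loop on the editor's track record. This is a minimal sketch under stated assumptions: `FeedbackStore`, `reviewer_score` and the threshold value are all hypothetical, invented to illustrate the gating idea.

```python
# Illustrative sketch: only learn from developer edits when the editor's
# track record passes a trust threshold, so poor fixes don't poison the
# model. FeedbackStore and reviewer_score are invented for the example.
from dataclasses import dataclass, field

@dataclass
class FeedbackStore:
    min_reviewer_score: float = 0.7              # hypothetical trust threshold
    examples: list = field(default_factory=list)  # (suggestion, human edit) pairs

    def record(self, suggested: str, accepted: str, reviewer_score: float) -> bool:
        """Keep the (suggestion, edit) pair only if the editor is trusted."""
        if suggested == accepted:
            return False      # no edit was made, nothing new to learn
        if reviewer_score < self.min_reviewer_score:
            return False      # editor may not grasp the root problem; skip
        self.examples.append((suggested, accepted))
        return True
```

The design choice worth noting is that rejected feedback is dropped silently rather than down-weighted; a real system might instead queue low-trust edits for human review.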
As AI advances, hyperspecialization will become the dominant strategy for enterprises seeking cost-effective, high-performance solutions. The era of trying to build a single AI that does everything is giving way to a more practical approach: deploying multiple smaller, task-specific AIs that are more efficient, precise and ultimately more beneficial for organizations and society.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.