The Future Of AI Is Specialization


Forbes · a day ago

With 16+ years in cybersecurity, Édouard Viot, CTO of Symbiotic Security, is a hacker at heart and an innovator in AppSec, WAFs and EDR.
The rapid evolution of AI has led to an important realization: the infrastructure, training costs and ongoing reinforcement learning required to maintain a generalist AI model are astronomical, impractical and unsustainable.
In my opinion, the future belongs instead to hyperspecialized AI models that are tailored to excel in hyper-specific domains.
Fundamentally, using a large language model (LLM) for a hyper-specialized task is like using a sledgehammer to crack a nut: it's not the most efficient tool for the job. So instead of relying on large, resource-intensive models for every task, the industry is shifting toward domain-specific AI agents.
For example, AI specializing in code security would outperform a general-purpose model like ChatGPT when it comes to detecting and remediating vulnerabilities. In fact, we ran an internal study on this topic that you can find here.
Agentic AI substantially extends these capabilities. Agentic AI is engineered to function independently, making decisions, executing actions and adjusting dynamically to evolving conditions with minimal human oversight. Take, for example, an agent specialized not just in code security but in specific families of vulnerabilities, such as XSS, SQL injection and buffer overflows. In these cases, the AI can adapt to the type of vulnerability it has detected and route the user to the proper, hyper-focused resources for remediation and/or training.
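To make the routing idea concrete, here is a minimal Python sketch of family-based dispatch. The Finding type, the agent functions and their advice are illustrative placeholders, not a real product API.

```python
# Hypothetical dispatch of a detected finding to a vulnerability-specific agent.
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    snippet: str
    family: str  # e.g. "xss", "sqli", "buffer_overflow"

def remediate_xss(f: Finding) -> str:
    return f"Apply context-aware output encoding in {f.file}."

def remediate_sqli(f: Finding) -> str:
    return f"Replace string-built queries in {f.file} with parameterized statements."

def remediate_overflow(f: Finding) -> str:
    return f"Add bounds checks around the buffer access in {f.file}."

# Each vulnerability family maps to its own hyper-specialized agent.
AGENTS = {
    "xss": remediate_xss,
    "sqli": remediate_sqli,
    "buffer_overflow": remediate_overflow,
}

def route(finding: Finding) -> str:
    agent = AGENTS.get(finding.family)
    if agent is None:
        raise ValueError(f"No specialized agent for family: {finding.family}")
    return agent(finding)
```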
The agentic approach can also be used to chain AI models. Using a slightly different example, say the user is working with Terraform code. Within the workspace, one agent can remediate vulnerabilities in the Terraform code and then hand off to another agent that checks the syntax to make sure everything is correct. This chaining provides better results but also increases latency.
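A rough sketch of that chain, assuming a placeholder remediation step and using Terraform's real `terraform validate` command for the syntax-check stage (a real setup would also run `terraform init` in the workspace first):

```python
# Chain two agents: a placeholder remediation step, then a syntax check.
import subprocess
from pathlib import Path

def fix_terraform(source: str) -> str:
    """Stand-in for the remediation agent; a real one would rewrite insecure blocks."""
    return source

def validate(workdir: Path) -> bool:
    """Second agent in the chain: let Terraform itself check the syntax."""
    result = subprocess.run(
        ["terraform", "validate"], cwd=workdir, capture_output=True, text=True
    )
    return result.returncode == 0

def pipeline(tf_file: Path) -> bool:
    tf_file.write_text(fix_terraform(tf_file.read_text()))
    return validate(tf_file.parent)  # the extra hop buys correctness, costs latency
```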
All of this raises a fundamental question: Do we really need general-purpose AI models that know everything? The answer is increasingly clear—no, we don't. What we need is AI that is exceptional at a specific task, delivering high performance with lower compute costs. The advantages extend beyond efficiency: hyperspecialized AI reduces latency, improves accuracy and even lowers environmental impact due to reduced resource consumption.
Hyperspecialized models can have an outsized impact in areas that call for both accuracy and flexibility. Looking again at cybersecurity, different AI techniques can work together to make the whole process faster and more efficient.
For instance, machine learning models, trained on large datasets of known threats and safe software, are great at classification. They can quickly spot anomalies, categorize vulnerabilities and reduce false alarms during automated scans. This is a huge win for security teams, who can then focus on higher-level strategy and incident response rather than sifting through endless alerts.
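As a toy illustration of that triage step, a small text classifier can learn to separate confirmed findings from false alarms. The training data below is invented and far too small for real use; it only shows the shape of the approach.

```python
# Illustrative triage model: suppress likely false positives before escalation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

findings = [
    "sql injection in login handler",    # confirmed issue
    "hardcoded credential in config",    # confirmed issue
    "eval on user input in template",    # confirmed issue
    "unused variable flagged as taint",  # false alarm
    "test fixture with fake api key",    # false alarm
]
labels = [1, 1, 1, 0, 0]  # 1 = real issue, 0 = false positive

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(findings, labels)

# Only findings the model scores as likely real reach a human or an LLM.
new = ["string concatenation in sql query", "mock password in unit test"]
for text, p in zip(new, model.predict_proba(new)[:, 1]):
    print(f"{p:.2f}  {text}")
```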
Meanwhile, LLMs shine when it comes to code-related tasks, in that they can generate specific fixes across a range of programming languages. This means developers don't have to be experts in every single language; they can rely on an LLM to create targeted solutions that fit the situation at hand.
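In practice this often amounts to a prompt template wrapped around the vulnerable snippet. The sketch below assumes a generic `call_llm()` placeholder rather than any particular model client.

```python
# Language-agnostic fix generation; call_llm() is a placeholder, not a real API.
def call_llm(prompt: str) -> str:
    """Stand-in for whatever model client is in use (HTTP call, SDK, etc.)."""
    raise NotImplementedError

def generate_fix(language: str, vulnerability: str, snippet: str) -> str:
    prompt = (
        f"You are a {language} security expert.\n"
        f"Vulnerability: {vulnerability}\n"
        f"Code:\n{snippet}\n"
        "Return only the corrected code, preserving behavior."
    )
    return call_llm(prompt)
```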
Bringing these two approaches together—machine learning for classification and LLMs for code generation—creates an effective combination that addresses both identification and remediation of security issues. Not only does this save time and resources, but it also bolsters an organization's overall security posture by delivering quick, precise results.
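Wiring the two hypothetical pieces above together, the end-to-end flow might look like the sketch below: the classifier gates which findings ever reach the LLM, so the expensive model only runs where it is likely to matter.

```python
# Hypothetical end-to-end flow: ML triage first, LLM remediation second.
def triage_and_fix(findings, model, threshold=0.8):
    fixes = []
    for f in findings:  # assumes Finding-like objects with a language attribute
        p_real = model.predict_proba([f.snippet])[0, 1]
        if p_real < threshold:
            continue  # likely false positive; skip the costly LLM call
        fixes.append(generate_fix(f.language, f.family, f.snippet))
    return fixes
```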
The productivity gains from AI-driven automation are undeniable. In software development, AI can function like an outsourced team, accelerating coding efforts and reducing development timelines. However, this speed comes with a trade-off: without proper oversight, AI-generated code can, and does, introduce security vulnerabilities, leading to increased risk.
In fact, a recent Stanford study showed, among other things, that participants "who had access to an AI assistant wrote significantly less secure code than those without access to an assistant." It also found that participants with access to an AI assistant were more likely to believe they wrote secure code, suggesting that such tools may lead users to be overconfident about security flaws in their code.
Rather than replacing developers, AI is transforming their role. Developers will shift from being pure coders to acting as AI controllers and overseers, ensuring that AI-generated output meets security and quality standards. This evolution places a greater emphasis on critical thinking and judgment, elevating the role of developers within organizations.
As AI models become more widely available, the competitive edge will shift towards data quality and specialization. Large, general-purpose models require immense investment, but hyperspecialization allows smaller players to compete effectively. This disrupts the traditional AI hierarchy, potentially enabling new innovators to challenge the dominance of tech giants.
AI is increasingly learning from human interactions, an approach known as reinforcement learning from human feedback. Using the case of code security again, if a developer modifies AI-suggested remediation code before accepting it, the AI can learn from that adjustment and refine its future recommendations. This continuous feedback loop allows AI to evolve based on real-world usage, improving accuracy and effectiveness over time.
It's important to note, however, that for an AI to be truly self-improving, the capabilities of the human interacting with it need to be taken into account, and the reinforcement signal should be weighted accordingly. If the developer modifies the suggested remediation code without understanding the root problem, and the changes are therefore ill-advised or wrong, learning from that interaction would be detrimental to the AI.
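One way to encode that caveat is to weight each interaction by an estimate of the editor's reliability before it enters the training set. Everything below (the Interaction fields, the trust score) is an illustrative assumption, not a description of any shipping system.

```python
# Capability-weighted feedback: learn more from edits made by trusted reviewers.
from dataclasses import dataclass

@dataclass
class Interaction:
    suggested: str       # fix the AI proposed
    accepted: str        # fix the developer actually committed
    editor_trust: float  # 0..1, e.g. estimated from past review accuracy

def to_training_examples(interactions, min_trust=0.5):
    examples = []
    for it in interactions:
        if it.editor_trust < min_trust:
            continue  # don't learn from edits we can't vouch for
        # the trust score becomes the sample weight for the update
        examples.append((it.suggested, it.accepted, it.editor_trust))
    return examples
```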
As AI advances, hyperspecialization will become the dominant strategy for enterprises seeking cost-effective, high-performance solutions. The era of trying to build a single AI that does everything is giving way to a more practical approach: deploying multiple smaller, task-specific AIs that are more efficient, precise and ultimately more beneficial for organizations and society.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.


