
Will AI Replace Cybersecurity? Not Quite—But It's Rewriting The Rules
Put yourself in the mind of a master cybercriminal. Fun, right? Stay with me. I promise this role-play exercise will pay off.
Just a few years ago, your illicit schemes were small-time. You were content to steal unwitting individuals' personal data, including credit card and Social Security numbers, to buy merchandise on the Dark Web.
Your petty exploits paid off—modestly but steadily. But it was a volume game. You had to keep finding more unsuspecting marks to exploit and more creative ways to turn a profit.
Then came AI.
You quickly learned all about ChatGPT and other forms of generative artificial intelligence. As you did, you thought: why not use this tech to level up—crime-wise?
Armed with new tools, you evolved from a petty thief. You built a budding criminal empire capable of exploiting not just individual victims, but enterprise corporations with billion-dollar balance sheets. After all, that's where the real dough is, right?
Welcome to the new age of cyberwarfare—courtesy of AI.
'The stakes have never been higher,' says Ed Vasko, COO of High Wire Networks, a leading global provider of managed cybersecurity services. A seasoned veteran with 33 years' experience, Vasko sat down with me to discuss the elephant in the room. 'The cyber war has shifted. It's no longer waged between hackers and IT departments. It's now AI versus AI.'
Vasko is not alone in this assessment.
Speaking at DefenseScoop's Google Defense Forum last year, Jude Sunderbruch of the Defense Department's Cyber Crime Center warned attendees of future 'AI versus AI conflicts' spreading all the way to the international stage. 'I think we're really just at the start,' Sunderbruch said, later adding that the U.S. and its allies will have to get creative and learn how to best use existing AI systems to gain a leg up on competing intelligence giants like China, according to Defense One.
The implications go beyond boardrooms and command centers. To appreciate what may be coming, Vasko paints a chilling scenario for the not-so-distant future. Time for more role-playing. Now imagine you're the head of a major retailer. For the last few years you've relied on AI supply chain forecasting. Similar to how fintech increasingly depends on AI, not humans, to handle the complexities of trading, artificial intelligence is core to your business's operations.
Without it, you're flying blind.
This reality makes things all the more disturbing when the AI supporting your organization begins behaving erratically, wreaking havoc. Orders stop arriving. Inventory goes out of stock. Even your pricing models collapse.
These internal problems don't stay contained for long. They proliferate externally, in dire ways. Once-loyal customers defect. Revenues dry up. And your stock price plummets. But that's not all…
According to Vasko, cyber criminals behind this type of villainy may be thought of as AI buccaneers—digital pirates often paid to instigate corporate espionage and theft. 'Unlike the antiquated variety from centuries past, AI buccaneers know the power of perception—that it's possible to tank a rival company's stock price by spreading lies to disrupt public sentiment.'
To this end, the hits keep coming for your retail company. A video surfaces of your CFO making disturbing comments in a shareholder meeting. The remarks are so inflammatory they go viral, driving your already plummeting stock even lower.
But here's the thing: that CFO video is actually a deepfake. It was produced with Google's Veo 3, similar to the bogus news-anchor content already proliferating across the internet.
True or not, the damage is done.
Within hours, your unscrupulous corporate rivals bask in ignoble victory. Short-selling your plunging stock, they make out like bandits, along with their AI buccaneer accomplices, all benefiting from your demise.
As Vasko explains, 'Cyber criminals can even now use AI co-pilots to coordinate attacks on corporations, industries, even governments. They're faster, smarter, and more dangerous than anything we've seen before.'
More on that below.
AI co-pilots are but one part of a growing criminal toolkit, capable of automating surveillance, coordinating attacks, and orchestrating malfeasance at scale. What's now possible almost defies belief. Example: AI can analyze thousands of profiles across social media, company directories, and public databases to identify weak links for exploitation. 'Once inside, these same AI tools can poison an organization's internal data lakes—sabotaging predictive systems and decision-making engines from the inside out,' explains Vasko.
This means bad actors needn't limit themselves to stealing personal data. They can expand their scope of attack, going so far as to manipulate market outcomes. Per the above cautionary tale, they can influence how a company performs, how it's perceived, and ultimately, what happens to it long-term.
Hundreds of years ago, back when pirates plundered by sea, countries ravaged by buccaneers authorized so-called privateers to fight off the bad guys. Unfortunately, the U.S. government doesn't allow proactive 'hacking back' in the form of AI privateers. Not even in self-defense.
This is why High Wire Networks and other cybersecurity firms avoid going on the offensive. Instead, they turn to AI-augmented defense platforms to preemptively detect and intercept threats, turning reactive security into a proactive shield. In other words, they're fighting AI with AI.
'Hyperautomation' is the term Vasko uses to describe the fusion of machine learning and automated decision-making throughout a security stack. In the old model, a cyber victim might be notified of a data breach. 'Dear so and so,' an email might read. 'We regret to inform you that your credit card was compromised.'
That's not so helpful, is it?
Hyperautomation, on the other hand, acts proactively. Here's how it could work in the individual scenario above. Alerted to a breach, AI could stop it, issue a new credit card, and continually monitor the victim's accounts and exposure.
Many cybersecurity firms employ similar tech at the organizational level. As Cyber Magazine reports, Varonis leverages artificial intelligence to act autonomously as a counterweight to AI-enabled mischief. The company's 'AI Shield' offers real-time ongoing protection for large organizations. 'By integrating real-time risk analysis, automated risk remediation, behavior-based threat detection, and 24/7 alert response, Varonis' AI Shield empowers enterprises to safely use AI technologies while safeguarding sensitive data.'
Looking forward, the more things change, the more they stay the same. Once upon a time, swashbuckling pirates patrolled the high seas, pilfering valuables from individuals, companies, and governments alike. Nowadays, that threat has migrated from the physical theater to cyberspace as criminals wield code to rob and steal at will. To survive, much less thrive, tomorrow's organizations would do well to wrest back the power of AI.
Without it? We're surely sunk.