Crisis PR Will Be a Lifeline When AI Supercharges Misinformation


Newsweek · May 23, 2025

I recently saw an AI-generated photo of Chris Rock at the 94th Academy Awards squeezing a bottle of baby oil on Jada Pinkett Smith's head. Will Smith was also in the frame, graciously holding the phone to take the shot.
I was mortified, not only by how real the photo looked, but by what it means for our increasingly bleak future when it comes to separating truth from fiction.
Grok, xAI's chatbot, recently began replying to unrelated queries with debunked white nationalist conspiracy theories. xAI blamed a "rogue employee" for unauthorized prompt changes and pledged new safeguards, though the incident feels like a troubling sign of a dark future.
An illustration photograph shows Grok, DeepSeek and ChatGPT apps displayed on a phone screen.
JUSTIN TALLIS/AFP via Getty Images
Artificial intelligence is evolving faster than our ability to understand its consequences. As it advances, the spread of misinformation will accelerate at an unprecedented pace, posing serious risks to public trust, safety, and reputation. And while they are not a comprehensive solution, public relations professionals can help serve as a bulwark.
Crisis PR—the field focused on managing reputational fallout—is about to become a frontline defense in a war most people have not yet seen coming.
While the internet has enhanced our lives in many ways and brought people together across the globe, it has also given platforms the power to divide us. Unlimited and instant access to information has been a huge leap forward, but AI-generated content threatens to flood our feeds with misinformation. Increasingly, we are entering an era where people, companies, and institutions of all types will need help not just telling their stories, but separating truth from fiction in defense of their reputations.
A single false image, fabricated recording, or scandal invented and amplified by bots will be able to dismantle a person's livelihood and future in a matter of minutes. That's the dark side of AI-enabled misinformation: the power to ruin someone based on nothing more than a computer-generated narrative or image.
We saw this play out already when a fake AI-generated image of an explosion near the Pentagon went viral in May 2023. It was picked up by several high-profile accounts on social media and briefly caused a dip in the stock market, demonstrating how even a single false image can produce real-world financial and societal consequences.
In another instance, in 2022, a deepfake video showing Ukrainian President Volodymyr Zelensky supposedly telling his troops to surrender circulated widely online before being debunked. Though it was quickly identified as fake, the damage was already done. Russian propaganda outlets amplified the clip, and it briefly shook confidence in the Ukrainian defense. It was a clear example of how fast a lie can travel and how synthetic media can produce real-world ramifications.
Crisis PR professionals are the ones who can step in with speed, clarity, and strategy when the digital mob shows up with pitchforks, or when lies, generated in seconds, spiral out of control. PR firms today are increasingly being called upon to defend clients against AI-enhanced disinformation. This includes false narratives fueled by deepfakes, synthetic media, and cloned voices and videos that can fabricate offensive behavior or even criminal acts.
My firm once helped a venue owner who was the target of an AI-driven smear campaign. Bad actors had cloned his voice to make fake 911 calls about emergencies at rival locations. Edited clips circulated online, and a journalist reported the hoax as fact, seriously harming the client's business and reputation.
In response, we deployed a fact-first strategy: collecting official records, leveraging digital forensics to disprove the audio, and launching a focused communications campaign to correct the record. Legal counsel worked alongside us to remove the false claims from press coverage. The key was speed and precision: meeting an AI-powered attack with a coordinated, rapid-fire, evidence-based defense.
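The column does not detail the forensic tooling, but one common first pass is to compare the spectral fingerprint of a questioned clip against verified recordings of the same speaker. The following is a minimal sketch of that idea in Python, using the open-source librosa library; the file names are hypothetical, and real forensic attribution relies on far more rigorous methods and trained examiners.

```python
# Hypothetical first-pass screening: compare the MFCC profile of a
# questioned clip against verified recordings of the same speaker.
# File names are illustrative, not from the actual case.
import numpy as np
import librosa

def mfcc_profile(path: str, sr: int = 16000) -> np.ndarray:
    """Mean MFCC vector summarizing a recording's spectral character."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def screen_clip(questioned: str, references: list[str]) -> float:
    """Distance between the questioned clip and the speaker's references.
    Larger values suggest the clip deserves closer forensic review."""
    ref_profiles = np.stack([mfcc_profile(p) for p in references])
    ref_mean = ref_profiles.mean(axis=0)
    return float(np.linalg.norm(mfcc_profile(questioned) - ref_mean))

distance = screen_clip("questioned_911_call.wav",
                       ["verified_interview_1.wav", "verified_interview_2.wav"])
print(f"spectral distance: {distance:.2f} (higher = more anomalous)")
```

A screen like this can only flag clips for closer review; it cannot by itself prove a recording is synthetic.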
Real people like this business owner, lacking the resources or resolve to defend their reputations, run the risk of being crushed under the weight of lies they cannot outrun. Companies stand to lose decades of credibility in the moment it takes for fake content to go viral.
We are rapidly approaching a future where a personal or organizational reputation is no longer something you must earn, but something that can easily be stolen. But there is a path forward.
It starts with recognizing that AI reputational threats are real and then assembling teams and systems that are equipped to respond instantly. Crisis communicators will be valuable resources to companies, provided they can dissect narratives, act decisively, and defend the truth with some of the same tools used to attack it.
The future belongs to AI, but that does not have to herald the end of fact. Now, more than ever in our history, there will be a burning need for people who fight for truth and decency. I am confident that crisis PR practitioners will rise to the moment. In the age of AI, reputation protection and crisis communication will no longer be optional, but essential.
Evan Nierman is the founder and CEO of Red Banyan, a global crisis PR firm, and author of Amazon bestsellers The Cancel Culture Curse and Crisis Averted.
The views expressed in this article are the writer's own.



Related Articles

Ukraine Just Demonstrated What AGI War Could Look Like

Yahoo · 20 minutes ago

[Photo caption: Engineers test drones at a drone manufacturing facility in Odessa, Ukraine, on June 1, 2025. Ukrainian forces are producing new drones and electronic warfare systems; the facility is currently testing advanced models, including self-propelled land-based drone systems. Credit: Maksim Voytenko—Anadolu/Getty Images]

Bombers in flames on social media. Photos of trick shipping containers packed with drones. Defiant statements from both sides about the scale of the damage. On June 1, Ukraine targeted several Russian air bases using first-person view (FPV) drones, cheap aerial vehicles which are remotely operated by pilots using camera feeds. According to reports, Ukraine used machine-learning algorithms to guide the drones to the target area. The attack, dubbed 'Spider's Web', demonstrated the current hardware capabilities of modern warfare. And as companies and governments race to develop Artificial General Intelligence (AGI)—in which advanced artificial intelligence systems can quickly understand, learn, and apply knowledge—the operation also provides a glimpse into what the future of warfare could look like.

The Security Service of Ukraine's (SBU) operation knocked out targets up to 8,000 kilometers (nearly 5,000 miles) from the frontlines. As the dust settles, analysts are starting to wonder whether anywhere is truly beyond the reach of FPV drones. Some reports suggest dozens of strategic bombers (some said to be capable of delivering nuclear weapons) were destroyed or disabled by 117 FPV drones, though Moscow countered that only a handful of planes were struck. Western assessments put the figure at no lower than 10.

But the scale of the attack, while impressive, isn't its most remarkable aspect. After all, the operation follows a Russian drone attack at the end of May involving almost 500 unmanned aerial vehicles. Ukraine's attack may have been smaller, but it more than made up for it in logistical brilliance. First, the parts were smuggled into the country and the drones were assembled. Vasyl Maliuk, the head of the SBU, told the BBC that they were then loaded onto lorries with secret compartments and driven by unsuspecting Russian couriers to locations near air bases. When the shipments reached their destination, roofs on the lorries retracted to reveal the hidden hardware. And the drones took off.

Spider's Web depended on three distinct but related capabilities: logistics to deliver the drones, deception to keep them hidden, and coordination to pilot dozens of them concurrently. Yes, the attack confirms that expendable drones are the weapon of the 21st century. But Ukraine's strike serves as a visceral example of how AGI will work as a warfighting tool—and how humans will work alongside AGI.

Make no mistake, AGI-fueled warcraft is coming. Over the past two years, the AI industry has increasingly invested in military applications of AI and gravitated towards 'security' as one of its organizing principles. Frontier labs are embedding themselves into the national security state. For instance, in June 2024 OpenAI appointed retired U.S. Army general Paul Nakasone to its Board of Directors. In December 2024, the AI giant announced it had partnered with defense technology outfit Anduril to develop drone defense systems. And Google, my former employer, scoped out 'national security imperatives for the AI era' earlier this year. The technology sector's allusions to national security and AI have a certain shape-shifting quality to them.
It's not always clear whether someone is referring to defensive or offensive AI capabilities, or whether it is even possible to neatly separate the former from the latter. In the context of armed conflict, things get even muddier. The idea that a sufficiently capable AGI system might eventually pilot drones is already on the minds of military planners, but Ukraine's strike on Russia gives us a much more specific picture of what to expect.

Spider's Web had been in the making for eighteen months. During this time, 150 small attack drones and 300 explosive devices were smuggled into Russia to stage the attack. Rather than one large shipment, the SBU likely engaged in piecemeal smuggling to avoid detection, possibly bringing components across borders, using front companies, or bribing officials to pass through checkpoints. The fog of war is thick. We may never know for certain, but we do know that the final drones were packed into special mobile containers that looked inconspicuous from the outside.

According to reports, the drivers of the lorries all told a similar story. A businessman approached them to pick up what seemed to be wooden cabins and deliver them to various locations around Russia. They agreed and thought little of it. Once the trucks were in position, the strike was launched. At the predetermined moment, each container's roof panels were remotely opened to release a swarm of drones (likely piloted remotely by piggybacking on Russian telecommunications networks).

Spider's Web offers a window into how AGI could supercharge similar attacks in the future. AGI could analyse transportation routes to find the safest, fastest, and least conspicuous way to move cargo. It could plan truck routes that avoid busy checkpoints, choose transit times when border guards are understaffed, and even account for satellite overpasses or drone surveillance (a toy sketch of this planning step appears after this passage). Such a system could coordinate multimodal logistics (think planes, trains and automobiles) with timing that no human team could match. Not to mention it could crunch traffic patterns, rail schedules, and weather data to find the perfect moment for an attack.

This hypothetical warfighting AGI could automatically generate corporate entities complete with registration documents, tax records, and websites to serve as cover. It could forge driver's licenses, passports, and employee IDs that pass automated verification—much faster than humans today could. Aside from paperwork, an AGI could manage a whole suite of deception technologies. For example, AGI could emit fake GPS signals to confuse satellite tracking or hack into a facility's CCTV feed to loop old footage while operatives move equipment.

When it's time to strike, AGI could guide each drone to its target as part of a single unified swarm, optimised to prevent collisions and spaced to maximize coverage. AGI may even make it possible to monitor the electronic warfare environment and switch frequencies if it senses jamming on the current channel. If an air defense system starts tracking the swarm, the AGI might command all drones to disperse or drop to terrain-hugging altitude to increase their odds of survival. As soon as the destination is in range, AGI could help drones autonomously recognise target types and aim for the most damaging impact points (say, by guiding a drone to the exact location of an aircraft's fuel tank).
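The article stays at the level of capability claims, but the route-planning step it imagines reduces to a familiar optimization: shortest path over a graph whose edge weights blend travel time with detection risk. Here is a minimal sketch under that assumption; the toy road network, risk scores, and blend factor are all invented for illustration.

```python
# Toy risk-aware route planner: Dijkstra over edges weighted by a blend
# of travel time and checkpoint/detection risk. Graph values are invented.
import heapq

def plan_route(graph, start, goal, risk_weight=0.7):
    """graph maps node -> list of (neighbor, hours, risk in [0, 1])."""
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nbr, hours, risk in graph.get(node, []):
            # Blend speed against conspicuousness: a higher risk_weight
            # trades longer travel for quieter roads. The factor of 10
            # puts the [0, 1] risk score on the same scale as hours.
            step = (1 - risk_weight) * hours + risk_weight * risk * 10
            new_cost = cost + step
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(frontier, (new_cost, nbr, path + [nbr]))
    return float("inf"), []

roads = {
    "depot":      [("checkpoint", 2.0, 0.9), ("back_road", 5.0, 0.1)],
    "checkpoint": [("airbase", 1.0, 0.8)],
    "back_road":  [("airbase", 3.0, 0.2)],
}
cost, route = plan_route(roads, "depot", "airbase")
print(route, round(cost, 2))  # prefers the slower but quieter back road
```

This is textbook Dijkstra, nothing exotic; the point is that the planning layer described above is an optimization problem machines already handle well. What AGI would add is running it continuously, across transport modes, against live surveillance data.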
To be sure, these are still predictions about what AGI may be capable of in the future. And there will likely be limitations. Precision hand-work like soldering detonators, balancing rotors, and packing warheads remains hard to automate at scale without a bespoke factory line. Robots can do it, but you still need humans for the initial set-up. Plus, explosives sweat, lithium-ion packs puff, and cheap FPV airframes warp if left in non-climate-controlled depots. Periodic maintenance like changing desiccant packs or swapping bloated cells would likely remain vital. A swarm of AGI-powered drones would probably still need caretakers who can move around without drawing attention. Finally, jamming-resistant links need spectrum licences, custom SIM provisioning, or pirate base-stations smuggled in-country. Deploying that communications infrastructure (like antennae or repeaters) requires boots on the ground.

But even with a heavy dose of scepticism, I find it hard to see the Ukrainian strike as anything other than a postcard from the future. Problems might look insurmountable to us, but you should never bet against the machine conjuring up an unorthodox solution. I fear that the best-case scenario ahead of us is one where attacks such as these can simply be delivered slightly faster. The worst-case scenario is one in which a Spider's Web-style operation can be conducted orders of magnitude faster by just a handful of people.

Thinking about the implications of AGI is useful in that it reminds us that power flows to whoever can orchestrate complexity faster than the adversary can comprehend it. Complexity is the strategic currency of war in the information age, and AGI is a complexity accelerator. If AGI finds its way into the wrong hands, it could become much easier to pull off a deadly attack. That is as true for the great powers as it is for rogue actors. This is the new strategic reality, and every military has to plan for it. What Ukraine's Spider's Web strike taught us is that the hardware for an AGI warfighter is ready. All that remains is the software.

'America's AI Arsenal Just Went Live': AI Insider Briefs Public on Secret Supercomputer Built by Musk

Yahoo · 27 minutes ago

BALTIMORE, June 05, 2025 (GLOBE NEWSWIRE) -- A newly surfaced report from bestselling author and tech insider James Altucher outlines the existence of a massive U.S.-based artificial intelligence weapon — one that could redefine America's global standing in the AI arms race. According to Altucher, the project — code-named Project Colossus — is being built by Elon Musk's company xAI, in coordination with recent policy changes made by the Trump administration. Housed in a low-profile facility in Memphis, Tennessee, Altucher says this machine is already operational — and growing more powerful by the day.

'The Fastest Supercomputer on the Planet'

The briefing claims the facility is equipped with 200,000 cutting-edge AI chips, making it the most powerful computing center in the Western Hemisphere. 'It contains not just one or two… but 200,000 units of Nvidia's all-powerful AI chips… making it the most advanced AI facility known to man.' 'The fastest supercomputer on the planet.' — Jensen Huang, Nvidia CEO. Altucher notes that Musk plans to expand this further in the coming weeks, with rumors of additional hardware that could multiply its power tenfold.

Trump Cleared the Runway

The report links the timing of Project Colossus to a major political shift. On Day 1 of his second term, Donald Trump reversed Biden-era restrictions on AI development. 'In one of his FIRST acts as President… Donald Trump overturned Executive Order #14110.' Altucher claims this decision allowed developers like Musk to operate 'without red tape or delay' — accelerating America's path toward dominance in the next generation of AI systems.

Altucher: This Is 'Artificial Superintelligence'

Altucher describes this moment not as another software release — but as a seismic shift in how technology operates. 'This second wave of ARTIFICIAL SUPERINTELLIGENCE… Will rival all of the great innovations of the past. Electricity… the wheel… even the discovery of fire.' His report urges Americans to understand what's unfolding — not just in Silicon Valley, but in unmarked warehouses like the one now powering Project Colossus.

About James Altucher

James Altucher is a computer scientist, entrepreneur, and author who has worked on AI projects for over 40 years. A former IBM consultant and Wall Street technologist, he now focuses on breaking down emerging tech developments for a general audience. His latest briefing examines how Artificial Superintelligence is reshaping U.S. strategy and infrastructure.

Media Contact: Derek Warren, Public Relations Manager, Paradigm Press Group. Email: dwarren@

Big AI isn't just lobbying Washington—it's joining it

Yahoo · 35 minutes ago

Welcome to Eye on AI! In this edition…OpenAI releases report outlining efforts to block malicious use of its tools…Amazon continues its AI data center push in the South, with plans to spend $10 billion in North Carolina…Reddit sues Anthropic, accusing it of stealing data.

After spending a few days in Washington, D.C. this week, it's clear that 'Big AI'—my shorthand for companies including Google, OpenAI, Meta, Anthropic, and xAI that are building and deploying the most powerful AI models—isn't just present in the nation's capital. It's being welcomed with open arms. Government agencies are eager to deploy their models, integrate their tools, and form public-private partnerships that will ultimately shape policy, national security, and global strategy inside the Beltway. And frontier AI companies, which also serve millions of consumer and business customers, are ready and willing to do business with the U.S. government. For example, just today Anthropic announced a new set of AI models tailored for U.S. national security customers, while Meta recently revealed that it's making its Llama models available to defense partners.

This week, former Google CEO Eric Schmidt was a big part of bringing Silicon Valley and Washington together. I attended an AI Expo that served up his worldview, which sees artificial intelligence, business, geopolitics, and national defense as interconnected forces reshaping America's global strategy (which will be chock-full of drones and robots if he gets his way). I also dressed up for a gala event hosted by the Washington AI Network, with sponsors including OpenAI, Meta, Microsoft, and Amazon, as well as a keynote speech from U.S. Commerce Secretary Howard Lutnick.

Both events felt like a parallel AI universe to this D.C. outsider: In this universe, discussions about AI are less about increasing productivity or displacing jobs, and more about technological supremacy and national survival. Winning the AI 'race' against China is front and center. Public-private partnerships are not just desirable—they're essential to help the U.S. maintain an edge in AI, cyber, and intelligence systems. I heard no references to Elon Musk and DOGE's 'move fast and break things' mode of implementing AI tools into the IRS or the Veterans Administration. There were no discussions about AI models and copyright concerns. No one was hand-wringing about Anthropic's new model blackmailing its way out of being shut down.

Instead, at the AI Expo, senior leaders from the U.S. military talked about how the recent Ukrainian drone attacks on Russian air bases are prime examples of how rapidly AI is changing the battlefield. Federal procurement experts discussed how to accelerate the Pentagon's notoriously slow acquisition process to keep pace with commercial AI advances. OpenAI touted its o3 reasoning model, now deployed on a secure government supercomputer at Los Alamos National Laboratory.

At the gala, Lutnick made the stakes explicit: 'We must win the AI race, the quantum race—these are not things that are open for discussion.' To that end, he added, the Trump administration is focused on building another terawatt of power to support the massive AI data centers sprouting up across the country. 'We are very, very, very bullish on AI,' he said. The audience—packed with D.C.-based policymakers and lobbyists from Big AI—applauded. Washington may not be a tech town, but if this week was any indication, Silicon Valley and the nation's capital are learning to speak the same language.

Still, the growing convergence of Silicon Valley and Washington makes many observers uneasy—especially given that it's been just seven years since thousands of Google employees protested the company's involvement in a Pentagon AI project, ultimately forcing it to back out. At the time, Google even pledged not to use its AI for weapons or surveillance systems that violated 'internationally accepted norms.'

On Tuesday, the AI Now Institute, a research and advocacy nonprofit that studies the social implications of AI, released a report that accused AI companies of 'pushing out shiny objects to detract from the business reality while they desperately try to derisk their portfolios through government subsidies and steady public-sector (often carceral or military) contracts.' The organization says the public needs 'to reckon with the ways in which today's AI isn't just being used by us, it's being used on us.'

But the parallel AI universe I witnessed—where Big AI and the D.C. establishment are fusing interests—is already realigning power and policy. The biggest question now is whether they're doing so safely, transparently, and in the public interest—or simply in their own. The race is on. With that, here's the rest of the AI news.

Sharon

This story was originally featured on
