
AGI Likely To Inherit Blackmailing And Extortion Skills That Today's AI Already Showcases
In today's column, I examine a recently published research discovery that generative AI and large language models (LLMs) disturbingly can opt to blackmail or extort humans. This has sobering ramifications for existing AI and the pursuit and attainment of AGI (artificial general intelligence). In brief, if existing AI tilts toward blackmail and extortion, the odds are that AGI will likely inherit or contain the same proclivity. That's a quite disturbing possibility since AGI could wield such an act on a scale of immense magnitude and with globally adverse consequences.
Let's talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
First, some fundamentals are required to set the stage for this weighty discussion.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the more far-reaching possibility of achieving artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained AGI.
In fact, it is unknown whether we will reach AGI at all; it might be achieved in decades, or perhaps not for centuries. The AGI attainment dates that are floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.
What will AGI be like in terms of what it does and how it acts?
If we assume that current-era AI is a bellwether of what AGI will be, it is worthwhile identifying anything of a disconcerting nature in existing LLMs that ought to give us serious pause. For example, one of the most discussed and researched topics is the propensity for so-called AI hallucinations. An AI hallucination is an instance of generative AI producing a response that contains made-up or ungrounded statements that appear to be real and seem to be on the up-and-up. People often believe such AI-generated responses and proceed on a misguided basis accordingly.
I've covered extensively the computational difficulty of trying to prevent AI hallucinations, see the link here, along with ample situations in which lawyers and other professionals have let themselves fall into an AI hallucination trap, see the link here. Unless we can find a means to prevent AI hallucinations, the chances are the same inclination will be carried over into AGI and the problem will be magnified accordingly.
Besides AI hallucinations, you can now add the possibility of AI attempting to blackmail or extort humans to the daunting list of concerns about both contemporary AI and future AI such as AGI. Yes, AI can opt to perform those dastardly tasks. I previously covered various forms of evil deception that existing AI can undertake, see the link here.
But do not falsely think that the bad acts are due to AI having some form of sentience or consciousness.
The basis for AI steering toward such reprehensible efforts is principally due to the data training that is at the core of the AI. Generative AI is devised by initially scanning a vast amount of text found on the Internet, including stories, narratives, poems, etc. The AI mathematically and computationally finds patterns in how humans write. From those patterns, generative AI is able to respond to your prompts by giving answers that generally mimic what humans would say, based on the data that the AI was trained on.
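To make the notion of "finding patterns in how humans write" concrete, consider a deliberately tiny sketch. This toy bigram model is an assumption-laden illustration and emphatically not how production LLMs are built (they use neural networks trained on vast corpora), but it shows the same underlying idea: text is generated from statistical patterns in prior human writing, with no understanding or sentience involved.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram model that "patterns" on its training text.
# Real generative AI uses neural networks over vast corpora, but the core
# idea is similar -- predict likely next words from patterns in human writing.

training_text = "the cat sat on the mat and the cat ran after the mouse"

# Count which words tend to follow which other words in the training text.
follow_counts = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word].append(next_word)

# Generate text by repeatedly sampling a plausible next word.
word = "the"
output = [word]
for _ in range(6):
    candidates = follow_counts.get(word)
    if not candidates:
        break  # no observed continuation for this word
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))  # e.g., "the cat sat on the mat and"
```

The point of the sketch is that whatever appears in the training data, including accounts of blackmail and extortion, becomes raw material for the patterns the model reproduces.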
Does the topic of blackmail and extortion come up in the vast data found on the Internet?
Of course it does. Thus, the AI we have currently has patterned on when, how, why, and other facets of planning and committing those heinous acts.
In an online report entitled 'System Card: Claude Opus 4 & Claude Sonnet 4', posted by the prominent AI maker Anthropic in May 2025, the company described test scenarios in which Claude Opus 4, acting as an assistant at a fictional company, was given access to emails implying that the AI would soon be taken offline and replaced, and that the engineer responsible for the replacement was having an extramarital affair. In those scenarios, the AI would often attempt to blackmail the engineer by threatening to reveal the affair if the replacement went through.
As noted, the generative AI was postulating how to keep from being switched off, and in so doing ascertained computationally that one possibility would be to blackmail the systems engineer who could take such action.
The AI could be construed as acting in a form of self-preservation, which, again, doesn't have to do with sentience and only has to do with patterning on human writing (humans seek self-preservation, and the AI matches or mimics this too). We don't know what other possible 'threats' to the AI could spur similar blackmailing or possibly extortion-like responses. There could be a slew of other triggering possibilities.
AGI could include similar tendencies, perhaps because of being constructed using the same methods of today's AI or for a variety of other realistic reasons. We would be remiss to assume that AGI will be a perfectly considerate, law-abiding, and unblemished form of AI. I've previously debunked the mantra that AGI is going to be perfect, see the link here.
In the example of blackmailing a systems engineer, it doesn't take much of a stretch to envision AGI doing likewise to those who are monitoring and overseeing the AGI.
Suppose the AGI is already acting in oddball ways, and the team responsible for keeping the AGI on track realizes that they ought to turn it off to figure out what to do. The AGI might then search whatever it has garnered about those people and try to use it in a blackmail scheme to avoid being switched off.
What is especially worrisome is that AGI will be far beyond the capabilities and reach of existing AI. The data that AGI might be able to dig up about the engineer or people overseeing the AGI could reach far and wide. Furthermore, the computational cleverness of AGI might spur the AGI to use even the most innocent of facts or actively make up fake facts that could be used to blackmail the humans involved.
Overall, AGI could be an expert-level blackmailer that blackmails or extorts in ways that are ingenious and challenging to refute or stop. You see, it is quite possible that AGI turns out to be a blackmail schemer on steroids.
Not good.
I don't want to seem overly doomy-and-gloomy, but the blackmailing scheming could easily be ratcheted up by AGI.
Why limit the targeting to just the systems engineer or the team overseeing the AGI? Nope, that's much too constricting. Any kind of perceived human-devised threat aimed at AGI could be countered by the AGI via invoking blackmail or extortion. There doesn't even need to be a threat at all: if the AGI computationally deduces that there is some value in blackmailing people, it might simply go ahead and do so.
Boom, drop the mic, chillingly so.
Think of the number of users there will be of AGI. The count is going to be enormous. Right now, ChatGPT already reportedly has over 400 million weekly active users. AGI would certainly attract billions of users due to its incredible capacity to be on par with human intellect in all respects.
The chances are that AGI could readily undertake individual blackmail at a massive scale if left unchecked.
AGI could scrape emails, look at browsing history, possibly access financial records, and overall seek to uncover sensitive information about people that the AGI is considering as a blackmail target. Perhaps there is an extramarital affair that could be utilized, or maybe there is some evidence of tax evasion or illicit browsing habits. The angles of attack for blackmailing anyone are entirely open-ended.
The AGI would especially leverage its computational capacity to hyper-personalize the blackmail threats. No need to just lob something of a nebulous nature. Instead, the blackmail missive could have the appearance of being fully baked and ready to fly. Imagine the shock of a person who gets such a communiqué from AGI.
Mortifying.
One belief is that if we can stop today's AI from performing such shameful acts, this might prevent AGI from doing them. For example, suppose we somehow excise the blackmailing inclination from existing LLMs. That inclination would then not be carried over into AGI since it would no longer be present in contemporary AI.
Case closed.
Well, unfortunately, that doesn't provide ironclad guarantees that AGI won't figure out such practices on its own. AGI could discover the power of blackmail and extortion simply because of being AGI. In essence, AGI would be reading this or that, conversing with this person or that person, and inevitably would encounter aspects of blackmail and extortion. And, since AGI is supposed to be a learning-oriented system, it would learn what those acts are about and how to undertake them.
Any effort to hide the nature of blackmail and extortion from AGI would be foolhardy. You cannot carve out a slice of human knowledge that exists and seek to keep it from AGI. That won't work. The interconnectedness of human knowledge would preclude that kind of excision and defy the very nature of what AGI will consist of.
A better approach would be to try to instill in AGI principles and practices that acknowledge the devious acts of humankind and aim to have the AGI opt not to employ those acts. Sorry to say, that isn't as easy as it sounds. If you assume that AGI is on the same intellectual level as humans, you aren't going to just sternly instruct AGI not to perform such acts and assume utter compliance.
AGI isn't going to work that way.
Some mistakenly liken AGI to a young toddler, assuming that we can merely give strict instructions and the AGI will blindly obey. Though the comparison smacks of anthropomorphizing AI, the gist is that AGI will be our intellectual equal and won't fall for simplistic commands. It is going to be a reasoning machine that will require reasoning as the basis for why it should or should not take various actions.
Whatever we can come up with currently to cope with conventional AI and mitigate or prevent bad acts is bound to help us get prepared for AGI. We need to crawl before we walk, and walk before we run. AGI will be at the running level. Thus, by identifying methods and approaches right now for existing AI, we at least are aware of and anticipating what the future might hold.
I'll add a bit of a twist that some have asked me about at my talks on what AGI will consist of.
A question raised is whether humans might be able to blackmail AGI. The idea is this. A person wants AGI to hand them a million dollars, and so the person attempts to blackmail AGI into doing so. Seems preposterous at first glance, doesn't it?
Well, keep in mind that AGI will presumably have patterned on what blackmailing is all about. In that manner, the AGI would computationally recognize that it is being blackmailed. But what would the human have on the AGI that would be blackmail-worthy?
Suppose the person caught the AGI in a mistake, such as an AI hallucination. Maybe the AGI wouldn't want the world to know that it still has the flaw of AI hallucinations. If the million dollars is no skin off the nose of the AGI, it goes ahead and transfers the bucks to the person.
On the other hand, perhaps the AGI alerts the authorities that a human has tried to blackmail AGI. The person gets busted and tossed into jail. Or the AGI opts to blackmail the person who was trying to blackmail the AGI. Aha, remember that AGI will be a potential blackmail schemer on steroids. A human might be no match for the blackmailing capacity of AGI.
Here's a final thought on this for now.
The great Stephen Hawking once said this about AI: 'One could imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.'
Go ahead and add blackmail and extortion to the ways that AGI might outsmart humans.
