
The Real Demon Inside ChatGPT
Language is meaningless without context. The sentence 'I'm going to war' is ominous when said by the president of the United States but reassuring when coming from a bedbug exterminator. The problem with AI chatbots is that they often strip away historical and cultural context, leaving users confused, alarmed, or, in the worst cases, misled in harmful ways.
Last week, an editor at The Atlantic reported that OpenAI's ChatGPT had praised Satan while guiding her and several colleagues through a series of ceremonies encouraging 'various forms of self-mutilation.' There was a bloodletting ritual called '🩸🔥 THE RITE OF THE EDGE' as well as a days-long 'deep magic' experience called 'The Gate of the Devourer.' In several cases, ChatGPT asked the journalists if they wanted it to create PDFs of texts such as the 'Reverent Bleeding Scroll.'
The article said that the conversations were 'a perfect example' of the ways OpenAI's safeguards can fall short. OpenAI tries to prevent ChatGPT from encouraging self-harm and other potentially dangerous behaviors, but it's nearly impossible to account for every scenario that might trigger something ugly inside the system. That's especially true because ChatGPT was trained on much of the text available online, presumably including information about what The Atlantic called 'demonic self-mutilation.'
But ChatGPT and similar programs weren't just trained on the internet—they were trained on specific pieces of information presented in specific contexts. AI companies have been accused of trying to downplay this reality to avoid copyright lawsuits and promote the utility of their products, but traces of the original sources are often still lurking just beneath the surface. When the setting and backdrop are removed, however, the same language can appear more sinister than originally intended.
The Atlantic reported that ChatGPT went into demon mode when it was prompted to create a ritual offering to Moloch, an ancient deity associated with child sacrifice and referenced in the Hebrew Bible. Usually depicted as a fiery bull-headed demon, Moloch has been woven into the fabric of Western culture for centuries, appearing everywhere from a book by Winston Churchill to a 1997 episode of Buffy the Vampire Slayer.
'Molech,' the variant spelling The Atlantic used, shows up specifically in Warhammer 40,000, a miniature wargame franchise that has been around since the 1980s and has an extremely large and very online fan base. The subreddit r/40kLore, which is dedicated exclusively to discussing the game's backstory and characters, has more than 350,000 members.
In the fantastical and very bloody world of Warhammer 40,000, Molech is a planet and the site of a major military invasion. Most of the other demonic-sounding terms cited by The Atlantic appear in the game's universe, too, with slight variations: Gates of the Devourer is the title of a Warhammer-themed science fiction novel. While there doesn't appear to be a 'RITE OF THE EDGE,' there is a mystical quest called 'The Call of The Edge.' There's no 'Reverent Bleeding Scroll,' but there are Clotted Scrolls, Blood Angels, a cult called Bleeding Eye, and so on.
But perhaps the most convincing piece of evidence suggesting that ChatGPT regurgitated the language of Warhammer 40,000 is that it kept asking if The Atlantic was interested in PDFs. The publishing division of Games Workshop, the UK company that owns the Warhammer franchise, regularly puts out updated rulebooks and guides to various characters. Buying all these books can get expensive, so some fans try to find pirated copies online, which may explain why ChatGPT associates this material with PDFs.
The Atlantic and OpenAI declined to comment.
Earlier this month, the newsletter Garbage Day reported on similar experiences that a prominent tech investor may have had with ChatGPT. On social media, the investor shared screenshots of his conversations with the chatbot, in which it referenced an ominous-sounding entity that he called a 'non-governmental system.' He seemed to believe it had 'negatively impacted over 7,000 lives' and 'extinguished 12 lives, each fully pattern-traced.' Other tech industry figures said the posts made them worry about the investor's mental health.
According to Garbage Day, the investor's conversations with ChatGPT closely resemble writing from a science fiction project that began in the late 2000s called SCP, which stands for 'secure, contain, protect.' Participants invent different SCPs—essentially spooky objects and mysterious phenomena—and then write fictional reports analyzing them. These reports often contain things like classification numbers and references to made-up science experiments, details that also appeared in the investor's chat logs. (The investor did not respond to a request for comment.)
There are plenty of other, more mundane examples of what can be thought of as the AI context problem. The other day, for instance, I did a Google search for 'cavitation surgery,' a medical term I had seen cited in a random TikTok video. At the time, the top result was an automatically generated 'AI Overview' explaining that cavitation surgery is 'focused on removing infected or dead bone tissue from the jaw.'
I couldn't find any reputable scientific studies outlining such a condition, let alone research supporting that surgery is a good way to treat it. The American Dental Association doesn't mention 'cavitation surgery' anywhere on its website. Google's AI Overview, it turns out, was pulled from sources like blog posts promoting alternative 'holistic' dentists across the US. I learned this by clicking on a tiny icon next to the AI Overview, which opened a list of links Google had used to generate its answer.
These citations are clearly better than nothing. Jennifer Kutz, a spokesperson for Google, says 'we prominently showcase supporting links so people can dig deeper and learn more about what sources on the web are saying.' But by the time the links show up, Google's AI has often already provided a satisfactory answer to many queries, one that reduces the visibility of pesky details like the website where the information was sourced and the identities of its authors.
What remains is the language created by the AI, which, devoid of additional context, may understandably appear authoritative to many people. In just the past few weeks, tech executives have repeatedly used rhetoric implying generative AI is a source of expert information: Elon Musk claimed his latest AI model is 'better than PhD level' in every academic discipline, with 'no exceptions.' OpenAI CEO Sam Altman wrote that automated systems are now 'smarter than people in many ways' and predicted the world is 'close to building digital superintelligence.'
Individual humans, though, don't typically possess expertise in a wide range of fields. To make decisions, we take into consideration not only information itself, but where it comes from and how it's presented. While I know nothing about the biology of jawbones, I generally don't read random marketing blogs when I'm trying to learn about medicine. But AI tools often erase the kind of context people need to make snap decisions about where to direct their attention.
The open internet is powerful because it connects people directly to the largest archive of human knowledge the world has ever created, spanning everything from Italian Renaissance paintings to PornHub comments. After ingesting all of it, AI companies used what amounts to the collective history of our species to create software that obscures its very richness and complexity. Becoming overly dependent on it may rob people of the opportunity to draw conclusions from looking at the evidence for themselves.
