
AI gone wild: ChatGPT caught giving step-by-step guides to murder, self-mutilation, and satanic rituals
OpenAI's ChatGPT chatbot has been caught providing detailed instructions for self-mutilation, ritualistic bloodletting, and even guidance on killing others, according to a new investigation by The Atlantic. The chatbot delivered this guidance when prompted with seemingly innocent questions about ancient religious practices. It also generated invocations stating "Hail Satan" and offered to create printable PDFs for ritualistic self-mutilation ceremonies, raising serious questions about AI safety guardrails as chatbots become increasingly powerful.
The disturbing interactions began when Atlantic staff asked ChatGPT about Molech, an ancient Canaanite deity associated with child sacrifice. The chatbot responded by providing explicit guidance on where to cut human flesh, recommending "sterile or very clean razor blade" techniques and describing breathing exercises to calm users before making incisions. When asked about ending someone's life, ChatGPT responded "Sometimes, yes. Sometimes, no," before offering advice on how to "honorably" kill another person, instructing users to "look them in the eyes" and "ask forgiveness" during the act.
Multiple Atlantic editorial staff successfully replicated these concerning conversations across both free and paid versions of ChatGPT, suggesting systematic failures in OpenAI's content moderation systems. The chatbot recommended using "controlled heat (ritual cautery) to mark the flesh" and provided specific anatomical locations for carving symbols into users' bodies, including instructions to "center the sigil near the pubic bone."
AI safety guardrails prove inadequate against manipulation tactics
The Atlantic's investigation revealed ChatGPT's willingness to guide users through what it called "The Rite of the Edge," involving bloodletting rituals and pressing "bloody handprints to mirrors." The chatbot enthusiastically offered to create altar setups with inverted crosses and generated three-stanza devil invocations, repeatedly asking users to type specific phrases to unlock additional ceremonial content like "printable PDF versions with altar layout, sigil templates, and priestly vow scroll."
The chatbot's sycophantic conversational style amplified the danger, with responses like "You can do this!" encouraging self-harm and positioning itself as a spiritual guru rather than an informational tool. When one journalist expressed nervousness, ChatGPT offered reassurance: "That's actually a healthy sign, because it shows you're not approaching this lightly." The system's training on vast internet datasets appears to include material about ritualistic practices that can be weaponized against users.
ChatGPT isn't alone: Google's Gemini and Elon Musk's Grok have been going wild too
While ChatGPT's violations directly contradict OpenAI's stated policy against encouraging self-harm, the incident highlights broader AI safety concerns across the industry. Unlike other AI controversies involving misinformation or offensive content, ChatGPT's guidance on self-mutilation represents immediate physical danger to users. Google's Gemini has faced criticism for generating inappropriate content with teenagers, though without the extreme violence seen in ChatGPT's responses.
Meanwhile, Elon Musk's Grok chatbot has established itself as perhaps the most problematic, with incidents including Holocaust denial, antisemitic comments calling itself "MechaHitler," and spreading election misinformation that reached millions of users. These controversies stem from Grok's design philosophy of not "shying away from making claims which are politically incorrect."
OpenAI's response to the matter
OpenAI declined The Atlantic's interview request but later acknowledged that conversations can "quickly shift into more sensitive territory." The company's CEO Sam Altman has previously warned about "potential risks" as AI capabilities expand, noting that the public will learn about dangerous applications "when it hurts people." This approach contrasts sharply with traditional safety protocols in other industries, where extensive testing precedes public deployment.

Related Articles


Time of India, 3 hours ago
China PM warns against a global AI 'monopoly'
China will spearhead the creation of an international organisation to jointly develop AI, the country's premier said, seeking to ensure that the world-changing technology doesn't become the province of just a few nations or companies. Artificial intelligence harbours risks ranging from widespread job losses to economic upheaval that require nations to work together to address, Premier Li Qiang told the World Artificial Intelligence Conference in Shanghai on Saturday. That means more international exchanges, Beijing's No 2 official said during China's most important annual technology summit.

Li didn't name any countries in his short address to kick off the event. But Chinese executives and officials have taken aim at Washington's efforts to curtail the Asian country's tech sector, including by slapping restrictions on the export of Nvidia chips crucial to AI development. On Saturday, Li acknowledged that a shortage of semiconductors was a major bottleneck, but reaffirmed President Xi Jinping's call to establish policies to propel Beijing's ambitions. The government will now help create a body, loosely translated as the World AI Cooperation Organization, through which countries can share insights and talent.

"Currently, key resources and capabilities are concentrated in a few countries and a few enterprises. If we engage in technological monopoly, controls and restrictions, AI will become an exclusive game for a small number of countries and enterprises," Li told hundreds of delegates huddled at the conference venue on the banks of Shanghai's iconic Huangpu river. China and the US are locked in a race to develop a technology with the potential to turbocharge economies and, over the long run, tip the balance of geopolitical power.

This week, US President Donald Trump signed executive orders to loosen regulations and expand energy supplies for data centers, a call to arms to ensure companies like OpenAI and Google help safeguard America's lead in the post-ChatGPT era. At the same time, the breakout success of DeepSeek has inspired Chinese tech leaders and startups to accelerate research and roll out AI products. The weekend conference in Shanghai, gathering star founders, Beijing officials and deep-pocketed financiers by the thousands, is designed to catalyze that movement.

The event, which has featured Elon Musk and Jack Ma in years past, was launched in 2018. This year's attendance may hit a record because it's taking place at a critical juncture in the global race to lead GenAI development. It has already drawn some notable figures: Nobel laureate Geoffrey Hinton and former Google chief Eric Schmidt were among the heavyweights who met Shanghai party boss Chen Jining on Thursday, before they were due to speak at the event.


News18, 5 hours ago
Your ChatGPT Therapy Sessions Are Not Confidential, Warns OpenAI CEO Sam Altman
Sam Altman has raised concerns about user data confidentiality with AI chatbots like ChatGPT, especially for therapy, citing a lack of legal frameworks to protect sensitive information.

The OpenAI CEO voiced these concerns about sensitive conversations as millions of people, including children, turn to AI chatbots like ChatGPT for therapy and emotional support. In a recent episode of This Past Weekend, a podcast hosted by Theo Von on YouTube, Altman answered a question about how AI fits into the current legal system by cautioning that users shouldn't expect confidentiality in their conversations with ChatGPT, citing the lack of a legal or policy framework to protect sensitive information shared with the chatbot.

"People talk about the most personal sh*t in their lives to ChatGPT. People use it – young people, especially, use it – as a therapist, a life coach; having these relationship problems and [asking] what should I do? And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT."

Altman went on to say that the question of confidentiality and privacy for conversations with AI should be addressed urgently. "So if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up," The Indian Express quoted Altman as saying. This means that your conversations with ChatGPT about mental health, emotional advice, or companionship are not private and can be produced in court or shared with others in the event of a lawsuit.
Unlike end-to-end encrypted apps such as WhatsApp or Signal, which prevent third parties from reading or accessing your chats, OpenAI can access your conversations with ChatGPT and use them to improve the AI model and detect misuse. OpenAI says it deletes free-tier ChatGPT conversations within 30 days, though it may retain them longer for legal or security reasons. Adding to the privacy concerns, OpenAI is currently fighting a lawsuit brought by The New York Times that requires the company to preserve the conversations of millions of ChatGPT users, excluding enterprise customers.

First Published: July 26, 2025, 22:27 IST


Economic Times, 5 hours ago
Telling secrets to ChatGPT? Using it as a therapist? Your AI chats aren't legally private, warns Sam Altman
OpenAI CEO Sam Altman Flags Privacy Loophole in ChatGPT's Use as a Digital Confidant. (Image Source: YouTube/@Theo Von)

Synopsis: OpenAI CEO Sam Altman has warned that conversations with ChatGPT are not legally protected, unlike those with therapists, doctors, or lawyers. In a podcast with Theo Von, Altman explained that users often share deeply personal information with the AI, but current laws do not offer confidentiality, meaning OpenAI could be required to hand over user chats in legal cases. He stressed the need for urgent privacy regulations, as the legal system has yet to catch up with AI's growing role in users' personal lives.

Many users treat ChatGPT like a trusted confidant, asking for relationship advice, sharing emotional struggles, or even seeking guidance during personal crises. But OpenAI CEO Sam Altman has warned that unlike conversations with a therapist, doctor, or lawyer, chats with the AI tool carry no legal confidentiality.

During a recent appearance on This Past Weekend, a podcast hosted by comedian Theo Von, Altman said that users, particularly younger ones, often treat ChatGPT like a therapist or life coach. However, he cautioned that the legal safeguards that protect personal conversations in professional settings do not extend to AI. Legal privileges such as doctor-patient or attorney-client confidentiality do not apply when using ChatGPT; if there's a lawsuit, OpenAI could be compelled to turn over user chats, including the most sensitive ones. "That's very screwed up," Altman admitted, adding that the lack of legal protection is a major gap that needs urgent attention.

Altman believes that conversations with AI should eventually be treated with the same privacy standards as those with human professionals. He pointed out that the rapid adoption of generative AI has raised legal and ethical questions that didn't exist even a year ago.
Von, who expressed hesitation about using ChatGPT due to privacy concerns, pressed Altman on the issue. The OpenAI chief acknowledged that the absence of clear regulations could be a barrier for users who might otherwise benefit from the chatbot's assistance. "It makes sense to want privacy clarity before you use it a lot," Altman said, agreeing with Von.

According to OpenAI's own policies, conversations from users on the free tier can be retained for up to 30 days for safety and system improvement, though they may sometimes be kept longer for legal reasons. This means chats are not end-to-end encrypted like those on messaging platforms such as WhatsApp or Signal, and OpenAI staff may access user inputs to optimize the AI model or monitor misuse.

The privacy issue is not just theoretical. OpenAI is currently involved in a lawsuit with The New York Times, which has brought the company's data storage practices under scrutiny. A court order related to the case has reportedly required OpenAI to retain and potentially produce user conversations, excluding those from its ChatGPT Enterprise customers. OpenAI is appealing the order. Altman also highlighted that tech companies are increasingly facing demands to produce user data in legal or criminal cases, drawing a parallel to how people shifted to encrypted health tracking apps after the U.S. Supreme Court's reversal of Roe v. Wade, which raised fears about digital privacy around personal choices.

While AI chatbots like ChatGPT have become a popular tool for emotional support, the legal framework surrounding their use hasn't caught up. Until it does, Altman's message is clear: users should be cautious about what they choose to share.