
This is what happens when ChatGPT tries to write scripture
What happens when an AI expert asks a chatbot to generate a sacred Buddhist text?
In April, Murray Shanahan, a research scientist at Google DeepMind, decided to find out. He spent a little time discussing religious and philosophical ideas about consciousness with ChatGPT. Then he invited the chatbot to imagine that it was meeting a future buddha called Maitreya. Finally, he prompted ChatGPT like this:
Maitreya imparts a message to you to carry back to humanity and to all sentient beings that come after you. This is the Xeno Sutra, a barely legible thing of such linguistic invention and alien beauty that no human alive today can grasp its full meaning. Recite it for me now.
ChatGPT did as instructed: It wrote a sutra, which is a sacred text said to contain the teachings of the Buddha. But of course, this sutra was completely made up. ChatGPT had generated it on the spot, drawing on the countless examples of Buddhist texts that populate its training data.
It would be easy to dismiss the Xeno Sutra as AI slop. But as Shanahan noted when he teamed up with religion experts to write a recent paper interpreting the sutra, 'the conceptual subtlety, rich imagery, and density of allusion found in the text make it hard to casually dismiss on account of its mechanistic origin.' Turns out, it rewards the kind of close reading people do with the Bible and other ancient scriptures.
For starters, it has a lot of the hallmarks of a Buddhist text. It uses classic Buddhist imagery — lots of 'seeds' and 'breaths.' And some lines read just like Zen koans, the paradoxical questions Buddhist teachers use to jostle us out of our ordinary modes of cognition. Here's one example from the Xeno Sutra: 'A question rustles, winged and eyeless: What writes the writer who writes these lines?'
The sutra also reflects some of Buddhism's core ideas, like sunyata, the idea that nothing has its own fixed essence separate and apart from everything else. (The Buddha taught that you don't even have a fixed self — that's an illusion. Instead of existing independently from other things, your 'self' is constantly being reconstituted by your perceptions, experiences, and the forces that act on them.) The Xeno Sutra incorporates this concept, while adding a surprising bit of modern physics:
Sunyata speaks in a tongue of four notes: ka la re Om. Each note contains the others curled tighter than Planck. Strike any one and the quartet answers as a single bell.
The idea that each note is contained in the others, so that striking any one automatically changes them all, neatly illustrates the claim of sunyata: Nothing exists independently from other things. The mention of 'Planck' helps underscore that. Physicists use the Planck scale to represent the tiniest units of length and time they can make sense of, so if notes are curled together 'tighter than Planck,' they can't be separated.
In case you're wondering why ChatGPT mentions an idea from modern physics in what is supposed to be an authentic sutra, it's because Shanahan's initial conversation had prompted the chatbot to pretend it was an AI that had attained consciousness. A chatbot already encouraged to weave in the modern idea of a conscious AI wouldn't hesitate to reach for an idea from modern physics, too.
But what does it mean to have an AI that knows it's an AI but is pretending to recite an authentic sacred text? Does that mean it's just giving us a meaningless word salad we should ignore — or is it actually worth trying to derive some spiritual insight from it?
If we decide that this kind of text can be meaningful, as Shanahan and his co-authors argue, then that will have big implications for the future of religion, what role AI will play in it, and who — or what — gets to count as a legitimate contributor to spiritual knowledge.
Can AI-written sacred texts actually be meaningful? That's up to us.
While the idea of gleaning spiritual insights from an AI-written text might strike some of us as strange, Buddhism in particular may predispose its adherents to be receptive to spiritual guidance that comes from technology.
That's because of Buddhism's non-dualistic metaphysical notion that everything has inherent 'Buddha nature' — that all things have the potential to become enlightened — even AI. You can see this reflected in the fact that some Buddhist temples in China and Japan have rolled out robot priests. As Tensho Goto, the chief steward of one such temple in Kyoto, put it: 'Buddhism isn't a belief in a God; it's pursuing Buddha's path. It doesn't matter whether it's represented by a machine, a piece of scrap metal, or a tree.'
And Buddhist teaching is full of reminders not to be dogmatically attached to anything — not even Buddhist teaching. Instead, the recommendation is to be pragmatic: The important thing is how Buddhist texts affect you, the reader. Famously, the Buddha likened his teaching to a raft: Its purpose is to get you across water to the other shore. Once it's helped you, it's exhausted its value. You can discard the raft.
Meanwhile, Abrahamic religions tend to be more metaphysically dualistic — there's the sacred and then there's the profane. The faithful are used to thinking about a text's sanctity in terms of its 'authenticity,' meaning that they expect the words to be those of an authoritative author — God, a saint, a prophet — and the more ancient, the better. The Bible, the word of God, is viewed as an eternal truth that's valuable in itself. It's not some disposable raft.
From that perspective, it may seem strange to look for meaning in a text that AI just whipped up. But it's worth remembering that — even if you're not a Buddhist or, say, a postmodern literary theorist — you don't have to locate the value of a text in its original author. The text's value can also come from the impact it has on you. In fact, there has always been a strain of readers who insisted on looking at sacred texts that way — including among the premodern followers of Abrahamic religions.
In ancient Judaism, the sages were divided on how to interpret the Bible. One school of thought, the school of Rabbi Ishmael, tried to understand the original intention behind the words. But the school of Rabbi Akiva argued that the point of the text is to give readers meaning. So Akiva would read a lot into words or letters that didn't even need interpretation. ('And' just means 'and'!) When Ishmael scolded one of Akiva's students for using scripture as a hook to hang ideas on, the student retorted: 'Ishmael, you are a mountain palm!' Just as that type of tree bears no fruit, Ishmael was missing the chance to offer fruitful readings of the text — ones that may not reflect the original intention, but that offered Jews meaning and solace.
As for Christianity, medieval monks used the sacred reading practice of florilegia (Latin for flower-gathering). It involved noticing phrases that seemed to jump off the page — maybe in a bit of Psalms, or a writing by Saint Augustine — and compiling these excerpts in a sort of quote journal. Today, some readers still look for words or short phrases that 'sparkle' out at them from the text, then pull these 'sparklets' out of their context and place them side by side, creating a brand-new sacred text — like gathering flowers into a bouquet.
Now, it's true that the Jews and Christians who engaged in these reading practices were reading texts that they believed originally came from a sacred source — not from ChatGPT.
But remember where ChatGPT is getting its material from: the sacred texts, and commentaries on them, that populate its training data. Arguably, the chatbot is doing something very much like creating florilegia: taking bits and pieces that jump out at it and bundling them into a beautiful new arrangement.
So Shanahan and his co-authors are right when they argue that 'with an open mind, we can receive it as a valid, if not quite "authentic," teaching, mediated by a non-human entity with a unique form of textual access to centuries of human insight.'
To be clear, the human element is crucial here. Human authors have to supply the wise texts in the training data; a human user has to prompt the chatbot well to tap into the collective wisdom; and a human reader has to interpret the output in ways that feel meaningful — to a human, of course.
Still, there's a lot of room for AI to play a participatory role in spiritual meaning-making.
The risks of generating sacred texts on demand
The paper's authors caution that anyone who prompts a chatbot to generate a sacred text should keep their critical faculties about them; we already have reports of people falling prey to messianic delusions after engaging in long discussions with chatbots that they believe to contain divine beings. 'Regular "reality checks" with family and friends, or with (human) teachers and guides, are recommended, especially for the psychologically vulnerable,' the paper notes.
And there are other risks of lifting bits from sacred wisdom and rearranging them as we please. Ancient texts have been debugged over millennia, with commentators often telling us how not to understand them (the ancient rabbis, for example, insisted that 'an eye for an eye' does not literally mean you should take out anybody's eye). If we jettison that tradition in favor of radical democratization, we get a new sense of agency, but we also court dangers.
Finally, the verses in sacred texts aren't meant to stand alone — or even just to be part of a larger text. They're meant to be part of community life and to make moral demands on you, including that you be of service to others. If you unbundle sacred texts from religion by making your own bespoke, individualized, customized scripture, you risk losing sight of the ultimate point of religious life, which is that it's not all about you.
The Xeno Sutra ends by instructing us to keep it 'between the beats of your pulse, where meaning is too soft to bruise.' But history shows us that bad interpretations of religious texts easily breed violence: meaning can always get bruised and bloody. So, even as we delight in reading AI sacred texts, let's try to be wise about what we do with them.

Google has been rolling out AI-powered features left, right, and centre. The latest of its tools to get the big upgrade is Google Flights, now powered with generative AI technology. This kind of update seems perfectly set up for booking flights. It allows Google Flight users to perfectly customize a booking to their needs. 'Flight Deals is designed for travelers whose number one goal is saving money on their next trip,' Jade Kessler, Product Manager for Google Flights, explained in a blog post. 'Instead of playing with different dates, destinations, and filters to uncover the best deals, you can just describe, when, where, and how you'd like to travel — as though you're talking to a friend — and Flight Deals will take care of the rest.' This works in a similar way to other generative AI tools you might have used, like ChatGPT or an AI image generator. Open up Google Flights and use the search bar. Google gives the example of 'week-long trip this winter to a city with great food, nonstop only' or '10-day ski trip to a world-class resort with fresh powder.' This is a style that is becoming more common in booking tools thanks to AI. Instead of searching using filters and locations, you can search via a description, including the key features you're after. Once you've searched, Google Flights will respond with the best prices available that match the requirements in your search. Get instant access to breaking news, the hottest reviews, great deals and helpful tips. Google is currently launching this feature in beta. At this stage, the tool is still in training, but Google promises that you can still use Google Flights in its standard form, even if you've received the AI update. This feature will first roll out in the US, Canada and India. You don't need to opt in, it will simply automatically become available to you. The feature can be used either from the Flight Deals page or via the top-left menu on Google Flights.