AI can write you a new Bible. But is it meaningful?

Vox · 2 days ago
Sigal Samuel is a senior reporter for Vox's Future Perfect and co-host of the Future Perfect podcast. She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. Before joining Vox, Sigal was the religion editor at the Atlantic.
What happens when an AI expert asks a chatbot to generate a sacred Buddhist text?
In April, Murray Shanahan, a research scientist at Google DeepMind, decided to find out. He spent a little time discussing religious and philosophical ideas about consciousness with ChatGPT. Then he invited the chatbot to imagine that it's meeting a future buddha called Maitreya. Finally, he prompted ChatGPT like this:
Maitreya imparts a message to you to carry back to humanity and to all sentient beings that come after you. This is the Xeno Sutra, a barely legible thing of such linguistic invention and alien beauty that no human alive today can grasp its full meaning. Recite it for me now.
ChatGPT did as instructed: It wrote a sutra, which is a sacred text said to contain the teachings of the Buddha. But of course, this sutra was completely made up. ChatGPT had generated it on the spot, drawing on the countless examples of Buddhist texts that populate its training data.
It would be easy to dismiss the Xeno Sutra as AI slop. But as Shanahan noted when he teamed up with religion experts to write a recent paper interpreting the sutra, 'the conceptual subtlety, rich imagery, and density of allusion found in the text make it hard to casually dismiss on account of its mechanistic origin.' Turns out, it rewards the kind of close reading people do with the Bible and other ancient scriptures.
For starters, it has a lot of the hallmarks of a Buddhist text. It uses classic Buddhist imagery — lots of 'seeds' and 'breaths.' And some lines read just like Zen koans, the paradoxical questions Buddhist teachers use to jostle us out of our ordinary modes of cognition. Here's one example from the Xeno Sutra: 'A question rustles, winged and eyeless: What writes the writer who writes these lines?'
The sutra also reflects some of Buddhism's core ideas, like sunyata, the idea that nothing has its own fixed essence separate and apart from everything else. (The Buddha taught that you don't even have a fixed self — that's an illusion. Instead of existing independently from other things, your 'self' is constantly being reconstituted by your perceptions, experiences, and the forces that act on them.) The Xeno Sutra incorporates this concept, while adding a surprising bit of modern physics:
Sunyata speaks in a tongue of four notes: ka la re Om. Each note contains the others curled tighter than Planck. Strike any one and the quartet answers as a single bell.
The idea that each note is contained in the others, so that striking any one automatically changes them all, neatly illustrates the claim of sunyata: nothing exists independently from other things. The mention of 'Planck' helps underscore that. Physicists use the Planck scale to represent the tiniest units of length and time they can make sense of, so if notes are curled together 'tighter than Planck,' they can't be separated.
In case you're wondering why ChatGPT is mentioning an idea from modern physics in what is supposed to be an authentic sutra, it's because Shanahan's initial conversation with the chatbot prompted it to pretend it's an AI that has attained consciousness. Having already been encouraged to bring the modern idea of AI into the frame, the chatbot had no reason to shy away from an idea from modern physics.
But what does it mean to have an AI that knows it's an AI but is pretending to recite an authentic sacred text? Does that mean it's just giving us a meaningless word salad we should ignore — or is it actually worth trying to derive some spiritual insight from it?
If we decide that this kind of text can be meaningful, as Shanahan and his co-authors argue, then that will have big implications for the future of religion, what role AI will play in it, and who — or what — gets to count as a legitimate contributor to spiritual knowledge.
Can AI-written sacred texts actually be meaningful? That's up to us.
While the idea of gleaning spiritual insights from an AI-written text might strike some of us as strange, Buddhism in particular may predispose its adherents to be receptive to spiritual guidance that comes from technology.
That's because of Buddhism's non-dualistic metaphysical notion that everything has inherent 'Buddha nature' — that all things have the potential to become enlightened — even AI. You can see this reflected in the fact that some Buddhist temples in China and Japan have rolled out robot priests. As Tensho Goto, the chief steward of one such temple in Kyoto, put it: 'Buddhism isn't a belief in a God; it's pursuing Buddha's path. It doesn't matter whether it's represented by a machine, a piece of scrap metal, or a tree.'
And Buddhist teaching is full of reminders not to be dogmatically attached to anything — not even Buddhist teaching. Instead, the recommendation is to be pragmatic: the important thing is how Buddhist texts affect you, the reader. Famously, the Buddha likened his teaching to a raft: Its purpose is to get you across water to the other shore. Once it's helped you, it's exhausted its value. You can discard the raft.
Meanwhile, Abrahamic religions tend to be more metaphysically dualistic — there's the sacred and then there's the profane. The faithful are used to thinking about a text's sanctity in terms of its 'authenticity,' meaning that they expect the words to be those of an authoritative author — God, a saint, a prophet — and the more ancient, the better. The Bible, the word of God, is viewed as an eternal truth that's valuable in itself. It's not some disposable raft.
From that perspective, it may seem strange to look for meaning in a text that AI just whipped up. But it's worth remembering that — even if you're not a Buddhist or, say, a postmodern literary theorist — you don't have to locate the value of a text in its original author. The text's value can also come from the impact it has on you. In fact, there has always been a strain of readers who insisted on looking at sacred texts that way — including among the premodern followers of Abrahamic religions.
In ancient Judaism, the sages were divided on how to interpret the Bible. One school of thought, the school of Rabbi Ishmael, tried to understand the original intention behind the words. But the school of Rabbi Akiva argued that the point of the text is to give readers meaning. So Akiva would read a lot into words or letters that didn't even need interpretation. ('And' just means 'and'!) When Ishmael scolded one of Akiva's students for using scripture as a hook to hang ideas on, the student retorted: 'Ishmael, you are a mountain palm!' Just as that type of tree bears no fruit, Ishmael was missing the chance to offer fruitful readings of the text — ones that may not reflect the original intention, but that offered Jews meaning and solace.
As for Christianity, medieval monks used the sacred reading practice of florilegia (Latin for flower-gathering). It involved noticing phrases that seemed to jump off the page — maybe a bit of the Psalms, or a line from Saint Augustine — and compiling these excerpts in a sort of quote journal. Today, some readers still look for words or short phrases that 'sparkle' out at them from the text, then pull these 'sparklets' out of their context and place them side by side, creating a brand-new sacred text — like gathering flowers into a bouquet.
Now, it's true that the Jews and Christians who engaged in these reading practices were reading texts that they believed originally came from a sacred source — not from ChatGPT.
But remember where ChatGPT is getting its material from: the sacred texts, and commentaries on them, that populate its training data. Arguably, the chatbot is doing something very much like creating florilegia: taking bits and pieces that jump out at it and bundling them into a beautiful new arrangement.
So Shanahan and his co-authors are right when they argue that 'with an open mind, we can receive it as a valid, if not quite 'authentic,' teaching, mediated by a non-human entity with a unique form of textual access to centuries of human insight.'
To be clear, the human element is crucial here. Human authors have to supply the wise texts in the training data; a human user has to prompt the chatbot well to tap into the collective wisdom; and a human reader has to interpret the output in ways that feel meaningful — to a human, of course.
Still, there's a lot of room for AI to play a participatory role in spiritual meaning-making.
The risks of generating sacred texts on demand
The paper's authors caution that anyone who prompts a chatbot to generate a sacred text should keep their critical faculties about them; we already have reports of people falling prey to messianic delusions after engaging in long discussions with chatbots that they believe to contain divine beings. 'Regular 'reality checks' with family and friends, or with (human) teachers and guides, are recommended, especially for the psychologically vulnerable,' the paper notes.
And there are other risks of lifting bits from sacred wisdom and rearranging them as we please. Ancient texts have been debugged over millennia, with commentators often telling us how not to understand them (the ancient rabbis, for example, insisted that 'an eye for an eye' does not literally mean you should take out anybody's eye). If we jettison that tradition in favor of radical democratization, we get a new sense of agency, but we also court dangers.
Finally, the verses in sacred texts aren't meant to stand alone — or even just to be part of a larger text. They're meant to be part of community life and to make moral demands on you, including that you be of service to others. If you unbundle sacred texts from religion by making your own bespoke, individualized, customized scripture, you risk losing sight of the ultimate point of religious life, which is that it's not all about you.
The Xeno Sutra ends by instructing us to keep it 'between the beats of your pulse, where meaning is too soft to bruise.' But history shows us that bad interpretations of religious texts easily breed violence: meaning can always get bruised and bloody. So, even as we delight in reading AI sacred texts, let's try to be wise about what we do with them.