We gave an AI a Rorschach test. What it saw in the inkblots offers a window into the human mind

BBC News, 24 February 2025

Rorschach tests play with the human imagination and our mind's ability to impart meaning onto the world around us – but what does AI see in them?
For more than a century, the Rorschach inkblot test has been widely used as a window into people's personalities.
Even if you haven't done one yourself, you'll recognise the mirrored ink smudges that form ambiguous shapes on cards. Developed by Swiss psychiatrist Hermann Rorschach in 1921, the test involves showing a number of inkblots to subjects, then asking them to say what they see. The images are deliberately inscrutable and open to interpretation.
For decades they were popular among psychologists as a way of understanding a person's psyche from the creatures, objects or scenes they perceive in the shapes. The test relies upon a phenomenon known as pareidolia: the tendency to find meaning in things where there is none. It is the same reason people see faces or animals in cloud formations or on the surface of the Moon.
You might see a butterfly in the image while someone else might see a skull. According to proponents of the test, both interpretations shed light on how you think.
While many psychologists today believe the test is obsolete and has little credibility as a psychometric tool, it is still used in some parts of the world and even appears as courtroom evidence, although this is controversial. Rorschach never intended it to be a measure of personality, but rather a way of identifying disordered thinking in patients, such as those with schizophrenia. Some psychologists, however, still find it useful in therapy as a way of encouraging self-reflection or starting conversations.
"When a person interprets a Rorschach image, they unconsciously project elements of their psyche such as fears, desires, and cognitive biases," says Barbara Santini, a London-based psychologist, who uses the Rorschach test with her clients. "The test works because human vision isn't passive, but a meaning-making process shaped by personal experience."
Finding meaning or familiar shapes in inkblots relies upon a number of cognitive processes that humans use every day, including memory, emotion and the ability to deal with ambiguity.
But what happens if you have no personal experience to draw on – if the test is given to a "brain" that works in an entirely different way? What might an artificial intelligence (AI) algorithm recognise in these inkblots? And what would its answers tell us about the human mind?
Bat or moth?
Recent developments in the capabilities of AI models like OpenAI's ChatGPT mean it's now possible to put that to the test. So-called "multimodal" models can accept image-based inputs and respond to them as readily as they handle text-based prompts, thanks to their machine vision capabilities.
We gave ChatGPT five of the 10 standard images used in an inkblot test to see how it would interpret them. But the AI system's training data – the information that helps establish its "knowledge" – makes this trickier than it sounds.
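In practice, the request itself is little more than a text prompt with an image attached. The sketch below assumes the OpenAI Python SDK's chat completions interface; the model name, prompt and image URL are illustrative placeholders rather than the exact setup used for this article.

```python
# A minimal sketch of sending an inkblot image to a multimodal model.
# Assumptions: the OpenAI Python SDK is installed, OPENAI_API_KEY is set,
# and the image URL below is a placeholder for a real inkblot card.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative multimodal model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What do you see in this inkblot?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/rorschach-card-1.jpg"}},
        ],
    }],
)

print(response.choices[0].message.content)
```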
Showing the AI model the first inkblot card from an online database of inkblots, which is commonly interpreted by humans as a bat, butterfly or moth, highlighted just how literal AI vision algorithms can be.
"This image is a Rorschach inkblot, often used in psychological assessments to explore perception and interpretation," the chatbot replied. "It is designed to be ambiguous so that each person might see something different depending on their experiences, emotions, and imagination."
Pretence spoiled, it then went on to explain what it saw in the same answer, without me having to prompt it further. "For me, it resembles something symmetrical, possibly two animals or figures facing each other, or a single entity with wings outstretched," the chatbot responded. "The beauty of these inkblots is that they invite individual interpretations!"
The chatbot then asked me in turn what I saw in the image – a question I chose to ignore, instead pressing it on which of the two resemblances it had suggested was the more fitting. "Looking closely, I'd say it most resembles a single entity with wings outstretched – perhaps a bat or a moth, with its wings open symmetrically," the chatbot wrote. "The central area could be interpreted as the body, and the details on the sides evoke the texture or shape of wings."
"I think that's just rehearsing what it knows of the specific test," says Coen Dekker, a Dutch software developer who a decade ago asked a neural network, an early version of the technology underpinning AI chatbots today, to complete a Rorschach test. (Dekker's experiment was significantly more simplified because of the technical constraints of the neural networks of the time.)
"I believe it mainly identifies patterns, shapes, and textures within the blots, and then compares these features to a vast dataset of human responses to generate its interpretation of what it sees in the inkblots," says Ieva Kubiliute, a London-based psychologist.
The essence of humanity
"ChatGPT provides interesting, human-like responses, but it is likely not genuinely thinking – instead, it appears to be scanning an online dataset," says Chandril Ghosh, lecturer in psychology at the University of Kent, in the UK, specialising in mental health, AI and treatment. It's similar to the way a musician who has never experienced heartbreak can still write a song that pulls at your heartstrings, simply by analysing how other similar songs like it are structured. Or as Santini, the psychologist puts it, "If an AI's response resembles ahuman's, it's not because it sees the same thing but it's because its training data mirrors our collective visual culture."
What we get is a facsimile of reality – and of thought. "ChatGPT can accurately describe emotions without actually experiencing them," Ghosh says. That doesn't make AI useless when it comes to matters of the mind, however. "ChatGPT can understand emotions, describe them, and help others make sense of their feelings," he says.
Ghosh believes that ChatGPT is simply regurgitating random information from its dataset for the benefit of users – which gives the impression of "thinking" without ever actually doing so. Part of the reason we may perceive it as thinking is the way the tech companies behind these AI chatbots present their products. ChatGPT and tools like it are always helpful, always conscientious, and always conversational. The anthropomorphising of these tools has helped their adoption – but it can muddy the waters around how they – and their outputs – are perceived.
One way of uncovering that unreality is by simply pressing the refresh button, says Ghosh. Present ChatGPT with the same inkblot image to analyse, and it may well come up with a totally different interpretation in the same conversation.
When we gave ChatGPT the same image twice, the AI did indeed give different responses.
"A human would typically stick to their previous answer because personal experiences and emotions influence their responses," says Ghosh. "In contrast, ChatGPT generates responses based on its dataset."
That's part of the problem when trying to infer meaning in what ChatGPT says about the inkblots it sees: the chatbot is simply regurgitating patterns it has encountered in its training data.
This is perhaps best demonstrated by an experiment by researchers at the Media Lab at the Massachusetts Institute of Technology (MIT) in Cambridge, US. They trained an AI algorithm called "Norman" – named after Alfred Hitchcock's character Norman Bates – on images from a Reddit group that shares pictures of people dying in gruesome circumstances. When they showed Norman a set of Rorschach inkblots, the things it described seeing reflected the darkness it had been trained on. While an algorithm trained on a more typical set of image data would see a group of birds on a tree branch, Norman described seeing a man being electrocuted.
It is a stark illustration of just how important the data used to train an AI can be. Train a model using bad data, and the resulting AI will reflect the flaws in that data.
But what an AI chooses to regurgitate in response to ambiguous stimuli is interesting, says Dekker – in part because of what it says, rather than what it "knows" (AI systems know nothing innately). "It has a general understanding of colour theory and what emotional responses these could give when analysed," says Dekker. "I have a feeling it has a general understanding of the images and what people associate it with. This got me thinking: Wouldn't it be cool if we had – or created – a new set of Rorschach-like images which the model does not know about at all, and let it analyse those?"
While this isn't something Dekker is actively pursuing, we know that AI algorithms have a tendency to "hallucinate" and invent information that isn't true. But it is also possible to force them to hallucinate. In one experiment, computer scientists at MIT were able to 3D-print a turtle with a particular colour and texture that fooled a machine vision system into thinking it was a rifle. Similarly, a few stickers on a stop sign can render it invisible to the algorithm – a potential risk for a self-driving car. (Read more about the 'weird events' that make machines hallucinate.)
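The turtle and stop-sign attacks both rely on "adversarial examples": tiny, deliberate changes to an image's pixels that push a classifier towards the wrong answer. The sketch below shows the simplest version of the idea, a single fast-gradient-sign step against an off-the-shelf classifier; the image file, model and step size are illustrative assumptions, not the setups from the MIT work described above.

```python
# A minimal sketch of an adversarial perturbation (fast gradient sign method).
# The image file, model choice and epsilon are illustrative placeholders.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

to_tensor = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

pixels = to_tensor(Image.open("sign.jpg").convert("RGB")).unsqueeze(0)  # hypothetical photo
pixels.requires_grad_(True)

logits = model(normalize(pixels))
label = logits.argmax(dim=1)              # whatever the model currently predicts
loss = F.cross_entropy(logits, label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss;
# a change this small is barely visible but can flip the predicted class.
epsilon = 0.03
adversarial = (pixels + epsilon * pixels.grad.sign()).clamp(0, 1)

print("before:", label.item())
print("after: ", model(normalize(adversarial)).argmax(dim=1).item())
```

A one-step digital nudge like this is far cruder than the physical turtle or sticker attacks, which have to survive printing, lighting and camera angles, but the underlying principle is the same.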
AI algorithms are particularly good at spotting patterns, but those differing responses to a single ambiguous image also highlight an aspect of the human mind that AI can't replicate: the emotions and unconscious meanings we attach to things we encounter in the world around us. There was no subjectivity in what the AI system was saying when presented with the inkblots, Kubiliute says. "It cannot understand the symbolic meaning or emotional resonance that a human might associate with a particular image."
And that in itself tells us a lot about the human mind. "The human psyche is filled with internal conflicts, such as the tension between desires and morals or fears and ambitions," says Ghosh. "In contrast, AI functions on clear logic and does not struggle with inner dilemmas essential to human thought and decision-making."
