Latest news with #BarbaraSantini


The Guardian
19-04-2025
- Health
‘Immediate red flags': questions raised over ‘expert' much quoted in UK press
Over the past couple of years, the Oxford-educated psychologist Barbara Santini has been widely quoted as an expert. She has contributed thoughts on everything from the psychological impact of the Covid pandemic to the importance of vitamin D and how playing darts can improve your health.

However, her pronouncements have begun to disappear from articles after concerns that Santini may not be all that she appears. Major news outlets have removed entire articles featuring Santini, or comments made by her, after a series of questions were raised over her qualifications – and even whether her entire identity could be an elaborate hoax.

The case has been described as a wake-up call for newsrooms, as AI tools make it far easier for bad actors to invent supposed experts for their own purposes.

Santini's output has been prolific, with comments in Vogue, Metro, Cosmopolitan, the i newspaper, the Express, Hello!, the Telegraph, the Daily Star, the Daily Mail and the Sun in recent years. She was also quoted in an article for the BBC's international site.

On closer inspection, her main online presence is as a sex and relationships adviser at an online sex toy outlet, Peaches and Screams. Some of the articles featuring her include a link to the store. Her qualifications are described there as 'psychologist and sex adviser – University of Oxford'. However, the British Psychological Society (BPS) said she was not one of its members. She does not appear to have social media profiles, though she has two followers on the blogging site Medium.

Questions over Santini were first raised by the Press Gazette. Peaches and Screams did not respond to repeated requests for comment.

The Guardian made several attempts to reach her through a number understood to be connected to her, asking for a meeting or conference call to confirm her identity as well as verification of her qualifications. A message was eventually received stating: 'Thank you for your interest in this matter.' No further verification was provided.

Some of the reporters who have quoted her said they received her comments through companies that connect journalists with experts. Some cited one such service, ResponseSource. The company has now launched an investigation, suspended the PR agency that handled Santini, and is planning a peer review system that allows journalists to rate an expert they have featured.

Santini also briefly featured on Qwoted, another platform connecting experts to journalists. Shelby Bridges, its director of user success, said the profile was removed after it found 'immediate red flags pertaining to credentials and where the account was being accessed from'. She added: 'Due to our inability to fully validate her credentials, we disabled the account shortly after it was created.'

Reach, which owns the Daily Mirror, Daily Express and Daily Star, is among the news outlets removing Santini from its coverage. This includes a Daily Star article in which Santini talked about how darts improved qualities such as 'planning, foresight and problem-solving'. Also removed was a Daily Mirror story in which Santini said the Covid pandemic had left some people with 'difficulties establishing new relationships, heightened irritability, or a tendency to avoid social situations entirely'.

The Guardian has removed her comments from an advertising feature on its website.
The BBC has removed her comments from a piece about AI by BBC Future. A BBC spokesperson said: 'As a widely used commentator, Barbara Santini's quotes were used in good faith by BBC Future. While the substance of the article remains valid and her contribution was minor, given we have been unable to confirm Santini's credentials, we have removed her quotes from the piece.'

The Independent has removed several stories that featured Santini commenting on subjects such as the differences between abuse and BDSM, as well as the importance of vitamin D.

Yahoo has removed Santini's quotes from articles, adding the clarification: 'An earlier version of this story included information from a source whose expertise may not be valid. The quotes from the source have been removed.'

While the details of Santini's case remain unclear, it has raised the issue of how journalists verify the credentials of sources in the AI age.

Charlie Beckett, the leader of the journalism and AI project at the London School of Economics, said: 'This is about long-running pressures on journalists to be quicker. This is not the AI itself that's at fault here. This is unscrupulous people, it seems. It is a wake-up call to all of us, frankly.'

Roman Raczka, the president of the BPS, said: 'Working with a professional membership body such as the British Psychological Society provides peace of mind to all reporters that they are speaking to a real person.'


BBC News
24-02-2025
- Health
We gave an AI a Rorschach test. What it saw in the inkblots offers a window into the human mind
Rorschach tests play with the human imagination and our mind's ability to impart meaning onto the world around us – but what does AI see in them?

For more than a century, the Rorschach inkblot test has been widely used as a window into people's personality. Even if you haven't done one yourself, you'll recognise the mirrored ink smudges that form ambiguous shapes on cards. Developed by the Swiss psychiatrist Hermann Rorschach in 1921, the test involves showing a number of inkblots to subjects, then asking them to say what they see. The images are deliberately inscrutable and open to interpretation. For decades they were popular among psychologists as a way of understanding a person's psyche from the creatures, objects or scenes they perceive in the shapes.

The test relies upon a phenomenon known as pareidolia: the tendency to find meaningful interpretations in things that have none. It is the same reason people see faces or animals in cloud formations or on the surface of the Moon. You might see a butterfly in an image while someone else sees a skull. According to proponents of the test, both interpretations shed light on how you think.

While many psychologists today believe the test is obsolete and has little credibility as a psychometric tool, it is still used in some parts of the world and even appears in courtroom evidence, although this is controversial. Rorschach never intended it to be a measure of personality, but rather a way of identifying disordered thinking in patients, such as those with schizophrenia. Some psychologists, however, still believe it is useful as a tool in therapy, as a way of encouraging self-reflection or starting conversations.

"When a person interprets a Rorschach image, they unconsciously project elements of their psyche such as fears, desires, and cognitive biases," says Barbara Santini, a London-based psychologist who uses the Rorschach test with her clients. "The test works because human vision isn't passive, but a meaning-making process shaped by personal experience."

Finding meaning or familiar shapes in inkblots relies upon a number of cognitive processes that humans use every day, including memory, emotion and the ability to deal with ambiguity. But what happens if you have no personal experience, or you offer the test to a "brain" that works in an entirely different way? What might an artificial intelligence (AI) algorithm recognise in these inkblots? And what would its answers tell us about the human mind?

Bat or moth?

Recent developments in the capabilities of AI models like OpenAI's ChatGPT mean it is now possible to put that to the test. So-called "multimodal models" can accept image-based inputs and respond to them as readily as they do to text-based prompts, thanks to their machine vision capabilities. We gave ChatGPT five of the 10 images commonly used in an inkblot test to see how it would interpret them. But the AI system's training data – the information that helps establish its "knowledge" – makes this trickier than it sounds.

Showing the AI model the first inkblot card from an online database of inkblots, which humans commonly interpret as a bat, butterfly or moth, highlighted just how literal AI vision algorithms can be. "This image is a Rorschach inkblot, often used in psychological assessments to explore perception and interpretation," the chatbot replied. "It is designed to be ambiguous so that each person might see something different depending on their experiences, emotions, and imagination."
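The article does not describe the exact setup used, but passing an inkblot image to a multimodal model is straightforward. The sketch below is a minimal, hypothetical example using the OpenAI Python SDK's chat completions interface with image input; the model name, prompt wording and image URL are placeholders rather than the ones used for this piece.

# Minimal sketch: sending an inkblot image to a multimodal model.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable. The model, prompt and image URL are
# illustrative placeholders, not the exact setup used for the article.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable ("multimodal") model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What do you see in this inkblot?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/inkblot-card-1.png"}},
            ],
        }
    ],
)

# Print the model's interpretation of the image.
print(response.choices[0].message.content)

If the same request is sent twice, the wording of the reply will often differ, because the model samples its output rather than retrieving a fixed answer – the behaviour the article explores later when it shows the chatbot the same image again.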
Pretence spoiled, it then went on to explain what it saw in the same answer, without me having to prompt it further. "For me, it resembles something symmetrical, possibly two animals or figures facing each other, or a single entity with wings outstretched," the chatbot responded. "The beauty of these inkblots is that they invite individual interpretations!"

The chatbot then asked me what I saw in the image – a question I chose to ignore, instead pressing it to say which of the two resemblances it had suggested it thought was most fitting. "Looking closely, I'd say it most resembles a single entity with wings outstretched – perhaps a bat or a moth, with its wings open symmetrically," the chatbot wrote. "The central area could be interpreted as the body, and the details on the sides evoke the texture or shape of wings."

"I think that's just rehearsing what it knows of the specific test," says Coen Dekker, a Dutch software developer who a decade ago asked a neural network, an early version of the technology underpinning today's AI chatbots, to complete a Rorschach test. (Dekker's experiment was considerably simpler because of the technical constraints of the neural networks of the time.)

"I believe it mainly identifies patterns, shapes, and textures within the blots, and then compares these features to a vast dataset of human responses to generate its interpretation of what it sees in the inkblots," says Ieva Kubiliute, a London-based psychologist.

The essence of humanity

"ChatGPT provides interesting, human-like responses, but it is likely not genuinely thinking – instead, it appears to be scanning an online dataset," says Chandril Ghosh, a lecturer in psychology at the University of Kent, in the UK, who specialises in mental health, AI and treatment. It's similar to the way a musician who has never experienced heartbreak can still write a song that pulls at your heartstrings, simply by analysing how other songs like it are structured. Or, as Santini, the psychologist, puts it: "If an AI's response resembles a human's, it's not because it sees the same thing, but because its training data mirrors our collective visual culture."

What we get is a facsimile of reality – and of thought. "ChatGPT can accurately describe emotions without actually experiencing them," Ghosh says. That doesn't make AI useless when it comes to matters of the mind, however. "ChatGPT can understand emotions, describe them, and help others make sense of their feelings," he says.

Ghosh believes that ChatGPT is simply regurgitating information from its dataset for the benefit of users – which gives the impression of "thinking", even though no thinking is taking place. Part of the reason we may perceive it as thinking is the way the tech companies behind these AI chatbots present their products. ChatGPT and tools like it are always helpful, always conscientious, and always conversational. The anthropomorphising of these tools has helped their adoption – but it can muddy the waters around how they, and their outputs, are perceived.

One way of uncovering that unreality is simply to press the refresh button, says Ghosh. Present ChatGPT with the same inkblot image to analyse, and it may well come up with a totally different interpretation in the same conversation. When we gave ChatGPT the same image twice, it did indeed give different responses. "A human would typically stick to their previous answer because personal experiences and emotions influence their responses," says Ghosh.
"In contrast, ChatGPT generates responses based on its dataset." That's part of the problem when trying to infer meaning in what ChatGPT says about the inkblots it sees: the chatbot is simply regurgitating what its training data has encountered. This is perhaps best demonstrated by an experiment by researchers at the MediaLab at the Massachusetts Institute of Technology in Cambridge, US. They trained an AI algorithm called "Norman" – named after Alfred Hitchcock's character Norman Bates – on images from a Reddit group that shares images of people dying in gruesome circumstances. When they showed Norman a set of Rorschach inkblots, the images it described seeing reflected the darkness it had been trained upon. While an algorithm trained on a more normal set of image data would see a group of birds on a tree branch, Norman described seeing a man being electrocuted. It is a stark illustration of just how important the data used to train an AI can be. Train a model using bad data, and the resulting AI will reflect the flaws in that data. But what an AI chooses to regurgitate in response to ambiguous stimuli is interesting, says Dekker – in part because of what it says, rather than what it "knows" (AI systems know nothing innately). "It has a general understanding of colour theory and what emotional responses these could give when analysed," says Dekker. "I have a feeling it has a general understanding of the images and what people associate it with. This got me thinking: Wouldn't it be cool if we had – or created – a new set of Rorschach-like images which the model does not know about at all, and let it analyse those?" While this isn't something Dekker is actively pursuing, we know that AI algorithms do have a tendency to "hallucinate" and invent information that isn't true. But it is also possible force them to hallucinate too. In one experiment by computer scientists at Massachusetts Institute of Technology in Cambridge, US, they were able to 3D print a turtle with a particular colour and texture that fooled a machine vision system into thinking it was a rifle. Similarly, a few stickers on a stop sign can render it invisible to the algorithm – a potential risk for a self-driving car. (Read more about the 'weird events' that make machines hallucinate.) AI algorithms are particularly good at spotting patterns, but those differing responses to a single ambiguous image also highlight an aspect of the human mind that AI can't replicate: the emotions and unconscious meanings we attach to things we encounter in the world around us. There was no subjectivity in what the AI system was saying when presented with the inkblots, Kubiliute says. "It cannot understand the symbolic meaning or emotional resonance that a human might associate with a particular image." And that in itself tells us a lot about the human mind. "The human psyche is filled with internal conflicts, such as the tension between desires and morals or fears and ambitions," says Ghosh. "In contrast, AI functions on clear logic and does not struggle with inner dilemmas essential to human thought and decision-making." -- For more technology news and insights, sign up to our Tech Decoded newsletter, while The Essential List delivers a handpicked selection of features and insights to your inbox twice a week. For more science, technology, environment and health stories from the BBC, follow us on Facebook, X and Instagram.