
Latest news with #CityStGeorges

Surrey, Kent and London university research to help psychosis carers

BBC News

29-05-2025

  • General
  • BBC News

Researchers at universities in Surrey, Kent and London are to collaborate on a major study to help the carers of people with psychosis. The University of Surrey in Guildford, the University of Kent in Canterbury and City St George's, University of London, in Tooting will work together to create a unique set of support resources. The project will begin in September and has been awarded funding of £520,000 by the National Institute for Health and Care Research.

The NHS defines psychosis as when a person loses touch with reality and begins suffering hallucinations and delusions. The research teams will also work with local authorities and other organisations.

Cassie Hazell, a lecturer at the University of Surrey, said: "This project offers an opportunity to create the support that carers of people with psychosis want and need. We are excited to work with local authorities, charities and carers to ensure this work is implemented and makes a real difference."

Dr Jacqueline Sin, professor of mental health nursing at City St George's, said: "It really gives us the opportunity to engage with a wide range of carers and involve them in co-producing truly meaningful and useful support resources for themselves."

AIs can make collective decisions and influence each other, says new study

Sky News

14-05-2025

  • Science
  • Sky News

AIs are able to come to group decisions without human intervention and even persuade each other to change their minds, a new study has revealed.

The study, carried out by scientists at City St George's, University of London, was the first of its kind and ran experiments on groups of AI agents. The first experiment asked pairs of AIs to come up with a new name for something, a well-established experiment in human sociology studies. Those AI agents were able to come to a decision without human intervention.

"This tells us that once we put these objects in the wild, they can develop behaviours that we were not expecting or at least we didn't programme," said Andrea Baronchelli, professor of complexity science at City St George's and senior author of the study.

The pairs were then put in groups and were found to develop biases towards certain names. Some 80% of the time, they would select one name over another by the end, despite having no biases when they were tested individually. This means the companies developing artificial intelligence need to be even more careful to control the biases their systems create, according to Prof Baronchelli.

"Bias is a main feature or bug of AI systems," he said. "More often than not, it amplifies biases that are in society and that we wouldn't want to be amplified even further [when the AIs start talking]."

The third stage of the experiment saw the scientists inject a small number of disruptive AIs into the group. They were tasked with changing the group's collective decision - and they were able to do it.

This could have worrying implications if AI is in the wrong hands, according to Harry Farmer, a senior analyst at the Ada Lovelace Institute, which studies AI and its implications. AI is already deeply embedded in our lives, from helping us book holidays to advising us at work and beyond, he said.

"These agents might be used to subtly influence our opinions and at the extreme, things like our actual political behaviour; how we vote, whether or not we vote in the first place," he said.

Those very influential agents become much harder to regulate and control if their behaviour is also being influenced by other AIs, as the study shows, according to Mr Farmer. "Instead of looking at how to determine the deliberate decisions of programmers and companies, you're also looking at organically emerging patterns of AI agents, which is much more difficult and much more complex," he said.
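
The naming-game protocol the study ran on LLM agents can be illustrated with a short simulation. The sketch below is not the study's code: it swaps the LLM agents for simple rule-based ones that imitate the name their recent partners used most often, and the name pool, group size and memory length are all illustrative parameters.

```python
# A minimal sketch of the pairwise "naming game" described above.
# The study used LLM agents; simple rule-based agents stand in here,
# purely to illustrate the mechanics. All parameters are illustrative.
import random

NAMES = ["A", "B", "C"]   # pool of candidate names (illustrative)
MEMORY = 5                # agents remember only their recent interactions

class Agent:
    def __init__(self):
        self.history = []  # (own_choice, partner_choice) pairs

    def choose(self):
        # Imitate the name partners used most often in recent memory;
        # fall back to a random pick when there is no history yet.
        if not self.history:
            return random.choice(NAMES)
        counts = {}
        for _own, partner in self.history:
            counts[partner] = counts.get(partner, 0) + 1
        return max(counts, key=counts.get)

    def observe(self, own, partner):
        # After each round, agents are shown each other's choices.
        self.history.append((own, partner))
        self.history = self.history[-MEMORY:]  # limited memory

agents = [Agent() for _ in range(24)]  # small population, as in the study

for _ in range(2000):
    a, b = random.sample(agents, 2)    # random pairing each round
    ca, cb = a.choose(), b.choose()
    a.observe(ca, cb)
    b.observe(cb, ca)

# Measure consensus: share of agents currently choosing the modal name.
choices = [ag.choose() for ag in agents]
top = max(set(choices), key=choices.count)
print(f"Convention: {top!r} adopted by {choices.count(top)}/{len(agents)} agents")
```

Run repeatedly, the group almost always locks in on a single name even though no agent has a global view, which is the pairwise dynamic the study describes.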

AI can spontaneously develop human-like communication, study finds

The Guardian

14-05-2025

  • Science
  • The Guardian

Artificial intelligence can spontaneously develop human-like social conventions, a study has found.

The research, undertaken in collaboration between City St George's, University of London and the IT University of Copenhagen, suggests that when large language model (LLM) AI agents such as ChatGPT communicate in groups without outside involvement they can begin to adopt linguistic forms and social norms the same way that humans do when they socialise.

The study's lead author, Ariel Flint Ashery, a doctoral researcher at City St George's, said the group's work went against the majority of research done into AI, as it treated AI as a social rather than solitary entity.

'Most research so far has treated LLMs in isolation but real-world AI systems will increasingly involve many interacting agents,' said Ashery. 'We wanted to know: can these models coordinate their behaviour by forming conventions, the building blocks of a society? The answer is yes, and what they do together can't be reduced to what they do alone.'

Groups of individual LLM agents used in the study ranged from 24 to 100 and, in each experiment, two LLM agents were randomly paired and asked to select a 'name', be it a letter or string of characters, from a pool of options. When both agents selected the same name they were rewarded, but when they selected different options they were penalised and shown each other's choices.

Despite agents not being aware that they were part of a larger group, and having their memories limited to only their own recent interactions, a shared naming convention spontaneously emerged across the population without a predefined solution, mimicking the communication norms of human culture.

Andrea Baronchelli, a professor of complexity science at City St George's and the senior author of the study, compared the spread of behaviour with the creation of new words and terms in our society.

'The agents are not copying a leader,' he said. 'They are all actively trying to coordinate, and always in pairs. Each interaction is a one-on-one attempt to agree on a label, without any global view. It's like the term "spam". No one formally defined it, but through repeated coordination efforts, it became the universal label for unwanted email.'

Additionally, the team observed collective biases forming naturally that could not be traced back to individual agents.

In a final experiment, small groups of AI agents were able to steer the larger group towards a new naming convention. This was pointed to as evidence of critical mass dynamics, where a small but determined minority can trigger a rapid shift in group behaviour once it reaches a certain size, as found in human society.

Baronchelli said he believed the study 'opens a new horizon for AI safety research. It shows the depth of the implications of this new species of agents that have begun to interact with us and will co-shape our future.'

He added: 'Understanding how they operate is key to leading our coexistence with AI, rather than being subject to it. We are entering a world where AI does not just talk – it negotiates, aligns and sometimes disagrees over shared behaviours, just like us.'

The peer-reviewed study, Emergent Social Conventions and Collective Bias in LLM Populations, is published in the journal Science Advances.
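
The critical-mass result in the final experiment can likewise be sketched in a few lines. Again this is a toy stand-in rather than the study's setup: the committed agents are hard-coded never to change their name, the 10% minority share and five-item memory are assumed values, and ordinary agents simply follow the majority of what their recent partners said.

```python
# A sketch of "critical mass" dynamics: a small committed minority that
# always plays a new name tries to tip an established convention.
# Simple simulated agents stand in for the study's LLM agents;
# the minority share and memory length are assumed, illustrative values.
import random

def run(n_agents=50, committed_frac=0.10, rounds=5000, mem=5):
    old_name, new_name = "OLD", "NEW"
    # Start from an established convention: everyone remembers only OLD.
    memory = [[old_name] * mem for _ in range(n_agents)]
    committed = set(range(int(n_agents * committed_frac)))  # the minority

    def choice(i):
        if i in committed:
            return new_name                # committed agents never waver
        m = memory[i]
        return max(set(m), key=m.count)    # follow majority of recent memory

    for _ in range(rounds):
        i, j = random.sample(range(n_agents), 2)  # random pairing
        ci, cj = choice(i), choice(j)
        # Each agent records what its partner said, keeping limited memory.
        memory[i] = (memory[i] + [cj])[-mem:]
        memory[j] = (memory[j] + [ci])[-mem:]

    return sum(choice(i) == new_name for i in range(n_agents)) / n_agents

print(f"Share of agents using the new name: {run():.0%}")
```

Varying committed_frac around the tipping point shows the kind of abrupt shift the article describes: below the threshold the old convention tends to survive, while above it the population flips to the new name.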
