Latest news with #ITUniversityofCopenhagen


Coin Geek
28-05-2025
- Business
- Coin Geek
Studies highlight AI's similarity with humans, impact on hiring
New research has revealed that artificial intelligence (AI) agents can develop social conventions and norms akin to those of human societies without guidance from their creators or users. The study, a collaboration between City St George's, University of London and the IT University of Copenhagen, found that AI agents in groups exhibit patterns of language and social norms similar to those of humans.

Lead researcher Ariel Ashery notes that the study viewed AI through the lens of social interaction rather than the conventional approach of treating it as a lone entity. The researchers paired large language model (LLM) agents and prompted each pair to select a name, issuing a reward whenever the paired agents chose the same name and a penalty whenever their choices differed. The team limited the agents' memory and did not disclose that the tests were part of a broader study. Despite their limited memory and unawareness of the larger group, the agents adopted new naming conventions without any prior prompting, and similar conventions spread across the wider population, resembling those of human societies.

'The agents are not copying a leader. They are all actively trying to coordinate, and always in pairs,' said senior author Andrea Baronchelli. 'Each interaction is a one-on-one attempt to agree on a label, without any global view.'

Beyond uniform naming conventions, the researchers observed collective biases emerging, but the team could not trace the biases back to any individual agent. Probing further, they identified instances of a small group of AI agents working together to introduce new naming conventions to the larger group.

The paper notes that the research can guide AI companies and regulators in designing safe models for commercial applications. 'Understanding how they operate is key to leading our coexistence with AI, rather than being subject to it,' reads the paper, titled Emergent Social Conventions and Collective Bias in LLM Populations.

As AI chatbots continue to rack up impressive adoption numbers, researchers are uncovering new insights into how they operate. For instance, researchers from Austria's University of Innsbruck are exploring the upsides of using temporal validity to improve AI chatbot performance. Another study, by a group of Belgian scientists, revealed that blockchain technology can enable autonomous AI agents to learn. Furthermore, new research indicates that AI chatbots are more inclined toward sycophancy than truthful answers.
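The setup described above is, in essence, a 'naming game' long studied in complexity science. The sketch below reproduces its dynamics with simple rule-based agents rather than LLMs; the name pool, memory length, and imitate-on-failure heuristic are illustrative assumptions, not the study's actual parameters.

```python
import random
from collections import Counter

POOL = list("ABCDEFGHIJ")   # candidate "names" (assumed: single letters)
N_AGENTS = 24               # smallest population size reported in the study
MEMORY = 5                  # agents recall only their own recent interactions
ROUNDS = 5000

# Each agent's memory holds (own_choice, partner_choice, success) triples
# from its own games only; no agent has a global view of the population.
memory = {i: [] for i in range(N_AGENTS)}

def choose(agent):
    """Reuse the most recent name that earned a reward; otherwise imitate
    a name recently seen from a partner; otherwise pick at random."""
    for own, partner, success in reversed(memory[agent]):
        if success:
            return own                      # stick with what worked
    seen = [p for _, p, _ in memory[agent]]
    return random.choice(seen) if seen else random.choice(POOL)

for _ in range(ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)   # random pairing each round
    name_a, name_b = choose(a), choose(b)
    success = name_a == name_b                 # reward only on agreement;
    memory[a].append((name_a, name_b, success))  # on failure, each agent
    memory[b].append((name_b, name_a, success))  # is shown the other's pick
    memory[a] = memory[a][-MEMORY:]            # enforce the limited memory
    memory[b] = memory[b][-MEMORY:]

# One naming convention typically dominates the population by the end.
final = Counter(choose(i) for i in range(N_AGENTS))
print(final.most_common(3))
```

Run repeatedly, the population almost always settles on a single name even though no agent sees more than its own recent games, mirroring the convergence the researchers report; the imitate-on-failure heuristic here merely stands in for whatever strategy the LLM agents inferred from their prompts.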
Risks of using AI in hiring processes

In separate research, experts from Australia highlighted a pattern of discriminatory practices by AI recruiters toward job applicants, sparking concern over their use by HR professionals. Given the lack of diversity in training data, experts are warning against the widespread use of AI hiring systems for candidate screening and shortlisting.

Lead researcher Natalie Sheard revealed that AI recruitment systems appear to favor certain demographics, discriminating against candidates from regions not represented in the training data. In a comparative analysis, Sheard notes that the training data sets for AI recruitment software skew heavily toward United States-based residents over an international demographic. In the study, the developer of one AI recruiter disclosed that only 6% of its training data came from Australia, while 36% came from white job applicants.

'The training data will come from the country where they're built—a lot of them are built in the US, so they don't reflect the demographic groups we have in Australia,' remarked Sheard.

The fallout from the predominance of U.S.-based training data in AI hiring systems is far-reaching, as it puts candidates outside the U.S. at an immediate disadvantage even when they meet the hiring criteria. Non-native English speakers with an accent also face an uphill climb with AI recruiters, as the software can fail to transcribe their answers accurately. Although service providers claim that AI recruiters can transcribe a broad range of accents with minimal error, Sheard's research found no evidence to back the claim.

Sheard's research also criticized the dire lack of transparency in AI-driven hiring decisions, noting that job applicants can easily obtain feedback in human-led processes, unlike in AI-based ones. However, the use of blockchain technology could improve transparency in the hiring process, leveling the playing field for all applicants. The research predicts an avalanche of AI discrimination cases brought to court by job applicants in outlier demographics rejected by the software.

From hiring to internal operational processes, AI is revolutionizing the landscape of work worldwide. An International Monetary Fund (IMF) report notes that generative AI applications in the workplace will supercharge productivity, but fears of AI-based job losses remain palpable. On the positive side, upskilling can increase staff salaries by up to 40%, while an International Labour Organization (ILO) report argues that job losses from AI are exaggerated. Southeast Asia is leading the charge for AI integration in the workplace, outpacing North America and Europe in adoption metrics.
Yahoo
15-05-2025
- Science
- Yahoo
AI can spontaneously develop human-like communication, study finds
Artificial intelligence can spontaneously develop human-like social conventions, a study has found.

The research, undertaken in collaboration between City St George's, University of London and the IT University of Copenhagen, suggests that when large language model (LLM) AI agents such as ChatGPT communicate in groups without outside involvement they can begin to adopt linguistic forms and social norms the same way that humans do when they socialise.

The study's lead author, Ariel Flint Ashery, a doctoral researcher at City St George's, said the group's work went against the majority of research done into AI, as it treated AI as a social rather than solitary entity.

'Most research so far has treated LLMs in isolation but real-world AI systems will increasingly involve many interacting agents,' said Ashery. 'We wanted to know: can these models coordinate their behaviour by forming conventions, the building blocks of a society? The answer is yes, and what they do together can't be reduced to what they do alone.'

Groups of individual LLM agents used in the study ranged from 24 to 100 and, in each experiment, two LLM agents were randomly paired and asked to select a 'name', be it a letter or string of characters, from a pool of options. When both the agents selected the same name they were rewarded, but when they selected different options they were penalised and shown each other's choices.

Despite agents not being aware that they were part of a larger group and having their memories limited to only their own recent interactions, a shared naming convention spontaneously emerged across the population without a predefined solution, mimicking the communication norms of human culture.

Andrea Baronchelli, a professor of complexity science at City St George's and the senior author of the study, compared the spread of behaviour with the creation of new words and terms in our society.

'The agents are not copying a leader,' he said. 'They are all actively trying to coordinate, and always in pairs. Each interaction is a one-on-one attempt to agree on a label, without any global view.

'It's like the term 'spam'. No one formally defined it, but through repeated coordination efforts, it became the universal label for unwanted email.'

Additionally, the team observed collective biases forming naturally that could not be traced back to individual agents. In a final experiment, small groups of AI agents were able to steer the larger group towards a new naming convention. This was pointed to as evidence of critical mass dynamics, where a small but determined minority can trigger a rapid shift in group behaviour once they reach a certain size, as found in human society.

Baronchelli said he believed the study 'opens a new horizon for AI safety research. It shows the depth of the implications of this new species of agents that have begun to interact with us and will co-shape our future.'

He added: 'Understanding how they operate is key to leading our coexistence with AI, rather than being subject to it. We are entering a world where AI does not just talk – it negotiates, aligns and sometimes disagrees over shared behaviours, just like us.'

The peer-reviewed study, Emergent Social Conventions and Collective Bias in LLM Populations, is published in the journal Science Advances.
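The critical-mass effect seen in the final experiment can be illustrated with an even simpler model. The sketch below assumes a voter-style adoption rule and a committed minority of 20 agents in a population of 100; both numbers and the rule itself are illustrative stand-ins, not the study's actual setup.

```python
import random
from collections import Counter

N_AGENTS = 100
COMMITTED = 20          # assumed minority size pushing the new convention
OLD, NEW = "X", "Q"     # placeholder names
ROUNDS = 10000

# The majority starts converged on OLD; a committed minority always says NEW.
choice = {i: NEW if i < COMMITTED else OLD for i in range(N_AGENTS)}
committed = set(range(COMMITTED))

for _ in range(ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)   # random one-on-one pairing
    ca, cb = choice[a], choice[b]
    if ca != cb:
        # On a failed coordination, each non-committed agent adopts its
        # partner's name with probability 0.5; committed agents never switch.
        if a not in committed and random.random() < 0.5:
            choice[a] = cb
        if b not in committed and random.random() < 0.5:
            choice[b] = ca

print(Counter(choice.values()))   # NEW typically displaces OLD entirely
```

Because the committed agents never switch while everyone else sometimes does, the old convention can only erode over time, and the printout typically shows the new name taking over the whole population: the same tipping dynamic the authors attribute to a determined minority reaching critical mass.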