
University of Hertfordshire team compete in robot contest
RoboCup, first held in 1997, has long served as a proving ground for AI and robotics researchers, and this year's competition involves 250 teams from 37 countries.
"We have been playing with humanoid robots which look like humans and the robots we use are not allowed to use anything that humans don't have," said Mr Polani, who is also on the Board of Trustees of the RoboCup Federation.

"The robots are independent - they are not remote controlled because it is a competition where AI does everything," he said. The only remote aspect is the whistle to stop and start the game, he added.

Mr Polani said the idea behind the competition was that if you want to make intelligent machines you have to put them in the real world and "if they mess up they mess up themselves".

"It is a really difficult task to kick and not fall down and you have to contend with 22 different robots working in a coordinated fashion," he added.
The University of Hertfordshire sent its first team to RoboCup in 2002, as it believed it was "where the future of robotics will lie", said Mr Polani. He said the French and Japanese teams were good, but he did not think they would do well this year.

RoboCup takes place from 17-21 July alongside other competitions in which robots are tested in rescue situations and perform household tasks. Organisers said the competition was expected to attract 150,000 spectators. Livestream coverage was also available throughout the event on Twitch and YouTube.
Related Articles


The Guardian
Staff at UK's top AI institute complain to watchdog about its internal culture
Staff at the UK's leading artificial intelligence institute have raised concerns about the organisation's governance and internal culture in a whistleblowing complaint to the charity watchdog.

The Alan Turing Institute (ATI), a registered charity with substantial state funding, is under government pressure to overhaul its strategic focus and leadership after an intervention last month from the technology secretary, Peter Kyle.

In a complaint to the Charity Commission, a group of current ATI staff raise eight points of concern and say the institute is in danger of collapse due to government threats over its funding. The complaint alleges that the board of trustees, chaired by the former Amazon UK boss Doug Gurr, has failed to fulfil core legal duties such as providing strategic direction and ensuring accountability, with staff alleging a letter of no confidence was delivered last year and not acted upon.

A spokesperson for ATI said the Charity Commission had not been in touch with the institute about any complaints that may have been sent to the organisation. They added that a whistleblower complaint had been filed last year to the government's UK Research and Innovation body, which funds ATI, and a subsequent independent investigation found no concerns.

The complaint comes after ATI, which is undergoing a restructuring, notified about 50 staff – or approximately 10% of its workforce – that they were at risk of redundancy. It claims ATI's funding is at risk, citing 'privately raised concerns' from unnamed industry partners, while warning that Kyle has made clear that future government support is contingent on improved delivery and leadership change.

In a letter to Gurr this month, Kyle called for a switch in focus to defence and national security at ATI, as well as leadership changes. While the letter stated ATI should 'continue to receive the funding needed to implement reforms', it said its 'longer-term funding arrangement' could be reviewed next year.
The complaint claims there has been no internal or external accountability for how ATI funds have been used. It alleges there is an internal culture of 'fear, exclusion, and defensiveness'. It also alleges the board has not provided adequate oversight of a series of senior leadership departures under the chief executive, Jean Innes, nor of senior leadership appointments, and that ATI's credibility with 'staff, funders, partners, and the wider public has been significantly undermined', as shown by the letter of no confidence and Kyle's intervention.

The Guardian has also learned that ATI is shutting projects related to online safety, tackling the housing crisis and reducing health inequality as part of its restructuring, which is resulting in the closure or mothballing of multiple strands of research. The restructuring has triggered internal upheaval at ATI, with more than 90 staff sending a letter to the board last year warning that cost cuts were putting the organisation's reputation at risk.

Among the projects slated for closure are work on developing AI systems to detect online harms, producing AI tools that can help policymakers tackle issues such as inequality and affordability in the housing market and measuring the impact in health inequality of major policy decisions like lockdowns. Other projects expected to close include an AI-based analysis of how the government and media interact. A project looking at social bias in AI outcomes will also be dropped.

Projects being paused include a study into how AI might affect human rights and democracy, as well as research into creating a global approach to AI ethics.
A spokesperson for ATI said: 'We're shaping a new phase for the Turing, and this requires substantial organisational change to ensure we deliver on the promise and unique role of the UK's national institute for data science and AI. As we move forward, we're focused on delivering real-world impact across society's biggest challenges, including responding to the national need to double down on our work in defence, national security and sovereign capabilities.'

A Charity Commission spokesperson said the organisation could not confirm or deny whether it had received a complaint, in order to protect the identity of any whistleblowers.


The Guardian
Digital resurrection: fascination and fear over the rise of the deathbot
Rod Stewart had a few surprise guests at a recent concert in Charlotte, North Carolina. His old friend Ozzy Osbourne, the lead singer of Black Sabbath who died last month, was apparently beamed in from some kind of rock heaven, where he was reunited with other departed stars including Michael Jackson, Tina Turner and Bob Marley. The AI-generated images divided Stewart's fans. Some denounced them as disrespectful and distasteful; others found the tribute beautiful.

At about the same time, another AI controversy erupted when Jim Acosta, a former CNN White House correspondent, interviewed a digital recreation of Joaquin Oliver, who was killed at the age of 17 in a 2018 high school shooting in Florida. The avatar of the teenager was created by his parents, who said it was a blessing to hear his voice again.

In June, Alexis Ohanian, a co-founder of Reddit, posted on X an animation of his late mother hugging him when he was a child, created from a photograph. 'Damn, I wasn't ready for how this would feel. We didn't have a camcorder, so there's no video of me with my mom … This is how she hugged me. I've rewatched it 50 times,' he wrote.

These are just three illustrations of a growing phenomenon of 'digital resurrection' – creating images and bots of people who have died using photographs, videos, voice messages and other material. Companies offering to create 'griefbots' or 'deathbots' abound, and questions about exploitation, privacy and their impact on the grieving process are multiplying.

'It's vastly more technologically possible now because of large language models such as ChatGPT being easily available to the general public and very straightforward to use,' said Elaine Kasket, a London-based cyberpsychologist. 'And these large language models enable the creation of something that feels really plausible and realistic.
When someone dies, if there are enough digital remains – texts, emails, voice notes, images – it's possible to create something that feels very recognisable.'

Only a few years ago, the idea of 'virtual immortality' was futuristic, a techno-dream beyond the reach of ordinary people. Now, interactive avatars can be created relatively easily and cheaply, and demand looks set to grow.

A poll commissioned by the Christian thinktank Theos and carried out by YouGov in 2023 found that 14% of respondents agreed they would find comfort in interacting with a digital version of a loved one who had died. The younger the respondent, the more likely they were to be open to the idea of a deathbot.

The desire to preserve connections with dead loved ones is not new. In the past, bereaved people have retained precious personal items that help them feel close to the person they have lost. People pore over photos, watch videos, replay voice messages and listen to music that reminds them of the person. They often dream of the dead, or imagine they glimpse them across a room or in the street. A few even seek contact via seances.

'Human beings have been trying to relate to the dead ever since there were humans,' said Michael Cholbi, a professor of philosophy at the University of Edinburgh and the author of Grief: A Philosophical Guide. 'We have created monuments and memorials, preserved locks of hair, reread letters. Now the question is: does AI have anything to add?'

Louise Richardson, of the University of York's philosophy department and a co-investigator on a four-year project on grief, said bereaved people often sought to 'maintain a sense of connection and closeness' with a dead loved one by visiting their grave, talking to them or touching items that belonged to them. 'Deathbots can serve the same purpose, but they can also be disruptive to the grieving process,' she said.
'They can get in the way of recognising and accommodating what has been lost, because you can interact with a deathbot in an ongoing way.' For example, people often wonder what a dead loved one might have done or said in a specific situation. 'Now it feels like you are able to ask them.'

But deathbots may also provide 'sanitised, rosy' representations of a person, said Cholbi. For example, someone creating a deathbot of their late granny may choose not to include her casual racism or other unappealing aspects of her personality in material fed into an AI generator.

There is also a risk of creating a dependency in the living person, said Nathan Mladin, the author of AI and the Afterlife, a Theos report published last year. 'Digital necromancy is a deceptive experience. You think you're talking to a person when you're actually talking to a machine. Bereaved people can become dependent on a bot, rather than accepting and healing.'

The boom in digital clones of the dead began in the far east. In China, it can cost as little as 20 yuan (£2.20) to create a digital avatar of a loved one, but according to one estimate the market was worth 12bn yuan (£1.2bn) in 2022 and was expected to quadruple by 2025. More advanced, interactive avatars that move and converse with a client can cost thousands of pounds. Fu Shou Yuan International Group, a major funeral operator, has said it is 'possible for the dead to 'come back to life' in the virtual world'. According to the China Funeral Association, the cost is about 50,000 yuan per deceased person.

The exploitation of grief for private profit is a risk, according to Cholbi, although he pointed to a long history of mis-selling and upselling in the funeral business. Kasket said another pitfall was privacy and rights to digital remains. 'A person who's dead has no opportunity to consent, no right of reply and no control.'
The fraudulent use of digital material to create convincing avatars for financial gain was another concern, she added. Some people have already begun stipulating in their wills that they do not want their digital material to be used after their death.

Interactive avatars are not just for the dead. Abba Voyage, a show that features digital versions of the four members of the Swedish pop group performing in their heyday, has been a runaway success, making about £1.6m each week. Audiences thrill – and sing along – to the exhilarating experience while the band's members, now aged between 75 and 80, put their feet up at home.

More soberly, the UK's National Holocaust Centre and Museum launched a project in 2016 to capture the voices and images of Holocaust survivors to create interactive avatars capable of answering questions about their experiences in the Nazi death camps long into the future.

According to Cholbi, there is an element of 'AI hype' around deathbots. 'I don't doubt that some people are interested in this, and I think it could have some interesting therapeutic applications. It could be something that people haul out periodically – I can imagine they bring out the posthumous avatar of a deceased relative at Christmas dinner or on their birthday.

'But I doubt that people will try to sustain their relationships with the dead through this technology for very long. At some point, I think most of us reconcile ourselves with the fact of death, the fact that the person is dead.

'This isn't to say that some people might really dive into this, but it does seem to be a case where maybe the prospects are not as promising as some of the commercial investors might hope.'

For Mladin, the deathbot industry raises profound questions for ethicists and theologians.
The interest in digital resurrection may be a consequence of 'traditional religious belief fading, but those deeper longings for transcendence, for life after death, for the permanence of love are redirected towards technological solutions,' he said. 'This is an expression of peak modernity, a belief that technology will conquer death and will give us life everlasting. It's symptomatic of the kind of culture we inhabit now.'

Kasket said: 'There's no question in my mind that some people create these kinds of phenomena and utilise them in ways that they find helpful. But what I'm concerned about is the way various services selling these kinds of things are pathologising grief.

'If we lose the ability to cope with grief, or convince ourselves that we're unable to deal with it, we are rendered truly psychologically brittle. It is not a pathology or a disease or a problem for technology to solve. Grief and loss are part of normal human experience.'


Reuters
China wants US to relax AI chip-export controls for trade deal, FT reports
Aug 10 (Reuters) - China wants the United States to ease export controls on chips critical for artificial intelligence as part of a trade deal before a possible summit between Presidents Donald Trump and Xi Jinping, the Financial Times reported on Sunday.

Chinese officials have told experts in Washington that Beijing wants the Trump administration to relax export restrictions on high-bandwidth memory chips, the newspaper reported, citing unnamed people familiar with the matter. The White House, State Department and China's foreign ministry did not immediately respond to requests for comment on the report.

HBM chips, which help perform data-intensive AI tasks quickly, are closely watched by investors due to their use alongside AI graphics processors, particularly Nvidia's (NVDA.O). The FT said China is concerned because the U.S. HBM controls hamper the ability of Chinese companies such as Huawei to develop their own AI chips.

Successive U.S. administrations have curbed exports of advanced chips to China, looking to stymie Beijing's AI and defence development. While this has impacted U.S. firms' ability to fully address booming demand from China, the country remains one of the world's largest semiconductor markets and an important revenue driver for American chipmakers.