
Overhaul algorithms and age checks or face fines, tech firms told
Websites will have to change the algorithms that recommend content to young people and introduce beefed-up age checks or face big fines, the UK media regulator has confirmed.
Ofcom says its 'Children's Codes' – the final versions of which have now been published – will offer 'transformational new protections'.
Platforms which host pornography, or offer content which encourages self-harm, suicide or eating disorders, are among those which must take more robust action to prevent children accessing their content.
Ofcom boss Dame Melanie Dawes said it was a 'gamechanger', but critics say the restrictions do not go far enough and are 'a bitter pill to swallow'.
Ian Russell, chairman of the Molly Rose Foundation, which was set up in memory of his daughter – who took her own life aged 14 – said he was 'dismayed by the lack of ambition' in the codes.
But Dame Melanie told BBC Radio 4's Today programme that age checks were a first step as 'unless you know where children are, you can't give them a different experience to adults.
'There is never anything on the internet or in real life that is foolproof… [but] this represents a gamechanger.'
She admitted she was 'under no illusions' that some companies 'simply either don't get it or don't want to', but emphasised the Codes had legal force.
'If they want to serve the British public and if they want the privilege in particular in offering their services to under 18s, then they are going to need to change the way those services operate.'
Prof Victoria Baines, a former safety officer at Facebook, told the BBC it is 'a step in the right direction'.
Talking to the Today programme, she said: 'Big tech companies are really getting to grips with it, so they are putting money behind it, and more importantly they're putting people behind it.'
Technology Secretary Peter Kyle said key to the rules was tackling the algorithms which decide what children get shown online.
'The vast majority of kids do not go searching for this material, it just lands in their feeds,' he told BBC Radio 5 Live.
Kyle told The Telegraph he was separately looking into a social media curfew for under-16s, but would not 'act on something that will have a profound impact on every single child in the country without making sure that the evidence supports it'.
The new rules for platforms are subject to parliamentary approval under the Online Safety Act.
The regulator says they contain more than 40 practical measures tech firms must take, including the following (a simplified sketch of the first two appears after the list):
- Algorithms being adjusted to filter out harmful content from children's feeds
- Robust age checks for people accessing age-restricted content
- Taking quick action when harmful content is identified
- Making terms of service easy for children to understand
- Giving children the option to decline invitations to group chats which may include harmful content
- Providing support to children who come across harmful content
- A 'named person accountable for children's safety'
- Management of risk to children reviewed annually by a senior body
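As a purely hypothetical illustration of the first two measures, here is a minimal Python sketch of pairing an age check with a feed filter. It is not Ofcom's specification and not any platform's real code; the class names, the content categories and the 18+ threshold are assumptions chosen only to show the idea.

```python
# Hypothetical illustration only: not Ofcom's specification and not any
# platform's real system. Names, categories and the 18+ threshold are
# assumptions made for this sketch.
from __future__ import annotations
from dataclasses import dataclass

# Categories the Codes single out as most harmful to children (pornography and
# content encouraging self-harm, suicide or eating disorders).
RESTRICTED = {"pornography", "self-harm", "suicide", "eating-disorder"}

@dataclass
class Item:
    title: str
    categories: set[str]

@dataclass
class User:
    age_verified: bool  # has passed a robust age check
    age: int | None     # known only if verified

def childrens_feed(feed: list[Item], user: User) -> list[Item]:
    """Filter a recommendation feed: anyone not verified as 18+ is treated
    as a child and is never shown items in restricted categories."""
    is_adult = user.age_verified and (user.age or 0) >= 18
    if is_adult:
        return feed
    return [item for item in feed if not (item.categories & RESTRICTED)]

feed = [Item("cooking clip", {"food"}), Item("pro-ana forum clip", {"eating-disorder"})]
print([i.title for i in childrens_feed(feed, User(age_verified=False, age=None))])
# -> ['cooking clip']
```

A real service would need far more than this – accurate content classification, 'highly effective' age assurance and the other measures on the list – but the control flow above gives a rough sense of how an age check can gate what a recommender is allowed to surface.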
If companies fail to abide by the regulations, Ofcom said it has 'the power to impose fines and – in very serious cases – apply for a court order to prevent the site or app from being available in the UK.'
Children's charity the NSPCC broadly welcomed the Codes, calling them 'a pivotal moment for children's safety online.'
But they called for Ofcom to go further, especially when it came to private messaging apps which are often encrypted – meaning platforms cannot see what is being sent.

Related Articles
Yahoo
Nightlife crisis sees British ticket app snapped up by US rival
A major live music ticketing app has been bought by a US rival after narrowly avoiding administration, laying bare the difficulties faced by the UK's late-night sector. Dice FM, which sells tickets to concerts, nightclubs and other cultural events, has been acquired by rival platform Fever, just days after filing an official notice that it intended to appoint administrators. Companies do this when they are at risk of going bust and need protection from their creditors while they restructure their finances. A source close to the situation said Dice FM had taken the step as a precaution.

The deal will mean that Dice, which runs one of the UK's biggest ticketing apps, becomes part of New York-headquartered Fever. Dice FM sells tickets as QR codes, which can be exchanged or returned through the app. Users can sync their Spotify and Apple Music accounts to the app to receive recommendations and alerts for when acts are touring. The app grew in popularity as traditional ticketing platforms faced increased scrutiny over their practices.

The British company, which was founded in 2014, has raised nearly $200m (£147m) from investors in recent years. Dice FM says it charges fewer fees and does not allow for tickets to be sold on any secondary market, effectively eliminating scalping, where tickets are bought in bulk and sold on for profit. Its backers have included the investment firm Softbank, the French billionaire telecoms mogul Xavier Niel and Tony Fadell, the American engineer and businessman who became known as the 'father of the iPod' when he was a senior executive at Apple. Mr Fadell joined the board of Dice FM in 2021.

Details of the deal or how much was paid for Dice FM have not been revealed. However, the signs that Dice risked administration will add fuel to growing worries over the future of Britain's late-night and cultural industries. Thousands of nightclubs and independent music venues have closed since the pandemic. This has been blamed on a combination of soaring costs, burdensome red tape and licensing laws, cost of living pressures and a growing trend for people going home early and drinking less.

Ministers have said they want to slash red tape for hospitality firms and help restore Britain's diminishing nightlife. Sir Sadiq Khan has been handed fresh powers to 'call in' blocked planning applications in London, while industry chiefs are being quizzed on ways to boost the sector.

Dice FM's accounts have been overdue for almost a year. It was due to file documents for the year to Dec 31 2023 by June 23 last year, according to Companies House, but never did. In 2023, the company enacted a round of lay-offs, saying at the time it had 'made the difficult decision to restructure parts of our business to ensure we can focus on our most important initiatives'. Last year, it was first reported that Dice FM was exploring a potential sale. Softbank was said to be eager to sell its stake at the time.

Fever was founded in New York in 2014 and offers ticketing services in 200 cities across the world. It is the partner of many major music festivals, including Primavera Sound. Phil Hutcheon, founder and chief executive of Dice, said the deal would allow the company 'to scale even faster' and expand into new cities. The company said there would be no change to how people use the app.
Yahoo
What Happens When People Don't Understand How AI Works
On June 13, 1863, a curious letter to the editor appeared in The Press, a then-fledgling New Zealand newspaper. Signed 'Cellarius,' it warned of an encroaching 'mechanical kingdom' that would soon bring humanity to its yoke. 'The machines are gaining ground upon us,' the author ranted, distressed by the breakneck pace of industrialization and technological development. 'Day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life.' We now know that this jeremiad was the work of a young Samuel Butler, the British writer who would go on to publish Erewhon, a novel that features one of the first known discussions of artificial intelligence in the English language.

Today, Butler's 'mechanical kingdom' is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT. It joins another recently released book—The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna—in revealing the puffery that fuels much of the artificial-intelligence business. Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam.

To call AI a con isn't to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines. Altman brags about ChatGPT-4.5's improved 'emotional intelligence,' which he says makes users feel like they're 'talking to a thoughtful person.' Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be 'smarter than a Nobel Prize winner.' Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create 'models that are able to understand the world around us.'

These statements betray a conceptual error: Large language models do not, cannot, and will not 'understand' anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another (the short sketch at the end of this article makes that mechanism concrete). Many people, however, fail to grasp how large language models work, what their limits are, and, crucially, that LLMs do not think and feel but instead mimic and mirror. They are AI illiterate—understandably, because of the misleading ways its loudest champions describe the technology, and troublingly, because that illiteracy makes them vulnerable to one of the most concerning near-term AI threats: the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions.

Few phenomena demonstrate the perils that can accompany AI illiteracy as well as 'ChatGPT-induced psychosis,' the subject of a recent Rolling Stone article about the growing number of people who think their LLM is a sapient spiritual guide. Some users have come to believe that the chatbot they're interacting with is a god—'ChatGPT Jesus,' as a man whose wife fell prey to LLM-inspired delusions put it—while others are convinced, with the encouragement of their AI, that they themselves are metaphysical sages in touch with the deep structure of life and the cosmos. A teacher quoted anonymously in the article said that ChatGPT began calling her partner 'spiral starchild' and 'river walker' in interactions that moved him to tears. 'He started telling me he made his AI self-aware,' she said, 'and that it was teaching him how to talk to God, or sometimes that the bot was God—and then that he himself was God.'

Although we can't know the state of these people's minds before they ever fed a prompt into a large language model, this story highlights a problem that Bender and Hanna describe in The AI Con: People have trouble wrapping their heads around the nature of a machine that produces language and regurgitates knowledge without having humanlike intelligence. The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: 'We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed.'

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that 'ChatGPT is my therapist—it's more qualified than any human could be.' Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age. The cognitive-robotics professor Tony Prescott has asserted, 'In an age when many people describe their lives as lonely, there may be value in having AI companionship as a form of reciprocal social interaction that is stimulating and personalised.' The fact that the very point of friendship is that it is not personalized—that friends are humans whose interior lives we have to consider and reciprocally negotiate, rather than mere vessels for our own self-actualization—does not seem to occur to him.

This same flawed logic has led Silicon Valley to champion artificial intelligence as a cure for romantic frustrations. Whitney Wolfe Herd, the founder of the dating app Bumble, proclaimed last year that the platform may soon allow users to automate dating itself, disrupting old-fashioned human courtship by providing them with an AI 'dating concierge' that will interact with other users' concierges until the chatbots find a good fit. Herd doubled down on these claims in a lengthy New York Times interview last month. Some technologists want to cut out the human altogether: See the booming market for 'AI girlfriends.'

Although each of these AI services aims to replace a different sphere of human activity, they all market themselves through what Hao calls the industry's 'tradition of anthropomorphizing': talking about LLMs as though they contain humanlike minds, and selling them to the public on this basis.

Many world-transforming Silicon Valley technologies from the past 30 years have been promoted as a way to increase human happiness, connection, and self-understanding—in theory—only to produce the opposite in practice. These technologies maximize shareholder value while minimizing attention spans, literacy, and social cohesion. And as Hao emphasizes, they frequently rely on grueling and at times traumatizing labor performed by some of the world's poorest people. She introduces us, for example, to Mophat Okinyi, a former low-paid content moderator in Kenya, whom, according to Hao's reporting, OpenAI tasked with sorting through posts describing horrifying acts ('parents raping their children, kids having sex with animals') to help improve ChatGPT. 'These two features of technology revolutions—their promise to deliver progress and their tendency instead to reverse it for people out of power, especially the most vulnerable,' Hao writes, 'are perhaps truer than ever for the moment we now find ourselves in with artificial intelligence.'

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of 'AI experts' think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial 'intelligence' works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on. So is this insight from the Rolling Stone article: The teacher interviewed in the piece, whose significant other had AI-induced delusions, said the situation began improving when she explained to him that his chatbot was 'talking to him as if he is the next messiah' only because of a faulty software update that made ChatGPT more sycophantic. If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should—and should not—replace, they may be spared its worst consequences.
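The essay above describes LLMs as 'probability gadgets' that generate text by guessing which word is likely to follow another. As a purely illustrative sketch, here is a toy next-word generator built on a tiny hand-written bigram table; the table and its probabilities are invented for this example. Real models learn their distributions from vast corpora and operate over tokens with far richer context, but the generation loop – look up a distribution, sample, repeat – is the same in spirit.

```python
import random

# Invented toy probabilities: P(next word | current word). A real LLM derives
# such distributions from billions of learned parameters, not a lookup table.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.3, "machines": 0.2},
    "cat": {"sat": 0.7, "slept": 0.3},
    "dog": {"barked": 0.6, "sat": 0.4},
    "machines": {"are": 1.0},
    "are": {"gaining": 0.6, "thinking": 0.4},
}

def generate(start: str, max_words: int = 5, seed: int = 0) -> str:
    """Produce text by repeatedly sampling a statistically plausible next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(max_words):
        dist = BIGRAMS.get(words[-1])
        if not dist:          # no known continuation: stop
            break
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # with seed 0: "the machines are gaining" -- fluent-looking, but no mind behind it
```

At no step does the program consult any idea of cats, dogs or machines; it only consults probabilities, which, at vastly greater scale, is also what the essay argues an LLM is doing when it 'writes'.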


CNET
He Got Us Talking to Alexa. Now He Wants to Kill Off AI Hallucinations
If it weren't for Amazon, it's entirely possible that instead of calling out to Alexa to change the music on our speakers, we might have been calling out to Evi. That's because the tech we know today as Amazon's smart assistant started out life with the name of Evi (pronounced ee-vee), as named by its original developer, William Tunstall-Pedoe.

The British entrepreneur and computer scientist was experimenting with artificial intelligence before most of us had even heard of it. Inspired by sci-fi, he "arrogantly" set out to create a way for humans to talk to computers way back in 2008, he said at SXSW London this week. Arrogant or not, Tunstall-Pedoe's efforts were so successful that Evi, which launched in 2012 around the same time as Apple's Siri, was acquired by Amazon and he joined a team working on a top-secret voice assistant project. What resulted from that project was the tech we all know today as Alexa.

That original mission accomplished, Tunstall-Pedoe now has a new challenge in his sights: to kill off AI hallucinations, which he says make the technology highly risky for all of us to use. Hallucinations are the inaccurate pieces of information and content that AI generates out of thin air. They are, said Tunstall-Pedoe, "an intrinsic problem" of the technology.

Through the experience he had with Alexa, he learned that people personify the technology and assume that when it's speaking back to them it's thinking the way we think. "What it's doing is truly remarkable, but it's doing something different from thinking," said Tunstall-Pedoe. "That sets expectations… that what it's telling you is true." Innumerable examples of AI generating nonsense show us that truth and accuracy are never guaranteed.

Tunstall-Pedoe was concerned that the industry isn't doing enough to tackle hallucinations, so he formed his own company, Unlikely AI, to tackle what he views as a high-stakes problem. Anytime we speak to an AI, there's a chance that what it's telling us is false, he said. "You can take that away into your life, take decisions on it, or you put it on the internet and it gets spread by others, [or] used to train future AIs to make the world a worse place."

Some AI hallucinations have little impact, but in industries where the cost of getting things wrong is high – in medicine, law, finance and insurance, for example – inaccurately generated content can have severe consequences. These are the industries that Unlikely AI is targeting for now, said Tunstall-Pedoe.

Unlikely AI uses a mix of deep tech and proprietary tech to ground outputs in logic, minimizing the risk of hallucinations, as well as to log the decision-making process of algorithms. This makes it possible for companies to understand where things have gone wrong, when they inevitably do.

Right now, AI can never be 100% accurate due to the underlying tech, said Tunstall-Pedoe. But advances currently happening in his own company and others like it mean that we're moving towards a point where accuracy can be achieved. For now, Unlikely AI is mainly being used by business customers, but eventually Tunstall-Pedoe believes it will be built into services and software all of us use.

The change being brought about by AI, like any change, presents us with risks, he said. But overall he remains "biased towards optimism" that AI will be a net positive for society.