Call for ban on AI apps creating naked images of children

Yahoo · 28 April 2025

The children's commissioner for England is calling on the government to ban apps which use artificial intelligence (AI) to create sexually explicit images of children.
Dame Rachel de Souza said a total ban was needed on apps which allow "nudification" - where photos of real people are edited by AI to make them appear naked - or can be used to create sexually explicit deepfake images of children.
She said the government was allowing such apps to "go unchecked with extreme real-world consequences".
A government spokesperson said child sexual abuse material was illegal and that there were plans for further offences for creating, possessing or distributing AI tools designed to create such content.
Deepfakes are videos, pictures or audio clips made with AI to look or sound real.
In a report published on Monday, Dame Rachel said the technology was disproportionately targeting girls and young women, with many bespoke apps appearing to work only on female bodies.
Girls are actively avoiding posting images or engaging online to reduce the risk of being targeted, according to the report, "in the same way that girls follow other rules to keep themselves safe in the offline world - like not walking home alone at night".
Children feared "a stranger, a classmate, or even a friend" could target them using technologies which could be found on popular search and social media platforms.
Dame Rachel said: "The evolution of these tools is happening at such scale and speed that it can be overwhelming to try and get a grip on the danger they present.
"We cannot sit back and allow these bespoke AI apps to have such a dangerous hold over children's lives."
Dame Rachel also called for the government to:
• impose legal obligations on developers of generative AI tools to identify and address the risks their products pose to children, and take action to mitigate those risks
• set up a systemic process to remove sexually explicit deepfake images of children from the internet
• recognise deepfake sexual abuse as a form of violence against women and girls
Paul Whiteman, general secretary of school leaders' union NAHT, said members shared the commissioner's concerns.
He said: "This is an area that urgently needs to be reviewed as the technology risks outpacing the law and education around it."
It is illegal in England and Wales under the Online Safety Act to share or threaten to share explicit deepfake images.
The government announced in February laws to tackle the threat of child sexual abuse images being generated by AI, which include making it illegal to possess, create, or distribute AI tools designed to create such material.
It said at the time that the Internet Watch Foundation - a UK-based charity partly funded by tech firms - had confirmed 245 reports of AI-generated child sexual abuse in 2024 compared with 51 in 2023, a 380% increase.
Media regulator Ofcom published the final version of its Children's Code on Friday, which puts legal requirements on platforms hosting pornography or content encouraging self-harm, suicide or eating disorders to take more action to prevent children accessing it.
Websites must introduce beefed-up age checks or face big fines, the regulator said.
Dame Rachel has criticised the code, saying it prioritises "business interests of technology companies over children's safety".
A government spokesperson said creating, possessing or distributing child sexual abuse material, including AI-generated images, is "abhorrent and illegal".
"Under the Online Safety Act platforms of all sizes now have to remove this kind of content, or they could face significant fines," they added.
"The UK is the first country in the world to introduce further AI child sexual abuse offences - making it illegal to possess, create or distribute AI tools designed to generate heinous child sex abuse material."
Deepfaked: 'They put my face on a porn video'
'I was deepfaked by my best friend'
AI-generated child sex abuse images targeted with new laws


Related Articles

What Happens When People Don't Understand How AI Works

Yahoo · 14 minutes ago

On June 13, 1863, a curious letter to the editor appeared in The Press, a then-fledgling New Zealand newspaper. Signed 'Cellarius,' it warned of an encroaching 'mechanical kingdom' that would soon bring humanity to its yoke. 'The machines are gaining ground upon us,' the author ranted, distressed by the breakneck pace of industrialization and technological development. 'Day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life.' We now know that this jeremiad was the work of a young Samuel Butler, the British writer who would go on to publish Erewhon, a novel that features one of the first known discussions of artificial intelligence in the English language.

Today, Butler's 'mechanical kingdom' is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT. It joins another recently released book—The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna—in revealing the puffery that fuels much of the artificial-intelligence business. Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam.

To call AI a con isn't to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines. Altman brags about ChatGPT-4.5's improved 'emotional intelligence,' which he says makes users feel like they're 'talking to a thoughtful person.' Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be 'smarter than a Nobel Prize winner.' Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create 'models that are able to understand the world around us.'

[Read: What 'Silicon Valley' knew about tech-bro paternalism]

These statements betray a conceptual error: Large language models do not, cannot, and will not 'understand' anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another. Many people, however, fail to grasp how large language models work, what their limits are, and, crucially, that LLMs do not think and feel but instead mimic and mirror.
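As a minimal sketch of that guessing process (a toy bigram model, assumed here purely for illustration rather than anything described in the essay), next-word generation looks like this in Python. A real LLM replaces the count table with a neural network over subword tokens, but generation is the same loop: predict a distribution, sample, append, repeat.

```python
import random
from collections import Counter, defaultdict

# Toy bigram model for illustration only; real LLMs use neural networks
# over subword tokens, but the sample-the-likely-successor loop is the
# same basic mechanism.
corpus = (
    "the machines are gaining ground upon us day by day "
    "we are becoming more subservient to the machines"
).split()

# Count which word follows which in the corpus.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by sampling each next word from the observed counts."""
    words = [start]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:  # no observed successor: stop
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# e.g. "the machines are gaining ground upon us day"
```

The output can look fluent, yet nothing in the loop models meaning, which is the distinction the essay is drawing.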
Many people are AI illiterate—understandably, because of the misleading ways its loudest champions describe the technology, and troublingly, because that illiteracy makes them vulnerable to one of the most concerning near-term AI threats: the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions.

Few phenomena demonstrate the perils that can accompany AI illiteracy as well as 'ChatGPT induced psychosis,' the subject of a recent Rolling Stone article about the growing number of people who think their LLM is a sapient spiritual guide. Some users have come to believe that the chatbot they're interacting with is a god—'ChatGPT Jesus,' as a man whose wife fell prey to LLM-inspired delusions put it—while others are convinced, with the encouragement of their AI, that they themselves are metaphysical sages in touch with the deep structure of life and the cosmos. A teacher quoted anonymously in the article said that ChatGPT began calling her partner 'spiral starchild' and 'river walker' in interactions that moved him to tears. 'He started telling me he made his AI self-aware,' she said, 'and that it was teaching him how to talk to God, or sometimes that the bot was God—and then that he himself was God.'

Although we can't know the state of these people's minds before they ever fed a prompt into a large language model, this story highlights a problem that Bender and Hanna describe in The AI Con: People have trouble wrapping their heads around the nature of a machine that produces language and regurgitates knowledge without having humanlike intelligence. The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: 'We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed.'

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that 'ChatGPT is my therapist—it's more qualified than any human could be.' Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age. The cognitive-robotics professor Tony Prescott has asserted, 'In an age when many people describe their lives as lonely, there may be value in having AI companionship as a form of reciprocal social interaction that is stimulating and personalised.' The fact that the very point of friendship is that it is not personalized—that friends are humans whose interior lives we have to consider and reciprocally negotiate, rather than mere vessels for our own self-actualization—does not seem to occur to him.

[Read: Life really is better without the internet]

This same flawed logic has led Silicon Valley to champion artificial intelligence as a cure for romantic frustrations.
Whitney Wolfe Herd, the founder of the dating app Bumble, proclaimed last year that the platform may soon allow users to automate dating itself, disrupting old-fashioned human courtship by providing them with an AI 'dating concierge' that will interact with other users' concierges until the chatbots find a good fit. Herd doubled down on these claims in a lengthy New York Times interview last month. Some technologists want to cut out the human altogether: See the booming market for 'AI girlfriends.' Although each of these AI services aims to replace a different sphere of human activity, they all market themselves through what Hao calls the industry's 'tradition of anthropomorphizing': talking about LLMs as though they contain humanlike minds, and selling them to the public on this basis.

Many world-transforming Silicon Valley technologies from the past 30 years have been promoted as a way to increase human happiness, connection, and self-understanding—in theory—only to produce the opposite in practice. These technologies maximize shareholder value while minimizing attention spans, literacy, and social cohesion. And as Hao emphasizes, they frequently rely on grueling and at times traumatizing labor performed by some of the world's poorest people. She introduces us, for example, to Mophat Okinyi, a former low-paid content moderator in Kenya, whom, according to Hao's reporting, OpenAI tasked with sorting through posts describing horrifying acts ('parents raping their children, kids having sex with animals') to help improve ChatGPT. 'These two features of technology revolutions—their promise to deliver progress and their tendency instead to reverse it for people out of power, especially the most vulnerable,' Hao writes, 'are perhaps truer than ever for the moment we now find ourselves in with artificial intelligence.'

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of 'AI experts' think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial 'intelligence' works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on. So is this insight from the Rolling Stone article: The teacher interviewed in the piece, whose significant other had AI-induced delusions, said the situation began improving when she explained to him that his chatbot was 'talking to him as if he is the next messiah' only because of a faulty software update that made ChatGPT more sycophantic. If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should—and should not—replace, they may be spared their worst consequences.

Article originally published at The Atlantic

Xcel president: Minnesota can meet data center energy demands and 2040 carbon-free mandate

Yahoo · 14 minutes ago

[Image: An aerial view of the Prairie Island Nuclear Power Plant, near Red Wing, Minnesota along the Mississippi River. The two pressurized water reactors produce approximately 1,100 megawatts. (BanksPhotos via Getty Images)]

The intra-DFL rift between labor unions whose members build data centers and progressives skeptical of corporate giveaways was on full display at the Capitol last week as lawmakers considered extending generous tax breaks for data centers' purchases of computers, software and energy equipment.

It was also evident in a May 28 energy webinar featuring top utility executives, state officials and representatives from regional labor, agriculture and environmental groups. The conversation came as Minnesota utilities weigh proposals for thousands of megawatts of new data center capacity, representing new electric consumption equal to millions of homes.

As other construction sectors falter amid high interest rates and sluggish demand, union laborers and tradespeople see an opportunity in building data centers and the power plants to run them. 'We need to be involved in the next iteration of energy development here in Minnesota,' said Joe Fowler, business manager for Laborers International Union of North America Local 563. To labor, that means building not only the wind, solar and battery plants that will form the backbone of Minnesota's future electric grid, but large industrial facilities to soak up the power they produce.

Others in the left-of-center coalition say unfettered data center growth could jeopardize progress toward the state's statutory target of 100% carbon-free electricity by 2040 while threatening grid reliability and raising costs for ordinary utility customers. 'We want to bring on large users like data centers, but not to the exclusion of others,' said Margaret Cherne-Hendrick, CEO of St. Paul-based Fresh Energy, a policy and communications shop focused on clean energy.

Though the data center boom was the elephant in the room, the conversation touched on some of the broader challenges facing Minnesota electric utilities, workers and customers as a dysfunctional state legislative session limps to a close and federal policymakers get closer to passing a budget bill that cuts taxes for the rich and Medicaid for the poor while, experts say, raising power prices for everyone.

Developers have proposed nearly 9,000 megawatts of new data center capacity across Xcel Energy's eight-state territory, CEO Bob Frenzel said in October, or almost 9 million homes' worth of electricity consumption. Data centers alone account for about half of Xcel's expected 5% annual sales growth through 2029. Xcel expects Minnesota's share of that growth to be about 1,300 megawatts over the next seven to eight years, said Ryan Long, Xcel's president for Minnesota and the Dakotas. That's up from 60 megawatts of total capacity as of early last year.

'The curve is up and to the right,' Long told Energy Futures Initiative Foundation CEO and former U.S. Energy Secretary Ernest Moniz on the webinar. 'It's shifted from us trying to attract (data center) companies to Minnesota to them knocking on our doors.'

Smaller utilities like Dakota Electric Association are also gearing up for massive amounts of data center development, CEO Ryan Hentges said later on the webinar. Projects proposed for Dakota Electric territory include a 12-building, 340-acre Farmington campus that residents are suing to stop. But the demand won't hit all at once, Hentges said.
'One gigawatt is not all going to happen next year,' he said. 'It's going to happen over time, and that gives us more time to plan.'

Can Minnesota absorb that growth and still hit its clean-energy targets? The short answer is yes, according to Long. Even with the influx predicted over the next five years, Xcel is on track to shut down its three remaining Minnesota coal units by 2030 and meet the interim state goal of 80% clean power by 2030, he said. 'These are aging, somewhat inefficient plants and we are blessed to live in a region that has excellent renewable resources,' he said.

Xcel could partner on future 'clean firm' power projects with big tech companies, which have their own sustainability goals, Long added. Google, utility NV Energy and power developer Fervo Energy recently announced a geothermal power partnership in Nevada, while Meta, Amazon and Microsoft have all inked splashy nuclear deals. Nuclear and geothermal both produce carbon-free power without relying on variable weather conditions.

Those partnerships could eventually help wean Minnesota off natural gas power despite uncertainty around federal support for cleaner technologies, said Sydnie Lieb, assistant commissioner with the Minnesota Department of Commerce. 'In the absence of the federal government continuing to push development of clean firm resources, we are thinking about what the state can do,' she said.

Big data centers could also cover at least some of the cost of new transmission infrastructure needed to serve them, easing the burden on existing ratepayers, Hentges added. But the state needs to ensure Minnesota data centers fully decarbonize their operations over time, including onsite backup generators that today generally run on natural gas or diesel, Cherne-Hendrick said. It also needs to push data centers to pay into state-administered equity programs facing sharp federal funding cuts, like the Low Income Home Energy Assistance Program, she said.

Will federal intervention keep Minnesota's coal plants running past their planned retirements? Also no, Long said, despite the Trump administration's claims to the contrary. Last month, Trump invoked an obscure law to order a Michigan coal plant to operate past its planned May 31 retirement date, and Moniz, the former U.S. Energy Secretary, asked whether Trump could do the same in another Midwestern state committed to transitioning off coal. Xcel is 'obviously paying a lot of attention' to the issue but isn't changing its coal retirement plans, which have been in the works for years and won't affect system reliability, Long said. He allowed that Minnesota will need gas power plants for many years, though they'll increasingly serve as backup for renewables, nuclear and long-duration batteries. Xcel plans to build a new 'hydrogen-capable' gas plant in southwestern Minnesota that will likely operate past 2040.

Building new wind, solar and battery plants has been cheaper than running existing coal plants for years, in part because renewable power requires far less labor to operate and maintain. Fowler said that's a challenge for unions like LIUNA, whose work in the power sector increasingly focuses on facilities that more or less run themselves. And LIUNA members are uneasy about the future. 'Our job is to work ourselves out of a job — we build something and then move onto the next project,' he said.

A February settlement between Xcel and Minnesota's utility regulator pushes the utility to expand training opportunities for underrepresented populations and work with labor on workforce transitions at retiring power plants.
The training partnership has already produced around 100 graduates who can now work on new power plant or data center construction projects, Fowler said. 'There are real benefits the state will see from … having citizens who feel like their job is waiting out there,' he said.

Wind and solar development is a double-edged sword for rural communities, where income-earning opportunities for landowners clash with concerns about removing prime farmland from production, said Anne Schwagerl, vice president with Minnesota Farmers Union. To demystify the issue and strengthen members' negotiating position with power developers, Minnesota Farmers Union plans to update its five-year-old 'farmers' guide' to renewable energy.

But the best way to ensure durable rural support for clean energy is to give farm communities more skin in the game, Schwagerl said. Right now, for example, conglomerates barge most of the fertilizer used on Minnesota farms up the Mississippi River from massive factories on the Gulf Coast. Minnesota Farmers Union wants to see more local production, ideally led by rural cooperatives using excess wind power with support from federal and state green fertilizer grants, Schwagerl said.

'Our thinking is that the green transition is happening,' she said. 'We're seeing it in agriculture as in energy, and it would be a big bummer to us if it ended up being owned by the same multinational megacorporations.'

UBTECH Teams Up with HKU to Advance AI Education across the Greater Bay Area

Yahoo · 15 minutes ago

HONG KONG, June 6, 2025 /PRNewswire/ -- On June 6, UBTECH Education and the Centre for Information Technology in Education (CITE), part of the Faculty of Education at the University of Hong Kong (HKU), hosted the official launch of the Guangdong-Hong Kong-Macao Greater Bay Area (GBA) Artificial Intelligence Education Development Initiative at HKU. Held under the theme "AI Empowers Future Education, Technology Drives Innovation in the Greater Bay Area," the event highlighted the region's commitment to integrating AI into next-generation educational systems. The inauguration of the Artificial Intelligence Education & Teacher Development Center was also held in conjunction with the event.

Through collaboration with CITE, UBTECH Education is working to build a pipeline of AI-competent educators. The joint initiative focuses on cultivating AI fluency among teachers and supporting talent development in STEM and innovation through both local and global professional development programs for GBA-based educators.

Building an AI Education Infrastructure in the GBA
Launch of the Greater Bay Area AI School Alliance

In recent years, the Hong Kong Special Administrative Region (HKSAR) Government has prioritized artificial intelligence in its development roadmap. The Hong Kong Education Bureau has introduced a dedicated AI Curriculum Module in middle schools, mandating 10 to 14 hours of AI education for students in Secondary 1 through 3 within the ICT curriculum. As part of this broader effort, the HKSAR Government's AI Education Initiative targets reaching 95% of the region's schools by 2025. To date, 82% of primary and secondary schools in Hong Kong have already integrated AI into their teaching programs.

In line with Hong Kong's educational policies, the UBTECH Education-CITE partnership is establishing a collaborative academic-industry platform for AI teaching content and educator training. Plans include building AI demonstration labs in Hong Kong's primary and secondary schools, with further expansion across Guangdong, Hong Kong, Macao—and ultimately into international markets—positioning Hong Kong as a global reference point in AI education.

Both organizations will work together to establish AI education and research centers across Hong Kong, with the broader goal of creating a global AI talent certification network that spans more than 100 countries and regions, covering both K-12 and vocational learning pathways. This initiative is designed to support educator professional growth and drive improvements in AI education quality throughout the GBA.

Embodied AI as a Catalyst for STEM and Innovation Learning Debuts in Hong Kong

Tien Kung—the world's first humanoid robot to complete a half-marathon—was showcased at the event. Serving as powerful symbols of embodied AI, humanoid robots are reshaping the future of education by enabling new forms of experiential and research-based learning.

In partnership with the Beijing Humanoid Robot Innovation Center, UBTECH Education is advancing the deployment of embodied intelligence technologies for educational and research applications through an integrated suite of solutions. Anchored by the Tien Kung humanoid robot platform, UBTECH has rolled out the "Scientific Research and Co-Creation Program," already adopted by Fudan University, Shanghai Jiao Tong University, Tianjin University and other top-level institutions, with over 100 units ordered. Tien Kung's debut at HKU marks a significant step toward broader adoption across Hong Kong's universities.
At the K-12 level, UBTECH Education is applying its humanoid robotics expertise to enhance public STEM education and innovation capabilities. These efforts are designed to accelerate AI curriculum integration and link scientific instruction with real-world applications. The partnership with CITE also marks the official launch of UBTECH's new instructional model, "Embodied Intelligence Empowering Science Education and Innovation," within Hong Kong's education system.

During the roundtable forum at the event, participants from HKU, industry leaders, and educators from schools across the GBA engaged in a strategic dialogue on the "Development and Internationalization of AI Education in the Greater Bay Area."

The alliance between UBTECH Education and CITE represents the first dual-track education-technology collaboration designed to build a robust ecosystem for AI education across Guangdong, Hong Kong, and Macao. The joint effort aims to position Hong Kong as a global leader in AI curriculum development and talent export. Leveraging the GBA as a strategic launchpad, the program seeks to build a transnational AI education and innovation network aligned with the Belt and Road Initiative and broader international efforts. Additionally, the program will support cross-border talent mobility, enhance workforce readiness in AI-related fields across the GBA, and contribute meaningfully to the advancement of Hong Kong's broader innovation and technology agenda.

CONTACT: Hua He
SOURCE UBTECH
