
Charity says government ban on plastic in wet wipes is urgent
A charity has called on the government to commit to a date for banning plastic in wet wipes, to help tackle the damage they are causing to the River Thames. Thames21 says wet wipes are not only creating artificial islands, harming wildlife and degrading water quality; they are reshaping the waterway itself. The charity said that although plans to ban the sale of plastic wet wipes were announced last year, progress had stalled following the general election. The Department for Environment, Food and Rural Affairs (Defra) has said the government will press ahead with a ban.
Liz Gyekye from Thames21 said: "Wet wipes are a massive problem, it's devastating. The principal challenge is that people flush the wet wipes down the toilets, then you get sewage overflows after heavy rain that chuck them into the river. They then destroy wildlife because it ingests these microplastics when they break down."

Ms Gyekye said the charity wanted the government to act "urgently". "We had the previous Conservative government last year commit to banning plastic in wet wipes, and now we're calling on this government to implement this ban," she said.

Asked why the public were still flushing wet wipes despite the obvious damage being caused, Ms Gyekye said: "I think the issue is over labels - some labels say they are 'flushable' - but there is no marine biodegradable standard out there - so they should all just go in the bin." She added: "Consumers need to do their part and dispose of their waste correctly, flushing down only the 3 Ps (pee, poo, and paper)."
The director of sustainability at the Port of London Authority (PLA), Grace Rawnsley, said the new Thames super sewer "should help" cope with flushed wet wipes, but said the ban on plastic in wet wipes was "key" to achieving a cleaner river. Volunteer Janice Bruce-Brande said that although the wet wipe island she was surveying was "soul destroying", she had noticed a possible improvement since the introduction of the super sewer. But she said it was still "so disheartening" to see the wet wipe problem.
'We will ban them'
In response to Thames21's calls, a Defra spokesperson told the BBC: "Plastic wet wipes clog up our sewers, pollute our waterways and harm our treasured wildlife. That is why the government will ban them. This is part of our wider plan to clean up our rivers. We have passed our landmark Water Act, introducing two-year prison sentences for polluting water bosses, and banning unfair million-pound bonuses."
Related Articles

Rhyl Journal
Ban on advertising and safeguard for child patients added to Assisted Dying Bill
The new parts to the Terminally Ill Adults (End of Life) Bill were voted in on Friday as a second day of debate on various amendments came to a close. It is expected the next major vote on the overall Bill could take place next Friday, which could see it either fall or pass through to the Lords.

Impassioned debate heard the Bill described by Conservative MP Kieran Mullan as a 'deeply consequential and highly contentious piece of legislation for our society'. He argued not enough time had been allocated for debate on such a divisive issue, but health minister Stephen Kinnock said there had been more than 90 hours of parliamentary time spent so far, and more than 500 amendments had been considered at committee stage earlier this year.

On Friday a majority of MPs approved a new clause, tabled by Labour MP Dame Meg Hillier, to ensure medics cannot raise the topic of assisted dying with under-18s. Her separate amendment to prevent health workers from bringing up the issue with adult patients before they have raised it was voted down.

The amendment on child patients was hailed as a 'first major Commons defeat' by opposition campaigners Care Not Killing, which welcomed 'MPs removing the ability of doctors to raise unprompted assisted suicide with children'. A group of Labour MPs opposed to the proposed legislation called it an '11th hour rejection of the claims made about the safety of this Bill' which 'proves that confidence is slipping away from it'. They also cautioned that MPs might not have a copy of the final Bill by the time they vote 'on this life and death issue' next week, as some outstanding amendments will still be being considered on Friday morning.

A ban on advertising assisted dying, should the Bill pass into law, has also been approved. An amendment by fellow Labour MP Paul Waugh to limit exceptions to that ban did not pass. He said the ban as it stands has 'unspecified exceptions, which could make the ban itself worthless', warning online harms from ads about assisted dying on TikTok 'could be a reality without the tighter safeguards in my amendment'.

A number of other amendments were passed, including a provision for assisted dying deaths not to be automatically referred to a coroner, and others around the regulation of substances for use in assisted dying.

Other issues debated included an amendment requiring the Health Secretary to publish an assessment of the availability, quality and distribution of palliative and end-of-life care one year after the Bill passes into law. Pledging her support for the amendment, which was tabled by Liberal Democrat Munira Wilson, Kim Leadbeater said MPs should not have to choose between supporting assisted dying or palliative care, as it is not an 'either/or' conversation for dying people. She said palliative care and assisted dying 'can and do work side by side to give terminally-ill patients the care and choice they deserve in their final days', and urged MPs to support 'all options available to terminally ill people'.

Ms Wilson's amendment is supported by Marie Curie, which said it is 'desperately needed as the end-of-life care system is in crisis, with huge gaps in services and a lack of NHS leadership on this vital part of our health and care system'. It is expected that amendment could be voted on next Friday.

One MP, who became emotional as she recalled the death of her husband, who she said had been 'in extreme pain' with terminal cancer, urged her colleagues to 'mind our language' after words like 'murder' were used.
Liberal Democrat MP Caroline Voaden, whose husband died of oesophageal cancer, said it is 'so wrong' to use such language. She said: 'This is about helping people die in a civilised way and helping their families not go through a horrendous experience of watching a loved one die in agony.'

The beginning of Friday's session saw MPs add a new opt-out clause to the Bill. The amendment, which means no person, including health and social care professionals, can be obliged to take part in assisted dying, had been debated and approved last month, but has now been formally added to the Bill.

The Bill passed its second reading stage by a majority of 55 in a historic vote in November which saw MPs support the principle of assisted dying.

Demonstrators both for and against a change in the law once again gathered outside Parliament to make their views known on the Bill. Sarah Wootton, chief executive of Dignity in Dying, which is in favour of a change in the law, said: 'Our country is closer than ever before to the safe, compassionate, and tightly regulated assisted dying law that so many people want, from all walks of life and every part of the country.'

But former MP Caroline Ansell, from Christian Action Research and Education (Care), which opposes assisted dying, urged parliamentarians to vote against the Bill. She said: 'It is irredeemably flawed in principle and in detail. Parliament should close the door to assisted suicide and focus on truly compassionate and life-affirming forms of support.'

As it stands, the proposed legislation would allow terminally-ill adults in England and Wales with fewer than six months to live to apply for an assisted death, subject to approval by two doctors and a panel featuring a social worker, senior legal figure and psychiatrist. MPs have a free vote on the Bill and any amendments, meaning they vote according to their conscience rather than along party lines.

South Wales Argus
Bowie challenges Tories to 'step up' against Miliband's 'eco-zealotry'
The Scottish Conservative MP criticised both Labour and the SNP over their opposition to new oil and gas developments in the North Sea. Accusing the UK Government of 'overseeing the wilful deindustrialisation of this nation', Mr Bowie hit out at the 'frankly dangerous eco-zealotry of Ed Miliband', the Energy Secretary.

Speaking at the Scottish Conservative conference at Murrayfield in Edinburgh, Mr Bowie told his party: 'We must step up. Britain needs us more than ever.' The Tory insisted: 'The future of Scotland and Britain is at stake, our country's security depends on a strong Conservative Party to stand up for what is right.'

He recalled how former US president Ronald Reagan had 'once said the first duty of government is to protect' – but added that 'on every front the SNP and Labour are failing to do that'.

Attacking both Labour and the SNP, Mr Bowie, who is also his party's shadow Scottish secretary, said: 'They haven't protected everyone's economic security, by raising taxes, or ripping away their winter fuel payment, even if they are now apparently going to hand it back to them.

'They haven't protected our energy security by insisting on no new oil and gas developments.'

The Conservative MP continued: 'We can all see what is happening in the world, there is more risk out there, we as a country are more vulnerable.

'That is why the decisions of this Labour Government are so gravely concerning. Their economic incompetence, coupled with their frightening ineptitude when it comes to our energy security, is making the United Kingdom more vulnerable.'

He attacked the Labour Government over its 'madcap drive to clean power by 2030', as he said ministers were 'actively accelerating the decline of our North Sea'. This, he said, was 'forcing us to become increasingly exposed to over-reliance on imports from overseas, imports that are shipped in diesel-chugging tankers across the Atlantic from America or from Norwegian wells'.

The Tory said the opposition to new oil and gas developments meant 'investment is drying up, work is being put on pause, companies are literally shutting up shop and jobs are being lost'. But he added: 'This hostility for our oil and gas workers is not simply the preserve of the zealots in the Labour Party.

'The SNP have their fingerprints all over the job losses, the well closures.'

Mr Bowie added: 'We need Conservative leadership because we know where the SNP and Labour will take us.'

He also used his speech to attack the 'snake oil salesmen' in Reform UK, insisting that Nigel Farage's party do not 'care one jot for Scotland, or for our United Kingdom'. The Tory said: 'Let me be clear. Reform is quite simply not a conservative party, not a unionist party, frankly they are not a serious party.'


New Statesman
Does the UK need an AI Act?
Britain finds itself at a crossroads with AI. The stakes are heightened by the fact that our closest allies appear to be on diverging paths. Last year, the EU passed its own AI Act, seeking controlled consensus on how to regulate new technologies. The US, meanwhile, is pursuing a lighter-touch approach to AI – perhaps reflecting the potential financial rewards its Big Tech companies could lose if stifled by regulation.

Prime Minister Keir Starmer and Science Secretary Peter Kyle seem to be mirroring the US strategy. At the January launch of the government's AI Opportunities Action Plan, Kyle said he wanted Britain to 'shape the AI revolution rather than wait to see how it shapes us'. Many have called for the government to bring forward an AI act, to lay the foundation for such leadership. Does Britain need one, and if so, how stringent should it be? Spotlight asked sectoral experts to give their views.

'An AI act would signal that Britain is serious about making technology work for people'
Gina Neff – Professor of responsible AI at Queen Mary University of London

This government is betting big on AI, making promises about turbo-charging innovation and investment. But regulatory safeguards are fragmented, public trust remains uncertain, and real accountability is unclear. Charging forward without a clear plan means AI will be parachuted into industries, workplaces and public services with little assurance that it will serve the people who rely on it.

An AI act would signal that Britain is serious about making AI work for people, investing in the places that matter for the country, and harnessing the power of AI for good. An AI act would create oversight where there is ambiguity, insisting on transparency and accountability. An AI act could provide the foundation to unlock innovation for public benefit by answering key questions: who is liable when AI fails? When AI systems discriminate? When AI is weaponised?

Starmer's government borrows from Silicon Valley's logic, positioning AI regulation as the opposite of innovation. Such logic ignores a crucial fact: the transition to AI will require a major leap for workers, communities and societies. Government must step in where markets won't or can't: levelling the playing field so powerful companies do not dominate our future, investing in education and skills so more people can benefit from opportunities, ensuring today's laws and regulations continue to be fit for purpose, and building digital futures with companies and civil society.

Under Conservative governments, the UK took a 'proportionate', 'pro-innovation' approach outlined in the AI White Paper, suggesting responsibility for safe and trustworthy AI rests with the country's existing 90 regulators. That was always envisioned to be a wait-and-see stop-gap before new measures. The AI Opportunities Action Plan sketches out support for the UK's AI industry, but does not go far enough on how to manage the social, cultural and economic transitions that we face. With worries about the impact on entry-level jobs, on our children, on information integrity, on the environment, on the UK's creative sector, on growing inequality, on fair yet efficient public services: there is a long list of jobs now for government to do. Lack of action will only create confusion for businesses and uncertainty about rights and protections for workers, consumers and citizens.
Without an AI act to help shore it up, the good work that is already happening in the UK won't be able to fully power benefits for everyone. An AI act must go beyond data protections to establish transparency requirements and accountability provisions, outline safeguards for intellectual property, and set clearer rules around, and recourse for, automated decision-making. These are responsibilities that tech companies are largely evading. Who can blame them? They have cornered global markets and will gain handsomely from our new investments in AI.

A UK AI act could empower regulators with stronger enforcement tools to right the imbalance of power between British society and the world's biggest players in this sector. An AI act would give real structure to this country's ambitions for AI. The UK needs clarity on what AI can and cannot do, and that won't come from piecemeal guidance – it will come from leaders with vision helping us build the society that we all so rightly deserve.

'The government's hesitancy to regulate seems borne out of the fear of hobbling a potential cash cow'
Marina Jirotka and Keri Grieman – Professor of human-centred computing at the University of Oxford; research associate, RoboTIPS project

The EU AI Act entered into force not even a year ago, and there is already serious discussion on whether to reduce enforcement and simplify requirements on small and medium enterprises in order to reduce burdens on companies in a competitive international marketplace. The US House of Representatives has narrowly approved a bill that blocks states from enforcing AI regulations for ten years, while forwarding one bipartisan federal act that criminalises AI deepfakes but does not address AI on a broader level. Large language model updates are rolled out faster than the speed of subscription model billing. AI is invading every corner of our lives, from messaging apps to autonomous vehicles – some used to excellent effect, others to endless annoyance.

The British government has chosen a policy of investment in AI – investing in the industry itself, in skill-building education and in inducing foreign talent. Its hesitancy to regulate seems borne out of the fear of hobbling a potential cash cow. However, this leaves the regulatory burden on individual sectors: piecemeal, often siloed and without enough regulatory AI experts to go around, with calls coming from inside the house – the companies themselves – for a liability system.

The UK needs clarity: for industry, for public trust and for the prevention of harm. There are problems that transcend individual industries: bias, discrimination, over-hype, environmental impact, intellectual property and privacy concerns, to name a few. A regulator is one way to tackle these issues, but can have varying levels of impact depending on structure: coordinating between industry bodies or taking a more direct role; working directly with companies or at arm's length; cooperative investigation or more bare-bones enforcement. But whatever the UK is to do, it needs to provide regulatory clarity sooner rather than later: the longer the wait, the more we fail to address potential harms, but we also fall behind in market share as companies choose not to bet the bank on a smaller market with an unclear regulatory regime.

'Growth for whom? Efficiency to what end?'
Baroness Beeban Kidron – House of Lords member and digital rights activist

All new technology ends up being regulated. On arrival it is greeted with awe.
Claims are made for its transformative nature and exceptionality. Early proponents build empires and make fortunes. But sooner or later, those with responsibilities for our collective good have a say. So here we are again with AI. Of course we will regulate, but it seems that the political will has been captured. Those with their hands on the technology are dictating the terms – terms that waver between nothing meaningful and almost nothing at all. Meanwhile, government valorises growth and efficiency without asking: growth for whom? Efficiency to what end?

In practical terms, an AI act should not seek to regulate AI as a technology but rather regulate its use across domains: in health (where it shows enormous benefit); in education (where its claims outweigh its delivery by an unacceptable margin); in transport (where insurers are calling the shots); and in information distribution (where its deliberate manipulation, unintended hallucination and careless spread damages more than it explains). If we want AI to be a positive tool for humanity then it must be subject to the requirements of common goods.

But in a world of excess capital restlessly seeking the next big thing, governments bent over to do the bidding of the already-too-powerful, and lobbyists who simultaneously claim it is too soon and too late, we see the waning of political will. Regulation can be good or bad, but we are in troubling times where the limit of our ambition is to do what we can, not what we should – which gives it a bad name. And governments – including our own – legislate to hardwire the benefits of AI into the ever-increasing concentration of power and wealth of Silicon Valley.

Tech companies, AI or otherwise, are businesses. Why not subject them to corporate liability, consumer rights, product safety, anti-trust laws, human and children's rights? Why exempt them from tax, or the full whack for their cost to planet and society? It's not too soon and it is not too late – but it needs independence and imagination to make AI a public good, not wilful blindness to an old-school playbook of obfuscation and denial while power and money accumulate. Yes, we need regulation, but we also need political will.

'The real test of a bill will be if it credibly responds to the growing list of everyday harms we see'
Michael Birtwistle – Associate director, Ada Lovelace Institute

AI is everywhere: our workplaces, public services, search engines, our social media and messaging apps. The risks of these systems are made clear in the government's International AI Safety Report. Alongside long-standing harms like discrimination and 'hallucination' (where AI confidently generates false information), systemic harms such as job displacement, environmental costs and the capacity of newer 'AI agents' to misinform and manipulate are rapidly coming to the fore.

But there is currently no holistic body of law governing AI in the UK. Instead, developers, deployers and users must comply with a fragmented patchwork of rules, with many risks going unmanaged. Crucially, our current approach disincentivises those building AI systems from taking responsibility for harms they are best placed to address; regulation tends to only look at downstream users. Our recent national survey showed 88 per cent of people believe it's important that the government or regulators have powers to stop the use of a harmful AI product.
Yet more than two years on from the Bletchley summit and its commitments, it's AI developers deciding whether to release unsafe models, according to criteria they set themselves. The government's own market research has said this 'wild west' is lowering business confidence to adopt. These challenges can only be addressed by legislation, and now is a crucial time to act.

The government has announced an AI bill, but its stated ambition (regulating 'tomorrow's models not today's') is extremely narrow. For those providing scrutiny in parliament, press and beyond, the real test of a bill will be whether it credibly responds to the growing list of everyday harms we see today – such as bias, misinformation, fraud and malicious content – and whether it equips government to manage them upstream at source.

'There's a temptation to regulate AI with sweeping, catch-all Bills. That impulse is mistaken'
Jakob Mökander – Director of science and technology policy, Tony Blair Institute for Global Change

As AI transforms everything from finance to healthcare, the question is not whether to regulate its design and use – but how to do it well. Rapid advances in AI offer exciting opportunities to boost economic growth and improve social outcomes. However, AI poses risks, from information security to surveillance and algorithmic discrimination. Managing these risks will be key to building public trust and harnessing the benefits.

Globally, there's an understandable temptation to regulate AI with sweeping, catch-all Bills that signal seriousness and ease public concern. However, this impulse is mistaken. Horizontal legislation is a blunt tool that struggles to address the many different risks AI poses in various real-world contexts. It could also end up imposing overly burdensome restrictions even on safe and socially beneficial use cases.

If the UK government is serious about implementing the AI Opportunities Action Plan, it should continue its pro-innovation, sector-specific approach: steering the middle ground between the overly broad EU AI Act and the US' increasingly deregulatory approach. This way, supporting innovation can go hand-in-hand with protection of consumer interests, human rights and national security.

Regulators like the CMA, FCA, Ofcom and HSE are already wrestling with questions related to AI-driven market concentration, misinformation and bias in their respective domains. Rather than pursuing a broad AI bill, the government should continue to strengthen these watchdogs' technical muscle, funding and legal tools. The £10m already allocated to this effort is welcome – but this should go much further.

Of course, some specific security concerns may be insufficiently covered by existing regulation. To address this gap, the government's proposal for a narrow AI Bill to ensure the safety of frontier-AI models is a good starting point. The AI Security Institute has a crucial role to play in this – not as a regulator, but as an independent centre to conduct research, develop standards and evaluate models. Its long-term legitimacy should continue to be served by clear independence from both government and industry, rather than the distraction of enforcement powers.

Britain has an opportunity to set a distinctive global example: pro-innovation, sector-specific, and grounded in actual use cases. Now's the time to stay focused and continue forging that path.

This article first appeared in our Spotlight on Technology supplement of 13 June 2025.