
We need to keep an open mind on cold fusion potential
Recently, the letters pages of the Guardian have featured conflicting accounts of cold fusion, otherwise known as low-energy nuclear reactions (LENR). On the one hand, the Nobel laureate Prof Brian Josephson and his co-authors argue (27 January) that cold fusion's time has come: companies can 'make these reactions work quite reliably', with the promise of 'ending reliance on fossil fuels'. In response, Dr Philip Thomas, a researcher at the University of Exeter, proclaims (2 February) that cold fusion is a 'pseudo-scientific fringe theory' in violation of the 'laws of nature'. Which laws, in particular, Dr Thomas does not say.
There is, however, a constructive middle ground between Josephson's fervour and Thomas's denigration. LENR advocates often fail to appreciate the evidentiary standard required to demonstrate novel nuclear effects. Overzealous critics, for their part, are generally not well versed in the LENR literature and lack perspective on how new fields have emerged from anomalous effects in science. As a result, they contribute to the palpable stigma that the Cambridge emeritus professor Huw Price calls the 'reputation trap'. Regardless, there are compelling experimental data and strong theoretical motivations for studying cold fusion.
We are MIT-based researchers in an LENR research programme run by the US Department of Energy's innovation agency, Arpa-E. Our group is pursuing the careful replication and characterisation of promising LENR experiments in close coordination with the original experimentalists and informed by the theoretical work of the MIT professor Peter Hagelstein.
Cold fusion could result in spectacular technologies. But we are convinced that the way forward requires rigorous, open-source scientific investigation, not more claims. The Arpa-E LENR programme – a result of Google's research efforts in cold fusion summarised in the journal Nature – is a model in this regard. It balances the highest scientific standards and careful experimental documentation with an open mind to the anomalies reported in the LENR literature.
In many ways, cold fusion's time has come. Advances in theory and experiment have made the LENR field eminently actionable. It is time for fellow scientists to constructively engage and for science funders to take Arpa-E's lead and back rigorous inquiry into this promising field.
Jonah Messinger, PhD candidate, University of Cambridge
Florian Metzler, Research scientist, MIT
Matt Lilley, Research affiliate, MIT
Nicola Galvanetto, Postdoctoral research fellow, University of Zurich
Dr Philip Thomas seems to be unaware of current work on cold fusion. A good example is the five-year EU-funded CleanHME project, which held its wrap-up meeting recently at the University of Szczecin in Poland. This collaboration involved some 40 scientists from several European universities and institutes.
To refer to the concerns of such projects as 'pseudo-scientific fringe theory', as Dr Thomas does, is both unfair and unwise. Unfair, because these are serious scientists, fully conversant with the laws of nature. Unwise, because we are in a very tight spot. We need new sources of fossil-free energy, so we need to search for them diligently, even in what many regard as unlikely corners. By all means criticise such work on scientific grounds, but it is folly to discourage it by calling it names.
I would be delighted to introduce Dr Thomas to some of the leading scientists in the field if he would like to explore it further.
Huw Price, Emeritus Bertrand Russell professor of philosophy, Cambridge
Related Articles


New Statesman
Does the UK need an AI Act?
Britain finds itself at a crossroads with AI. The stakes are heightened by the fact that our closest allies appear to be on diverging paths. Last year, the EU passed its own AI act, seeking controlled consensus on how to regulate new technologies. The US, meanwhile, is pursuing a lighter-touch approach to AI – perhaps reflecting the potential financial rewards its Big Tech companies could lose if stifled by regulation. Prime Minister Keir Starmer and Science Secretary Peter Kyle seem to be mirroring the US strategy. At the January launch of the government's AI Opportunities Action Plan, Kyle said he wants Britain to 'shape the AI revolution rather than wait to see how it shapes us'. Many have called for the government to bring forward an AI act to lay the foundation for such leadership. Does Britain need one, and if so, how stringent should it be? Spotlight reached out to sectoral experts to give their views.

'An AI act would signal that Britain is serious about making technology work for people'
Gina Neff – Professor of responsible AI at Queen Mary University of London

This government is betting big on AI, making promises about turbo-charging innovation and investment. But regulatory safeguards are fragmented, public trust remains uncertain, and real accountability is unclear. Charging forward without a clear plan means AI will be parachuted into industries, workplaces and public services with little assurance that it will serve the people who rely on it. An AI act would signal that Britain is serious about making AI work for people, investing in the places that matter for the country, and harnessing the power of AI for good. It would create oversight where there is ambiguity, insisting on transparency and accountability, and it could provide the foundation to unlock innovation for public benefit by answering key questions: who is liable when AI fails? When AI systems discriminate? When AI is weaponised?

Starmer's government borrows from Silicon Valley's logic, positioning AI regulation as the opposite of innovation. Such logic ignores a crucial fact: the transition to AI will require a major leap for workers, communities and societies. Government must step in where markets won't or can't: levelling the playing field so powerful companies do not dominate our future, investing in education and skills so more people can benefit from opportunities, ensuring today's laws and regulations continue to be fit for purpose, and building digital futures with companies and civil society.

Under Conservative governments, the UK took a 'proportionate', 'pro-innovation' approach outlined in the AI White Paper, suggesting responsibility for safe and trustworthy AI rests with the country's existing 90 regulators. That was always envisioned as a wait-and-see stop-gap before new measures. The AI Opportunities Action Plan sketches out support for the UK's AI industry, but does not go far enough on how to manage the social, cultural and economic transitions that we face. With worries about the impact on entry-level jobs, on our children, on information integrity, on the environment, on the UK's creative sector, on growing inequality, and on fair yet efficient public services, there is a long list of jobs now for government to do. Lack of action will only create confusion for businesses and uncertainty about rights and protections for workers, consumers and citizens.
Without an AI act to help shore it up, the good work that is already happening in the UK won't be able to fully power benefits for everyone. An AI act must go beyond data protections to establish transparency requirements and accountability provisions, outline safeguards for intellectual property, and set clearer rules around, and recourse for, automated decision-making. These are responsibilities that tech companies are largely evading. Who can blame them? They have cornered global markets and will gain handsomely from our new investments in AI. A UK AI act could empower regulators with stronger enforcement tools to right the imbalance of power between British society and the world's biggest players in this sector. An AI act would give real structure to this country's ambitions for AI. The UK needs clarity on what AI can and cannot do, and that won't come from piecemeal guidance – it will come from leaders with vision helping us build the society that we all so rightly deserve.

'The government's hesitancy to regulate seems borne out of the fear of hobbling a potential cash cow'
Marina Jirotka and Keri Grieman – Professor of human-centred computing at the University of Oxford; research associate, RoboTIPS project

The EU AI act entered into force not even a year ago, and there is already serious discussion of whether to reduce enforcement and simplify requirements on small and medium enterprises in order to reduce burdens on companies in a competitive international marketplace. The US House of Representatives has narrowly approved a bill that blocks states from enforcing AI regulations for ten years, while forwarding one bipartisan federal act that criminalises AI deepfakes but does not address AI on a broader level. Large language model updates are rolled out faster than the speed of subscription-model billing. AI is invading every corner of our lives, from messaging apps to autonomous vehicles – some uses to excellent effect, others to endless annoyance.

The British government has chosen a policy of investment in AI – investing in the industry itself, in skill-building education and in attracting foreign talent. Its hesitancy to regulate seems borne out of the fear of hobbling a potential cash cow. However, this leaves the regulatory burden on individual sectors: piecemeal, often siloed and without enough regulatory AI experts to go around, with calls coming from inside the house – the companies themselves – for a liability system.

The UK needs clarity: for industry, for public trust and for the prevention of harm. There are problems that transcend individual industries: bias, discrimination, over-hype, environmental impact, and intellectual property and privacy concerns, to name a few. A regulator is one way to tackle these issues, but it can have varying levels of impact depending on structure: coordinating between industry bodies or taking a more direct role; working directly with companies or at arm's length; cooperative investigation or more bare-bones enforcement. But whatever the UK is to do, it needs to provide regulatory clarity sooner rather than later: the longer the wait, the more we fail to address potential harms and the further we fall behind in market share, as companies choose not to bet the bank on a smaller market with an unclear regulatory regime.

'Growth for whom? Efficiency to what end?'
Baroness Beeban Kidron – House of Lords member and digital rights activist

All new technology ends up being regulated. On arrival it is greeted with awe, claims are made for its transformative nature and exceptionality, and early proponents build empires and make fortunes. But sooner or later, those with responsibility for our collective good have a say. So here we are again with AI. Of course we will regulate, but it seems that the political will has been captured. Those with their hands on the technology are dictating the terms – terms that waver between nothing meaningful and almost nothing at all – while government valorises growth and efficiency without asking: growth for whom? Efficiency to what end?

In practical terms, an AI act should not seek to regulate AI as a technology but rather regulate its use across domains: in health (where it shows enormous benefit); in education (where its claims outweigh its delivery by an unacceptable margin); in transport (where insurers are calling the shots); and in information distribution (where its deliberate manipulation, unintended hallucination and careless spread damage more than they explain). If we want AI to be a positive tool for humanity then it must be subject to the requirements of common goods. But in a world of excess capital restlessly seeking the next big thing, governments bent over to do the bidding of the already-too-powerful, and lobbyists who simultaneously claim it is too soon and too late, we see the waning of political will.

Regulation can be good or bad, but we are in troubling times where the limit of our ambition is to do what we can, not what we should – which gives regulation a bad name. And governments – including our own – legislate to hardwire the benefits of AI into the ever-increasing concentration of power and wealth in Silicon Valley. Tech companies, AI or otherwise, are businesses. Why not subject them to corporate liability, consumer rights, product safety, anti-trust laws, and human and children's rights? Why exempt them from tax, or from paying the full whack for their cost to planet and society? It is not too soon and it is not too late – but it needs independence and imagination to make AI a public good, not wilful blindness to an old-school playbook of obfuscation and denial while power and money accumulate. Yes, we need regulation, but we also need political will.

'The real test of a bill will be if it credibly responds to the growing list of everyday harms we see'
Michael Birtwistle – Associate director, Ada Lovelace Institute

AI is everywhere: in our workplaces, public services, search engines, social media and messaging apps. The risks of these systems are made clear in the government's International AI Safety Report. Alongside long-standing harms like discrimination and 'hallucination' (where AI confidently generates false information), systemic harms such as job displacement, environmental costs and the capacity of newer 'AI agents' to misinform and manipulate are rapidly coming to the fore. But there is currently no holistic body of law governing AI in the UK. Instead, developers, deployers and users must comply with a fragmented patchwork of rules, with many risks going unmanaged. Crucially, our current approach disincentivises those building AI systems from taking responsibility for harms they are best placed to address; regulation tends to look only at downstream users. Our recent national survey showed 88 per cent of people believe it's important that the government or regulators have powers to stop the use of a harmful AI product.
Yet more than two years on from the Bletchley summit and its commitments, it is AI developers who decide whether to release unsafe models, according to criteria they set themselves. The government's own market research has said this 'wild west' is lowering business confidence to adopt. These challenges can only be addressed by legislation, and now is a crucial time to act. The government has announced an AI bill, but its stated ambition (regulating 'tomorrow's models not today's') is extremely narrow. For those providing scrutiny in parliament, the press and beyond, the real test of a bill will be whether it credibly responds to the growing list of everyday harms we see today – such as bias, misinformation, fraud and malicious content – and whether it equips government to manage them upstream, at source.

'There's a temptation to regulate AI with sweeping, catch-all bills. That impulse is mistaken'
Jakob Mökander – Director of science and technology policy, Tony Blair Institute for Global Change

As AI transforms everything from finance to healthcare, the question is not whether to regulate its design and use – but how to do it well. Rapid advances in AI offer exciting opportunities to boost economic growth and improve social outcomes. However, AI poses risks, from information security to surveillance and algorithmic discrimination. Managing these risks will be key to building public trust and harnessing the benefits.

Globally, there is an understandable temptation to regulate AI with sweeping, catch-all bills that signal seriousness and ease public concern. However, this impulse is mistaken. Horizontal legislation is a blunt tool that struggles to address the many different risks AI poses in various real-world contexts. It could also end up imposing overly burdensome restrictions even on safe and socially beneficial use cases. If the UK government is serious about implementing the AI Opportunities Action Plan, it should continue its pro-innovation, sector-specific approach: steering a middle ground between the overly broad EU AI Act and the US's increasingly deregulatory approach. This way, supporting innovation can go hand in hand with protecting consumer interests, human rights and national security.

Regulators like the CMA, FCA, Ofcom and HSE are already wrestling with questions related to AI-driven market concentration, misinformation and bias in their respective domains. Rather than pursuing a broad AI bill, the government should continue to strengthen these watchdogs' technical muscle, funding and legal tools. The £10m already allocated to this effort is welcome – but it should go much further. Of course, some specific security concerns may be insufficiently covered by existing regulation. To address this gap, the government's proposal for a narrow AI bill to ensure the safety of frontier AI models is a good starting point. The AI Security Institute has a crucial role to play in this – not as a regulator, but as an independent centre to conduct research, develop standards and evaluate models. Its long-term legitimacy should continue to be served by clear independence from both government and industry, rather than by the distraction of enforcement powers.

Britain has an opportunity to set a distinctive global example: pro-innovation, sector-specific, and grounded in actual use cases. Now's the time to stay focused and continue forging that path.

This article first appeared in our Spotlight on Technology supplement of 13 June 2025.


The Herald Scotland
Professor who won 'Nobel prize of the beer world'
Died: June 12, 2025

Sir Geoff Palmer, who has died aged 85, earned a reputation as a trailblazer and an inspiration within higher education and in wider society, as chancellor and professor emeritus at Heriot-Watt University and Scotland's first black professor.

Born in St Elizabeth, Jamaica, he moved to London as a 14-year-old in 1955 to join his mother, who had emigrated some years earlier as part of the Windrush generation. A keen cricketer, he earned places on the London Schools' cricket team and at Highbury Grammar School. In 1958, upon completing his schooling, he was employed as a junior lab technician at Queen Elizabeth College while gaining further qualifications by studying one day per week at a local polytechnic.

In 1961, Sir Geoff enrolled at the University of Leicester, graduating with a degree in botany. He then began his long association with Heriot-Watt University when he embarked on a PhD in grain science and technology, carried out jointly between Heriot-Watt College, as it was then known, and the University of Edinburgh, which he completed in 1967.

From 1968 to 1977, he worked at the Brewing Research Foundation in Surrey, where he built on the fundamental research of his PhD to develop a pioneering barley abrasion process – work that won him the American Society of Brewing Chemists Award of Distinction, an honour dubbed the Nobel prize of the beer world. He also pioneered the use of the scanning electron microscope to study cereal grains. The abrasion process was subsequently adopted by some of the UK's biggest breweries.

Sir Geoff returned to Heriot-Watt University in 1977 as a lecturer where, among his many achievements, he secured industry funding to establish the International Centre for Brewing and Distilling (ICBD), which continues to this day as a unique teaching and research facility.

In 1989, Sir Geoff became Scotland's first black professor and continued to teach at Heriot-Watt University until his retirement in 2005. He was subsequently appointed professor emeritus at the university's School of Life Sciences and, in 2014, he was knighted for services to human rights, science and charity. He returned in 2021 to take on the role of chancellor, a position he embraced until his death. A beloved figure within the university's global community, Sir Geoff was known for his warm, approachable manner and his deep personal commitment to supporting and championing the success, wellbeing and growth of students at every stage of their journey.

Sir Geoff met his future wife, Margaret, while they were both students at the University of Leicester. They had lived in Penicuik in Midlothian since the 1970s and he was a well-known figure in and around the town. Sir Geoff was a board member of many charitable and equality organisations, and a trustee of Penicuik Citizens Advice Bureau, which named its building Palmer House in his honour in 2021.

In 2023, marking the 75th anniversary of the arrival of HMT Empire Windrush to British shores in 1948, Sir Geoff was named one of ten pioneering members of the Windrush generation honoured by His Majesty The King with a specially commissioned portrait. The artwork has since become part of the Royal Collection, serving as a lasting tribute to the men, women and children who journeyed to post-war Britain. In March 2024, King Charles III appointed Sir Geoff a knight of the Most Ancient and Most Noble Order of the Thistle (KT), the highest order of chivalry in Scotland.
In later life, Sir Geoff was diagnosed with prostate cancer and received treatment. In an interview with Whisky Magazine in 2020, he was asked about his legacy, to which he gave this poignant reply: 'One of my daughters just had a wee girl in Glasgow. She and my other grandchildren are my legacy, and I hope that anything I've done they won't be ashamed of.

'My legacy is all of my children, students, my friends and relationships and all the people who helped me.'

Professor Richard A. Williams, Principal and Vice-Chancellor of Heriot-Watt University, led the tributes to Sir Geoff. He said: 'Today marks a sad day for this university and for everyone who knew Sir Geoff.

'He was an inspiration not just to me but to colleagues past and present, and countless students around the world. His infectious enthusiasm and passion for education was impossible to ignore and this university was all the richer for having such a strong association with him over the years.

'He will be dearly missed, and our thoughts are with his loved ones at this difficult time.'

Sir Geoff is survived by his wife, Margaret Palmer, their three children, and grandchildren.


Scotsman
Biofilm prevention leader Remora announces groundbreaking partnership with global textile chemical innovator
Award-winning Scottish biotechnology company Remora has announced a major milestone in its commercial journey after signing a new long-term licensing agreement with Swiss-based Beyond Surface Technologies (BST) to bring its revolutionary biofilm prevention technology to the global textile market.

The partnership will see BST – a specialist in sustainable textile chemistry – integrate the patented Remora® technology into its green chemical formulations for use across performance apparel, outdoor garments and technical fabrics.

Biofilms – invisible layers of microorganisms that adhere to fabrics – are a widespread issue in the textile sector. On performance garments in particular, they cause persistent odours, staining and material degradation, even after repeated washing. They also pose risks of skin irritation and contamination.

Remora® technology offers a breakthrough solution. Developed using a scientifically engineered molecule inspired by red seaweed's natural defence mechanisms, it prevents biofilm formation at source, without relying on toxic antimicrobial agents. Remora's collaboration with BST will support the sustainable creation of cleaner, fresher and longer-lasting textiles, and the technology can be used at multiple stages of the supply chain.

Dr Yvonne Davies, Chief Commercial Officer at Remora, said: 'This partnership with BST is a transformational step for Remora and a breakthrough moment for the textile industry. Our Remora® technology offers a sustainable, scientifically proven alternative to toxic biocidal treatments and will help brands tackle persistent odour, staining and material degradation caused by biofilms.

'Through BST's extensive global supply chain relationships and deep expertise in green chemistry, we now have a clear route to market with some of the biggest names in fashion, performance wear and technical textiles. This collaboration doesn't just scale our technology, it unlocks its full potential to support a cleaner, safer and more sustainable future for textiles.'

Matthias Foessel, Co-founder and CEO of Beyond Surface Technologies, said: 'We are excited to bring this remarkable marine-inspired technology for biofilm prevention to the textile industry. It offers a unique way to keep fabrics and garments cleaner and fresher for longer.

'We're very pleased to have signed this collaboration with Remora, based on a Unilever patent portfolio. This partnership marks another important milestone in our mission to advance chemistry with reduced environmental impact – without compromising on performance.'
Originally developed through a decade-long R&D collaboration with Unilever and leading UK universities, Remora's technology now has a direct route into commercial textile markets via BST's formulation expertise and global customer network – which includes many of the world's best-known fashion and sportswear brands.