A power engineer on the Iberian grid collapse: It makes me very afraid for Britain

Telegraph | 01-05-2025

Last Monday, the Iberian grid suffered a disturbance in the south-west at 12:33. Within 3.5 seconds this worsened and the interconnection to France disconnected. All renewable generation then went offline, followed by the disconnection of all rotating generation plant. The Iberian blackout was complete within a few seconds.
At the time the grid was producing 28.4 GW of power, of which 79 per cent was solar and wind. This was a problematic situation as solar and wind plants have another, not widely known, downside – one quite apart from their intermittency and expense.
This is the fact that they do not supply any inertia to the grid. Thermal power plants – coal, gas and nuclear, for example – drive large spinning generators which are directly and synchronously connected to the grid. If a change opens up a gap between demand and supply, the generators start to spin faster or slower: but their inertia resists this, meaning that the frequency of the alternating current in the grid changes only slowly. That gives the grid managers time to act, matching supply to demand and keeping the grid frequency within limits.
This is vital because all grids must supply power at a steady frequency so that electrical appliances work properly and safely. Deviations from the standard frequency can damage equipment and cause other problems: in practice, when the frequency moves significantly, protection systems quite rapidly trip grid machinery out to prevent that damage and the grid goes down.
When a grid has very little inertia in it – as with the Iberian one on Monday – a problem which a high-inertia grid would easily resist can cause a blackout within seconds. Lack of inertia was certainly the primary cause of the Iberian blackout, as Matt Oliver has opined in these pages. A grid with more inertia would not have collapsed as quickly, and its operators would have had time to keep it up and running.
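The arithmetic behind this is simple enough to sketch. To a first approximation, the rate at which frequency falls after a sudden loss of generation is proportional to the size of the loss and inversely proportional to the inertia stored in the synchronous plant still online. The short Python sketch below uses assumed, purely illustrative figures (not the real Iberian or British numbers) to show how quickly a grid reaches a typical under-frequency trip threshold as its inertia shrinks.

```python
# Rough illustration of why inertia buys operators time after a sudden loss
# of generation. All figures below are assumed, representative values for
# illustration only; they are not the actual Iberian or British data.

F_NOMINAL = 50.0          # Hz, nominal grid frequency
F_TRIP = 49.0             # Hz, a typical under-frequency trip threshold
POWER_LOSS_GW = 2.0       # GW, size of the sudden generation loss (assumed)
SYNC_CAPACITY_GW = 30.0   # GW of synchronous, inertia-providing plant online

def seconds_until_trip(inertia_constant_s: float) -> float:
    """Time for frequency to fall from nominal to the trip threshold, using
    the simplified swing-equation relationship
        df/dt = dP * f0 / (2 * H * S)
    where H is the system inertia constant (seconds) and S the synchronous
    capacity carrying that inertia. Frequency response and load damping are
    ignored, so real grids fare somewhat better; the relative comparison is
    the point."""
    rocof = POWER_LOSS_GW * F_NOMINAL / (2 * inertia_constant_s * SYNC_CAPACITY_GW)
    return (F_NOMINAL - F_TRIP) / rocof

for h in (6.0, 3.0, 1.0):  # high, medium and low system inertia
    print(f"H = {h:.0f} s -> roughly {seconds_until_trip(h):.1f} s before tripping")
```

On those assumed figures, generous inertia gives operators a few seconds in which to act; with very little inertia, the trip threshold arrives in well under a second, which in practice is no time at all.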
Restoration of supplies was completed by early Tuesday morning, thanks to reconnection to France, which allowed areas across Spain and Portugal to be progressively reconnected.
Iberia is part of the Continental Europe Synchronous Area, which stretches across 32 countries. It is interconnected as a phase-locked, 50 Hz grid with a generation capacity of 700 GW. To improve the stability of this grid, the EU's aim is that every member should be able to draw 10 per cent of its power consumption through synchronous interconnectors – ones which transmit grid inertia – helping to make the whole system more resilient. France is at 10 per cent, but peninsular grids and those at the geographical fringe are the least interconnected. Spain has just 2 per cent from synchronous interconnectors.
But there are places where things are worse. The UK and Ireland are island grids. They do have undersea power interconnectors to Europe but these are non-synchronous DC links and transmit no grid inertia. There's little prospect that this will change.
Both the Irish and UK grid system operators have developed an array of grid protection services to control grid frequency, protect against loss of load or generation, manage grid phase angle and recover from grid outages. Neither country has, to date, ever experienced a total system failure, even during the Second World War.
Construction of Dinorwig Power Station started in 1974. It is a pumped-storage plant designed specifically to provide all of the UK's grid protection services: it can make huge changes to its output in a matter of seconds, compensating for sudden events. Operation began in 1984. In 1990 all of the UK's generating stations could provide inertia.
Nowadays, 55 per cent of our generation mix (wind, solar, DC imports) cannot supply inertia to the grid. Are we approaching the sort of system Spain and Portugal were running on Monday?
It certainly looks that way. In 2012 National Grid produced a solar briefing note for the government, which is still available online. In it they imagined a system with 22 GW of solar power attached to the grid, and illustrated their concerns with a sunny summer day of low demand: the sun rises at 5am, when little or no synchronous plant other than nuclear is online, and by midday solar supplies 60 per cent of all generation. The Grid's engineers considered that situation 'difficult to manage' and concluded that wind and solar power must never exceed 60 per cent of generation.
We now have 17.7 GW of grid-connected solar farms, to which we must add all rooftop solar installations. At midday on Tuesday, according to Gridwatch, the UK's asynchronous, no-inertia generation stood at 66 per cent of total generation.
In 2024 National Grid produced a System Operability Framework document, outlining how future generation mixes would affect the grid's protection services. As more and more renewable generation is brought online, managing the grid has become more and more onerous. For example, one service titled 'primary response' in 1990 called for selected generation plants to increase output within 10 seconds of a fault being detected: by 1,200 MW in winter and 1,500 MW in summer. In 2024 these increases are required within 1.2 seconds!
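That shrinking window follows directly from the falling inertia, as a rough calculation shows. The sketch below reuses the simplified swing-equation relationship from the earlier example, with assumed figures rather than National Grid's actual planning parameters, to compare how long a 1990-style grid and a present-day low-inertia grid take to use up a given amount of frequency headroom after losing a large infeed.

```python
# How quickly the frequency headroom is consumed after losing a large infeed,
# using the same simplified swing-equation relationship as the earlier sketch.
# All figures are assumed for illustration; they are not National Grid's
# actual planning parameters.

F_NOMINAL = 50.0       # Hz
INFEED_LOSS_GW = 1.32  # GW, roughly the size of a large single infeed loss (assumed)
HEADROOM_HZ = 0.8      # Hz, illustrative allowed deviation before limits are breached

def seconds_of_headroom(inertia_constant_s: float, sync_capacity_gw: float) -> float:
    """Time before the frequency deviation exhausts the allowed headroom."""
    rocof = INFEED_LOSS_GW * F_NOMINAL / (2 * inertia_constant_s * sync_capacity_gw)
    return HEADROOM_HZ / rocof

# A 1990-style grid: lots of synchronous plant online, high system inertia.
print(f"1990-style grid: about {seconds_of_headroom(6.0, 50.0):.1f} s of headroom")
# A present-day low-inertia grid: far less synchronous plant online.
print(f"Low-inertia grid: about {seconds_of_headroom(4.0, 15.0):.1f} s of headroom")
```

On those assumed numbers the window collapses from roughly seven seconds to under two, which is broadly why a response once required within 10 seconds is now demanded within 1.2.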
After four decades of operation, Dinorwig Power Station is currently shut down for major repairs, and there has been no information on when it will reopen. Over the next five years all of our nuclear stations, bar Sizewell, will be closed. Over the same period our combined-cycle gas generator (CCGT) fleet will halve from 30 GW to 15 GW. (It takes five years to build a new CCGT even using an existing site. The new ones are 66 per cent efficient and cost less than £1 billion per GW of capacity – around one third of the cost of the equivalent offshore wind.)
We will lose huge amounts of grid inertia. Low-inertia operation will become routine. It is hard to imagine that we won't start to suffer complete national blackouts like the Iberian one.
One last piece of doom: the recovery of Spain's grid in just one day is impressive. That speed was surely due to the assistance of a large, stable grid reconnecting into the Iberian system, which allowed supplies to be rebuilt in a series of stable steps, area by area. We will not have that facility in the UK with our asynchronous interconnectors.
