
Slovakia accuses UK of £10m election interference plot
Robert Fico, the country's firebrand prime minister, alleged that the Foreign Office had paid social media influencers through an agency to promote the rival liberal Progressive Slovakia party.
Nigel Baker, Britain's ambassador to Bratislava, has been summoned to meet the country's foreign minister on Wednesday to answer the accusations.
A spokesman for the Foreign Office branded the allegations 'entirely false'.
Mr Fico told a news conference on Tuesday: 'There was a targeted deliberate activity by a foreign power, which is our ally in Nato, in cooperation with some Slovak journalists and in cooperation with some Slovak political influencers, to influence the election in 2023.'
Mr Fico, widely seen as a populist leader, cited a report by Declassified UK, an investigative website, which claimed the UK's Foreign Office had a £10m deal with Zinc Network, a London-based media agency, that had been used to influence Slovakia's last parliamentary election.
'This agency was tasked with finding influencers and political activists in Central and Eastern European countries and influencing events in those countries,' the prime minister added.
The Slovakian marker.sk website reported that the UK Government is 'secretly funding hundreds of foreign influencers whose job is to create political propaganda on YouTube' through the contract.
'We'll demand an explanation of all the details of the case,' Juraj Blanar, Slovakia's foreign minister, said ahead of his meeting with Mr Baker.
Mr Blanar said he would be forced to take further action if his government did not receive sufficient information from the British diplomat. It was not immediately clear what videos were being referred to, or what measures the Slovakians could take in response to the accusations.
Marek Estok, Slovakia's Europe minister, said he would raise the issue at the next meeting of the European Council's general affairs council. He said it would fit with plans to discuss hybrid threats and interference in election campaigns in the bloc in September.
The Foreign Office said: 'Any suggestion that the UK was seeking to sway an election result, or encourage voting for or against a specific political party is entirely false.
'This activity focused on encouraging young people to participate in their democracies and to vote in upcoming elections, regardless of their political affiliation or support.'
The Foreign Office's Open Information Partnership works across 24 countries, supporting investigative journalists, charities, think tanks, academics, NGOs, activists, and fact-checkers.
'Media consulting service'
Zinc Network's logo features alongside the Government's motif on the bottom of the Partnership's website. The media agency signed a three-year contract worth £9,450,000 in 2022 with the Foreign Office 'for delivery of media consulting service' relating to the project.
'This project fits within wider 30 year UK Government objectives, to provide balanced, independent voices to more people in the regions,' a redacted version of the contract states, according to Declassified UK.
It adds that the project should 'not interfere with the editorial independence of the civil society organisations [it] supports'.
There are no examples of its work in Slovakia published by either the Foreign, Commonwealth and Development Office (FCDO) or Zinc.
One example of the firm's work in Estonia claims it assisted 20 Russian-language influencers in the Baltic country to boost their online presences.
Related Articles

Western Telegraph
US special envoy Witkoff visits food distribution centre in Gaza
International experts warned this week that a 'worst-case scenario of famine' is playing out in Gaza. Israel's nearly 22-month military offensive against Hamas has shattered security in the territory of some two million Palestinians and made it nearly impossible to safely deliver food to starving people.

Envoy Steve Witkoff and the US ambassador to Israel, Mike Huckabee, toured a Gaza Humanitarian Foundation (GHF) distribution site in Rafah, Gaza's southernmost city, which has been almost completely destroyed and is now a largely depopulated Israeli military zone.

Hundreds of people have been killed by Israeli fire while heading to such aid sites since May, according to witnesses, health officials and the UN human rights office. Israel and GHF say they have only fired warning shots and that the toll has been exaggerated. In a report issued on Friday, the New York-based Human Rights Watch said GHF was at the heart of a 'flawed, militarised aid distribution system that has turned aid distributions into regular bloodbaths'.

Mr Witkoff posted on X that he had spent more than five hours inside Gaza in order to gain 'a clear understanding of the humanitarian situation and help craft a plan to deliver food and medical aid to the people of Gaza'.

Chapin Fay, a spokesperson for GHF, said the visit reflected President Donald Trump's understanding of the stakes and that 'feeding civilians, not Hamas, must be the priority'. The group said it has delivered more than 100 million meals since it began operations in May. All four of the group's sites, established in May, are in zones controlled by the Israeli military and have become flashpoints of desperation, with starving people scrambling for scarce aid.
More than 1,000 people have been killed by Israeli fire since May while seeking aid in the territory, most near the GHF sites but also near United Nations aid convoys, the UN human rights office said last month. The Israeli military says it has only fired warning shots at people who approach its forces, and GHF says its armed contractors have only used pepper spray or fired warning shots to prevent deadly crowding.

Officials at Nasser Hospital in southern Gaza said on Friday they had received the bodies of 13 people who were killed while trying to get aid, including near the site the US officials visited. GHF denied anyone was killed at its sites on Friday and said most recent shootings had occurred near UN aid convoys.

Mr Witkoff's visit comes a week after US officials walked away from ceasefire talks in Qatar, blaming Hamas and pledging to seek other ways to rescue Israeli hostages and make Gaza safe. Mr Trump wrote on social media that the fastest way to end the crisis would be for Hamas to surrender and release the hostages.

The war was triggered when Hamas-led militants killed about 1,200 people, mostly civilians, on October 7 2023 and abducted 251 others. They still hold 50 hostages, including about 20 believed to be alive; most of the others have been released in ceasefires or other deals. Israel's retaliatory offensive has killed more than 60,000 Palestinians, according to Gaza's health ministry. Its count does not distinguish between militants and civilians. The ministry operates under the Hamas government, but the UN and other international organisations regard it as the most reliable source of data on casualties.


The Guardian
Everything the right – and the left – are getting wrong about the Online Safety Act
Last week, the UK's Online Safety Act came into force. It's fair to say it hasn't been smooth sailing. Donald Trump's allies have dubbed it the 'UK's online censorship law', and the technology secretary, Peter Kyle, added fuel to the fire by claiming that Nigel Farage's opposition to the act put him 'on the side' of Jimmy Savile.

Disdain from the right isn't surprising. After all, tech companies will now have to assess the risk their platforms pose of disseminating the kind of racist misinformation that fuelled last year's summer riots. What has particularly struck me, though, is the backlash from progressive quarters. The online outlet Novara Media published an interview claiming the Online Safety Act compromises children's safety. Politics Joe joked that the act involves 'banning Pornhub'. New YouGov polling shows that Labour voters are even less likely to support age verification on porn websites than Conservative or Liberal Democrat voters.

I helped draft Ofcom's regulatory guidance setting out how platforms should comply with the act's requirements on age verification. Because of the scope of the act, and a deliberate choice not to force tech platforms to adopt specific technologies, this guidance was broad and principles-based – if the regulator prescribed specific measures, it would be accused of authoritarianism. A principles-based approach is more sensible and future-proof, but it does allow tech companies to interpret the regulation poorly.

Despite these challenges, I am supportive of the principles of the act. As someone with progressive politics, I have always been deeply concerned about the impact of an unregulated online world. Bad news abounds: X allowing racist misinformation to spread in the name of 'free speech'; and children being radicalised or targeted by online sexual extortion.
It was clear to me that these regulations would start to move us away from a world in which tech billionaires could dress up self-serving libertarianism as lofty ideals. Instead, a culture war has erupted that is laden with misunderstanding, with every poor decision made by tech platforms being blamed on regulation. This strikes me as incredibly convenient for tech companies seeking to avoid accountability.

So what does the act actually do? In short, it requires online services to assess the risk of harm – whether illegal content such as child sexual abuse material or, in the case of services accessed by children, content such as porn or suicide promotion – and implement proportionate systems to reduce those risks.

It's also worth being clear about what isn't new. Tech companies have been moderating speech and taking down content they don't want on their platforms for years. However, they have done so based on opaque internal business priorities, rather than in response to proactive risk assessments.

Let's look at some examples. After the Christchurch terror attack in New Zealand, which was broadcast in a 17-minute Facebook Live post and shared widely by white supremacists, Facebook trained its AI to block violent live streams. More recently, after Trump's election, Meta overhauled its approach to content moderation and removed factchecking in the US, a move which its own oversight board has criticised as too hasty.

Rather than making decisions to remove content reactively, or in order to appease politicians, tech companies will now need to demonstrate they have taken reasonable steps to prevent this content from appearing in the first place. The act isn't about 'catching baddies', or taking down specific pieces of content. Where censorship has happened, such as the suppression of pro-Palestine speech, it was taking place long before the implementation of the Online Safety Act.
Where public interest content is being blocked as a result of the act, we should be interrogating platforms' risk assessments and decision-making processes, rather than repealing the legislation. Ofcom's new transparency powers make this achievable in a way that wasn't possible before.

Yes, there are some flaws with the act, and teething issues will persist. As someone who worked on Ofcom's guidance on age verification, even I am slightly confused by the way Spotify is checking users' ages. The widespread adoption of VPNs to circumvent age checks on porn sites is clearly something to think about carefully. Where should age assurance be implemented in a user journey? And who should be responsible for informing the public that many age assurance technologies delete all personal data once a user's age is confirmed, while some VPN providers sell their users' information to data brokers? But the response to these issues shouldn't be to repeal the Online Safety Act: it should be for platforms to hone their approach.

There is an argument that the problem ultimately lies with the business models of the tech industry, and that this kind of legislation will never be able to truly tackle that. The academic Shoshana Zuboff calls this 'surveillance capitalism': tech companies get us hooked through addictive design and extract huge amounts of our personal data in order to sell us hyper-targeted ads. The result is a society characterised by atomisation, alienation and the erosion of our attention spans. Because the easiest way to get us hooked is to show us extreme content, children are directed from fitness influencers to content promoting disordered eating. Add to this the fact that platforms are designed to make people expand their networks and spend as much time on them as possible, and you have a recipe for disaster. Again, it's a worthy critique.
But we live in a world where American tech companies hold more power than many nation states – and they have a president in the White House willing to start trade wars to defend their interests. So yes, let's look at drafting regulation that addresses addictive algorithms, and support alternative business models for tech platforms, such as data cooperatives. Let's continue to explore how best to provide children with age-appropriate experiences online, and think about how to get age verification right.

But while we're working on that, really serious harms are taking place online. We now have a sophisticated regulatory framework in the UK that forces tech platforms to assess risk and gives the public far greater transparency over their decision-making processes. We need critical engagement with the regulation, not cynicism. Let's not throw out the best tools we have.

George Billinge is a former Ofcom policy manager and CEO of the tech consultancy Illuminate Tech


The Guardian
UK Online Safety Act risks 'seriously infringing' free speech, says X
Elon Musk's X platform has said the UK's Online Safety Act (OSA) is at risk of 'seriously infringing' free speech as a row deepens over measures for protecting children from harmful content. The social media company said the act's 'laudable' intentions were being overshadowed by its aggressive implementation by the communications watchdog, Ofcom.

In a statement posted on the platform, X said: 'Many are now concerned that a plan ostensibly intended to keep children safe is at risk of seriously infringing on the public's right to free expression.' It added that the risk was not a surprise to the UK government because, by passing the OSA, lawmakers had made a 'conscientious decision' to increase censorship in the name of 'online safety'. 'It is fair to ask if UK citizens were equally aware of the trade-off being made,' said the statement.

The act, a bugbear of the political right on both sides of the Atlantic, has come under renewed scrutiny after new restrictions on under-18s accessing pornography and viewing content harmful to children came into force on 25 July. Musk, X's owner, said days after the rules came into force that the act's purpose was 'suppression of the people'. He also retweeted a petition calling for repeal of the act that has garnered more than 450,000 signatures.

X has been forced to age-restrict some content as a consequence, with the Reform UK party adding to the furore by pledging to repeal the act. Reform's commitment prompted the UK technology secretary, Peter Kyle, to accuse Nigel Farage of siding with the paedophile Jimmy Savile, a comment Farage described as 'so below the belt' and deserving of an apology.

Referring to Ofcom, X said regulators had taken a 'heavy-handed approach' to enforcing the act by 'rapidly increasing enforcement resources' and 'adding layers of bureaucratic oversight'. The statement said: 'The act's laudable intentions are at risk of being overshadowed by the breadth of its regulatory reach.
Without a more balanced, collaborative approach, free speech will suffer.'

X said it was compliant with the act, but warned that the threat of enforcement and fines – which in the case of social media platforms such as X could be as high as 10% of global turnover – could encourage censorship of legitimate content in order to avoid punishment.

The statement also mentioned plans to create a national internet intelligence investigations team to monitor social media for signs of anti-migrant disorder. X said the proposal may be positioned as a safety measure but 'it clearly goes far beyond that intent'. It said: 'This move has set off alarm bells for free speech advocates who characterise it as excessive and potentially restrictive. A balanced approach is the only way to protect individual liberties, encourage innovation and safeguard children.'

A spokesperson for Ofcom said the OSA contained provisions protecting freedom of speech. They said: 'The new rules require tech firms to tackle criminal content and prevent children from seeing defined types of material that's harmful to them. There is no requirement on them to restrict legal content for adult users. In fact, they must carefully consider how they protect users' rights to freedom of expression while keeping their users safe.'

The UK's Department for Science, Innovation and Technology has been approached for comment.