Sanctioned Russian crypto exchange suspends services as Tether blocks wallets

Reuters | March 6, 2025

MOSCOW, March 6 (Reuters) - Russian cryptocurrency exchange Garantex on Thursday said stablecoin Tether had blocked digital wallets on its platform holding more than 2.5 billion roubles ($28 million), forcing it to suspend operations days after coming under EU sanctions.
The European Union included Garantex in its 16th sanctions package against Russia over the conflict in Ukraine on February 24, accusing the crypto exchange of being closely associated with EU-sanctioned Russian banks and responsible for circumventing EU sanctions.
"We have bad news," Garantex said on Telegram. "Tether has entered the war against the Russian crypto market."
Tether did not immediately respond to a request for comment.
Garantex said it was temporarily suspending the provision of all services, including cryptocurrency withdrawals.
"We are fighting and will not give up," Garantex said. "Please note that all USDT held in Russian wallets is now under threat."
Deprived of access to the U.S. dollar and cut off from the SWIFT global payments network, some Russians have turned to cryptocurrencies to move money overseas, and the central bank has allowed businesses to use cryptocurrencies in global trade.
The United States called Garantex a "ransomware-enabling virtual currency exchange" when sanctioning the company in April 2022, accusing it of allowing its systems to be abused by illicit actors.
Russian lawmaker Anton Gorelkin accused Western countries of pursuing political goals and said it would not be the last time pressure is exerted on Russia's cryptocurrency infrastructure.
"To the investors who underestimated this risk, my condolences," Gorelkin wrote on Telegram on Thursday.
"But it is worth recognising that it is impossible to completely block this market for Russia," he said. "Cryptocurrencies will remain one of the most effective tools for circumventing sanctions, although USDT can be safely deleted from this list."
($1 = 89.2500 roubles)


Related Articles

How the UK could get dragged further into conflict in the Middle East

Scotsman | 41 minutes ago

The Israeli assault on Iran should come as no surprise - there was hardly a better moment to strike.

As the whole world knows by now, Israel launched a series of devastating attacks on Iran in the early hours of Friday, targeting nuclear facilities and the regime's leadership. Operation Rising Lion seems to have caught the Iranians wrong-footed.

Two hundred IAF warplanes were involved, and, most interestingly, a number of drones were launched by Israeli special forces from within Iran itself, echoing the Ukrainian drone attacks on Russian airbases a few days earlier. Initial battlefield damage assessment suggests that the attacks were highly successful, although the full details will probably take some time to emerge. Among the dead were Hossein Salami, head of the Islamic Revolutionary Guard Corps, Mohammad Bagheri, commander-in-chief of Iran's military, and several senior figures in Iran's nuclear programme.

At a stroke, Iran's senior military hierarchy has been eliminated. That Supreme Leader Ayatollah Ali Khamenei was not among them is only down to Israel, which probably thought his demise might be a step too far at this stage. The Israeli assault should come as no surprise to anyone.
Iran is the fount of all evil in the region, and the writing has been on the wall for some time. The recent IAEA censure of Iran's nuclear programme and the withdrawal of some US personnel from the Middle East were dead giveaways. Israel made no secret of its assessment that Iran is currently weak, politically and militarily, and there was hardly a better moment to strike its nuclear facilities and leadership. And so it has come to pass.

Where the USA goes, the UK will surely follow

This episode in the long-running conflict between Israel and Iran looks like it might be a longer confrontation than previous ones. If it continues for any length of time, and if Iran carries through its threat to attack US bases as well as Israel in retaliation, then it seems to me inevitable that the USA will be drawn in. And where the USA goes, the UK will surely follow.

During the last tit-for-tat exchanges between the two Middle Eastern states, RAF Typhoons were involved in knocking down incoming Iranian drones to defend Israel. I'd be most surprised if the same thing doesn't happen this time around.

In any case, and notwithstanding what David Lammy may say, the UK is already involved. Britain's retained bases in Cyprus in the eastern Mediterranean include RAF Akrotiri, an important link in the communications chain to Israel, used by the UK, USA and others for many years.

Plus it just so happens that the Royal Navy-led CSG25 task force, including the aircraft carrier HMS Prince of Wales and her complement of F-35 fighter jets, is not so far away in the Indian Ocean and within range of Iran. And the Americans have long-range bombers on Diego Garcia.
To survive, Iran needs to stop being intransigent over its nuclear programme and compromise, or else face the very real prospect of defeat and regime change.

The euro will get a new member next year – and not everyone is pleased

The Independent | 2 hours ago

The EU has given the green light for Bulgaria to join the euro from January 1, 2026. This huge step towards European integration comes just six months after Bulgaria became a full member of the Schengen area, within which people can move freely across borders.

However, while rapprochement moves apace at the top level, euroscepticism shows little sign of abating at the grassroots level in Bulgaria or in national party politics. Protests calling for Bulgaria to stick with its national currency have sprung up in the capital city, Sofia, and in several towns around the country. A May poll showed that 38 per cent of Bulgarians were against the euro and only 21 per cent agreed that the switch should go ahead in January; others wanted to wait a few years. In a similar poll in January, 40 per cent of respondents said they never wanted Bulgaria to join the euro.

Anti-euro protests tend to be associated with Bulgarian nationalist political parties. The most influential of these, Vazrazhdane, has become increasingly popular, winning 13.63 per cent in the most recent parliamentary elections in October 2024, up from just 2.45 per cent in elections held in April 2021.

Bulgaria joined the European Union in 2007. When, in December 2021, I interviewed a former spokesman for the political party NDSV (National Movement Simeon II), which was in government from 2001 to 2009, they said Bulgarians had very high expectations ahead of becoming part of the bloc. They had thought it would take just a few years for Bulgaria to become as economically developed as Switzerland, and that their standard of living would soar. The dream was for Bulgaria to become the so-called 'Switzerland of the Balkans', as both countries have a similar population size and a similar tourist appeal. The EU has channelled €16.3 billion into Bulgaria since the country joined, particularly for infrastructure development.
However, a year of fieldwork has shown me that Sofia has been the main beneficiary of this investment; small municipalities and rural communities have not felt the benefit as clearly. Of the €16.3 billion, Sofia received €3.1 billion and Plovdiv €0.8 billion. While Sofia has gained new metro lines in recent years, citizens in some municipalities still struggle to secure basic public services: nearly 15 per cent of the country's population lacks a regular supply of good-quality water. The imagined 'European' standard of life has not yet reached small municipalities and rural areas. Europe still feels far away.

Becoming part of the EU has given Bulgarian citizens opportunities to work and live in other European countries. Official figures show 861,054 Bulgarian citizens lived in other EU countries in 2022, and a recent survey found that 74 per cent of young people in Bulgaria are considering, more or less seriously, the idea of emigrating. However, the trend of young people working abroad has contributed to a brain drain and to Bulgaria's shrinking population, which fell from 7.68 million in 2006, the year before it joined the EU, to 6.44 million in 2024.

According to a research analyst at a Sofia-based non-governmental organisation whom I interviewed recently, many Bulgarian parents hope that their children working abroad will return to work in Bulgaria, because jobs for migrants abroad tend not to be high-skilled. Accession to the eurozone is more likely to benefit Sofia-based people who do business abroad than older people living local lives in small municipalities or rural areas. Younger and working people have already been shown to be the ones who benefited most from European integration in Bulgaria and Romania in the first place. That said, support for EU membership has been rising recently.
Holding a coalition together

Despite euroscepticism, European integration is one of the few issues that unites Bulgaria's fragile coalition government, although not all political parties agree with joining the eurozone. Bulgaria held seven parliamentary elections between April 2021 and October 2024, so it has been a surprise that, amid the political turmoil, the coalition government formed in October 2024 has survived. An important source of motivation here is unity on the question of Europe. But with mixed results so far and meaningful levels of opposition to joining the euro, Bulgaria's government will have to be careful about the potential for eurosceptic movements to grow as they have in several other EU nations.

Does the UK need an AI Act?

New Statesman | 2 hours ago

Britain finds itself at a crossroads with AI. The stakes are heightened by the fact that our closest allies appear to be on diverging paths. Last year, the EU passed its own AI act, seeking controlled consensus on how to regulate new technologies. The US, meanwhile, is pursuing a lighter-touch approach to AI, perhaps reflecting the financial rewards its Big Tech companies could lose if stifled by regulation.

Prime Minister Keir Starmer and Science Secretary Peter Kyle seem to be mirroring the US strategy. At the January launch of the government's AI Opportunities Action Plan, Kyle said he wanted Britain to 'shape the AI revolution rather than wait to see how it shapes us'. Many have called for the government to bring forward an AI act to lay the foundation for such leadership. Does Britain need one, and if so, how stringent should it be? Spotlight reached out to sectoral experts to give their views.

'An AI act would signal that Britain is serious about making technology work for people'
Gina Neff – Professor of responsible AI at Queen Mary University of London

This government is betting big on AI, making promises about turbo-charging innovation and investment. But regulatory safeguards are fragmented, public trust remains uncertain, and real accountability is unclear. Charging forward without a clear plan means AI will be parachuted into industries, workplaces and public services with little assurance that it will serve the people who rely on it. An AI act would signal that Britain is serious about making AI work for people, investing in the places that matter for the country, and harnessing the power of AI for good. An AI act would create oversight where there is ambiguity, insisting on transparency and accountability. It could provide the foundation to unlock innovation for public benefit by answering key questions: who is liable when AI fails? When AI systems discriminate? When AI is weaponised?
Starmer's government borrows from Silicon Valley's logic, positioning AI regulation as the opposite of innovation. Such logic ignores a crucial fact: the transition to AI will require a major leap for workers, communities and societies. Government must step in where markets won't or can't: levelling the playing field so powerful companies do not dominate our future, investing in education and skills so more people can benefit from opportunities, ensuring today's laws and regulations continue to be fit for purpose, and building digital futures with companies and civil society.

Under Conservative governments, the UK took a 'proportionate', 'pro-innovation' approach outlined in the AI White Paper, suggesting responsibility for safe and trustworthy AI rests with the country's existing 90 regulators. That was always envisioned to be a wait-and-see stop-gap before new measures. The AI Opportunities Action Plan sketches out support for the UK's AI industry, but does not go far enough on how to manage the social, cultural and economic transitions that we face. With worries about the impact on entry-level jobs, on our children, on information integrity, on the environment, on the UK's creative sector, on growing inequality, and on fair yet efficient public services, there is a long list of jobs now for government to do. Lack of action will only create confusion for businesses and uncertainty about rights and protections for workers, consumers and citizens. Without an AI act to help shore it up, the good work that is already happening in the UK won't be able to fully power benefits for everyone.

An AI act must go beyond data protections to establish transparency requirements and accountability provisions, outline safeguards for intellectual property, and set clearer rules around, and recourse for, automated decision-making. These are responsibilities that tech companies are largely evading. Who can blame them?
They have cornered global markets and will gain handsomely from our new investments in AI. A UK AI act could empower regulators with stronger enforcement tools to right the imbalance of power between British society and the world's biggest players in this sector. An AI act would give real structure to this country's ambitions for AI. The UK needs clarity on what AI can and cannot do, and that won't come from piecemeal guidance; it will come from leaders with vision helping us build the society that we all so rightly deserve.

'The government's hesitancy to regulate seems borne out of the fear of hobbling a potential cash cow'
Marina Jirotka and Keri Grieman – Professor of human-centred computing at the University of Oxford; research associate, RoboTIPS project

The EU AI act entered into force not even a year ago, and there is already serious discussion of whether to reduce enforcement and simplify requirements on small and medium enterprises in order to ease burdens on companies in a competitive international marketplace. The US House of Representatives has narrowly approved a bill that blocks states from enforcing AI regulations for ten years, while forwarding one bipartisan federal act that criminalises AI deepfakes but does not address AI more broadly. Large language model updates are rolled out faster than the speed of subscription-model billing. AI is invading every corner of our lives, from messaging apps to autonomous vehicles, some used to excellent effect, others to endless annoyance.

The British government has chosen a policy of investment in AI: investing in the industry itself, in skill-building education and in attracting foreign talent. Its hesitancy to regulate seems borne out of the fear of hobbling a potential cash cow.
However, this leaves the regulatory burden on individual sectors: piecemeal, often siloed, and without enough regulatory AI experts to go around, with calls coming from inside the house, the companies themselves, for a liability system. The UK needs clarity: for industry, for public trust and for the prevention of harm. There are problems that transcend individual industries: bias, discrimination, over-hype, environmental impact, and intellectual property and privacy concerns, to name a few. A regulator is one way to tackle these issues, but it can have varying levels of impact depending on its structure: coordinating between industry bodies or taking a more direct role; working directly with companies or at arm's length; cooperative investigation or more bare-bones enforcement. Whatever the UK decides to do, it needs to provide regulatory clarity sooner rather than later: the longer the wait, the more we fail to address potential harms, and the further we fall behind in market share as companies choose not to bet the bank on a smaller market with an unclear regulatory regime.

'Growth for whom? Efficiency to what end?'
Baroness Beeban Kidron – House of Lords member and digital rights activist

All new technology ends up being regulated. On arrival it is greeted with awe. Claims are made for its transformative nature and exceptionality. Early proponents build empires and make fortunes. But sooner or later, those with responsibility for our collective good have a say. So here we are again with AI. Of course we will regulate, but it seems that the political will has been captured. Those with their hands on the technology are dictating the terms, terms that waver between nothing meaningful and almost nothing at all, while government valorises growth and efficiency without asking: growth for whom? Efficiency to what end?
In practical terms, an AI act should not seek to regulate AI as a technology but rather regulate its use across domains: in health (where it shows enormous benefit); in education (where its claims outweigh its delivery by an unacceptable margin); in transport (where insurers are calling the shots); and in information distribution (where its deliberate manipulation, unintended hallucination and careless spread damage more than they explain). If we want AI to be a positive tool for humanity then it must be subject to the requirements of common goods.

But in a world of excess capital restlessly seeking the next big thing, governments bent over to do the bidding of the already-too-powerful, and lobbyists who simultaneously claim it is too soon and too late, we see the waning of political will. Regulation can be good or bad, but we are in troubling times where the limit of our ambition is to do what we can, not what we should, which gives regulation a bad name. And governments, including our own, legislate to hardwire the benefits of AI into the ever-increasing concentration of power and wealth of Silicon Valley.

Tech companies, AI or otherwise, are businesses. Why not subject them to corporate liability, consumer rights, product safety, anti-trust laws, and human and children's rights? Why exempt them from tax, or from paying the full whack for their cost to planet and society? It's not too soon and it is not too late, but it needs independence and imagination to make AI a public good, not wilful blindness to an old-school playbook of obfuscation and denial while power and money accumulate. Yes, we need regulation, but we also need political will.

'The real test of a bill will be if it credibly responds to the growing list of everyday harms we see'
Michael Birtwistle – Associate director, Ada Lovelace Institute

AI is everywhere: in our workplaces, public services, search engines, social media and messaging apps.
The risks of these systems are made clear in the government's International AI Safety Report. Alongside long-standing harms like discrimination and 'hallucination' (where AI confidently generates false information), systemic harms such as job displacement, environmental costs and the capacity of newer 'AI agents' to misinform and manipulate are rapidly coming to the fore. But there is currently no holistic body of law governing AI in the UK. Instead, developers, deployers and users must comply with a fragmented patchwork of rules, with many risks going unmanaged. Crucially, our current approach disincentivises those building AI systems from taking responsibility for harms they are best placed to address; regulation tends to look only at downstream users.

Our recent national survey showed 88 per cent of people believe it is important that the government or regulators have powers to stop the use of a harmful AI product. Yet more than two years on from the Bletchley summit and its commitments, it is AI developers who decide whether to release unsafe models, according to criteria they set themselves. The government's own market research has said this 'wild west' is lowering business confidence to adopt. These challenges can only be addressed by legislation, and now is a crucial time to act. The government has announced an AI bill, but its stated ambition (regulating 'tomorrow's models not today's') is extremely narrow. For those providing scrutiny in parliament, the press and beyond, the real test of a bill will be whether it credibly responds to the growing list of everyday harms we see today, such as bias, misinformation, fraud and malicious content, and whether it equips government to manage them upstream at source.

'There's a temptation to regulate AI with sweeping, catch-all Bills. That impulse is mistaken'
Jakob Mökander – Director of science and technology policy, Tony Blair Institute for Global Change

As AI transforms everything from finance to healthcare, the question is not whether to regulate its design and use, but how to do it well. Rapid advances in AI offer exciting opportunities to boost economic growth and improve social outcomes. However, AI poses risks, from information security to surveillance and algorithmic discrimination. Managing these risks will be key to building public trust and harnessing the benefits.

Globally, there is an understandable temptation to regulate AI with sweeping, catch-all bills that signal seriousness and ease public concern. However, this impulse is mistaken. Horizontal legislation is a blunt tool that struggles to address the many different risks AI poses in various real-world contexts. It could also end up imposing overly burdensome restrictions even on safe and socially beneficial use cases. If the UK government is serious about implementing the AI Opportunities Action Plan, it should continue its pro-innovation, sector-specific approach: steering a middle ground between the overly broad EU AI Act and the US's increasingly deregulatory approach. This way, supporting innovation can go hand in hand with protecting consumer interests, human rights and national security.

Regulators like the CMA, FCA, Ofcom and HSE are already wrestling with questions related to AI-driven market concentration, misinformation and bias in their respective domains. Rather than pursuing a broad AI bill, the government should continue to strengthen these watchdogs' technical muscle, funding and legal tools. The £10m already allocated to this effort is welcome, but it should go much further. Of course, some specific security concerns may be insufficiently covered by existing regulation.

To address this gap, the government's proposal for a narrow AI bill to ensure the safety of frontier AI models is a good starting point. The AI Security Institute has a crucial role to play in this, not as a regulator, but as an independent centre that conducts research, develops standards and evaluates models. Its long-term legitimacy is best served by clear independence from both government and industry, rather than the distraction of enforcement powers. Britain has an opportunity to set a distinctive global example: pro-innovation, sector-specific, and grounded in actual use cases. Now is the time to stay focused and continue forging that path.

This article first appeared in our Spotlight on Technology supplement of 13 June 2025.
