
From the US to Japan, stablecoins are causing a global financial rewrite
Written by Sanhita Chauriha
When Facebook (now Meta) launched its ill-fated stablecoin project, Libra, in 2019, central banks dismissed it as a corporate fantasy. Fast-forward to 2025. The same monetary authorities are now crafting legal blueprints for what is shaping up to be the next great leap in financial infrastructure: The regulation and integration of stablecoins. These cryptoassets, pegged to fiat currencies and often backed by real-world reserves, are no longer just liquidity tools for crypto traders. They are becoming foundational rails for payments, settlement, and programmable money. But as stablecoins inch closer to mass adoption, governments across the world are grappling with a new policy trilemma: How to encourage innovation, maintain financial stability, and preserve monetary sovereignty.
The US, in a rare bipartisan feat, passed the GENIUS Act, a sweeping federal bill designed to regulate fiat-backed stablecoins. Under its provisions, stablecoins must be backed 1:1 by high-quality liquid assets (HQLA), be redeemable on demand, and be subject to monthly reserve disclosures and anti-money laundering checks. Issuers with more than $10 billion in circulation are now federally overseen, while smaller ones can operate under state charters, provided those states meet minimum national standards. In effect, the US is laying the groundwork for a tokenised digital dollar while retaining oversight through traditional financial plumbing.
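The rule-set described above, full reserve backing, on-demand redemption, and a $10 billion supervision threshold, can be made concrete with a short sketch. This is purely illustrative: the class, field names, and figures are hypothetical and are not drawn from the statute's text.

```python
from dataclasses import dataclass

# Illustrative toy model of the 1:1 reserve rule and the $10bn
# federal-oversight threshold described in the article. All names
# and numbers are hypothetical, not taken from the GENIUS Act itself.

@dataclass
class ReserveReport:
    tokens_outstanding: float  # stablecoins in circulation, USD face value
    hqla_reserves: float       # high-quality liquid assets held, USD

    def reserve_ratio(self) -> float:
        # Ratio of reserves to circulating tokens; 1.0 or above is "fully backed".
        return self.hqla_reserves / self.tokens_outstanding

    def is_fully_backed(self) -> bool:
        # The 1:1 rule: every circulating token must be covered by reserves.
        return self.hqla_reserves >= self.tokens_outstanding

    def needs_federal_oversight(self, threshold: float = 10e9) -> bool:
        # Issuers above $10bn in circulation fall under federal supervision.
        return self.tokens_outstanding > threshold


report = ReserveReport(tokens_outstanding=12e9, hqla_reserves=12.3e9)
print(report.is_fully_backed())          # True: reserves cover circulation
print(report.needs_federal_oversight())  # True: above the $10bn threshold
```

A real attestation regime is of course far richer (asset eligibility, monthly third-party audits), but the compliance logic reduces to checks of this shape.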
Critics argue the GENIUS Act is a Trojan horse for financial incumbents. The law requires issuers either to be licensed financial institutions or to partner with one, effectively handing a regulatory advantage to major banks. Unsurprisingly, Bank of America's CEO Brian Moynihan recently announced that the bank is ready to issue its own stablecoin, following moves by JPMorgan (JPM Coin) and PayPal (PYUSD). If this trend continues, Wall Street may soon displace the DeFi startups that once pioneered this space. Nonetheless, such institutional entry brings maturity, deeper liquidity, and integration with the broader economy, traits necessary for stablecoins to scale beyond crypto-native applications.
Europe has taken a more technocratic path. The Markets in Crypto-Assets (MiCA) regulation, in force since late 2024, classifies stablecoins as either 'e-money tokens' or 'asset-referenced tokens.' Stablecoin issuers must meet capital requirements, submit whitepapers, disclose reserve asset composition, and adhere to redemption guarantees. 'Significant' stablecoins, those with large circulation or systemic reach, will face direct oversight by the European Banking Authority and may face caps on transaction volumes. MiCA aims to future-proof the euro's digital periphery while preventing stablecoins from competing directly with sovereign currency, a concern central banks share across continents.
The United Kingdom, post-Brexit, is scripting its own stablecoin strategy with a regulatory regime that will place fiat-backed stablecoins under the supervision of the Financial Conduct Authority (FCA) and the Bank of England. Interestingly, the UK proposes to exempt overseas issuers from full domestic compliance if their home jurisdictions maintain equivalent safeguards. This open-but-cautious model aims to position London as a magnet for global crypto-finance, without abandoning core prudential standards.
In Asia, innovation is swift but cautious. The Monetary Authority of Singapore (MAS) has finalised a comprehensive regulatory framework for stablecoins pegged to the Singapore dollar or any G10 currency. Issuers must ensure full reserve backing, fast redemption (within five business days), and high transparency. Only issuers that meet the MAS's standards may market their coins as 'MAS-regulated', a label likely to become a global credibility mark. Meanwhile, Hong Kong has also passed stablecoin legislation, taking effect in 2025, that limits issuance to licensed financial institutions. Already, major fintech players like Ant Group are applying for licences, eager to gain early-mover status in the city's evolving digital asset ecosystem.
Japan stands apart with perhaps the strictest regime: Only banks, fund-transfer firms, or trust companies can issue yen-pegged stablecoins. Amendments to the Payment Services Act in 2022 tightly regulate redemption, disclosure, and asset segregation, favouring stability over growth. While the market is small, Japan's emphasis on consumer protection and conservative financial norms reflects a broader regional wariness toward crypto's more volatile edges.
Meanwhile, the United Arab Emirates has emerged as the Gulf's most aggressive regulator of stablecoins. Its Virtual Assets Regulatory Authority (VARA) and the UAE central bank have mandated 1:1 reserve backing, monthly third-party audits, and strict anti-money laundering/combating the financing of terrorism (AML/CFT) protocols. Dubai, in particular, is branding itself as a digital finance hub. Its approach mirrors Singapore's in one key way: Credibility must be earned, not assumed.
What unites these regulatory initiatives despite differing geographies and philosophies is a growing consensus that stablecoins are no longer hypothetical. Their programmable nature makes them attractive for everything from cross-border settlements and tokenised trade finance to retail micropayments. The Bank for International Settlements, in its April 2024 paper, warned that more than 600 de-pegging events occurred in 2023, underscoring their fragility. Yet it also acknowledged that, with proper regulation, stablecoins could serve as complements to Central Bank Digital Currencies (CBDCs), not threats.
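The 'programmable' quality mentioned above simply means that payment logic can travel with the money itself, for instance, funds that release only when a delivery condition is met. A minimal, purely illustrative sketch (the escrow function and names are hypothetical, mimicking the conditional-release logic usually written as a smart contract):

```python
# Purely illustrative: a toy "programmable payment" in which a stablecoin
# transfer is released only once a delivery condition is confirmed,
# the kind of escrow logic used in tokenised trade finance.

def make_escrow(amount: float, payee: str):
    state = {"released": False}

    def release(delivery_confirmed: bool) -> str:
        # Funds move only if the programmed condition holds,
        # and only once, so the payment cannot be double-spent.
        if delivery_confirmed and not state["released"]:
            state["released"] = True
            return f"paid {amount} to {payee}"
        return "held"

    return release


pay = make_escrow(500.0, "exporter")
print(pay(False))  # "held": condition not yet met
print(pay(True))   # "paid 500.0 to exporter"
```

On a blockchain this logic would execute automatically and verifiably, which is what makes stablecoins attractive for cross-border settlement and micropayments.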
The stablecoin race is not merely a question of financial regulation. It is one of economic statecraft. The US sees it as a lever to maintain dollar dominance in a multipolar world. Europe views it as a way to secure financial autonomy in the age of digital platforms. Asia, meanwhile, seeks to modernise without destabilising. And the Gulf hopes to leapfrog into fintech relevance.
In short, stablecoins are forcing countries to rethink not just how money moves but who moves it, who regulates it, and to whom it ultimately belongs.
The writer is a technology lawyer. Views are personal