
Are we human or are we spammer?
Have you stopped answering the phone to unknown callers? Do you no longer click links in texts? Or have you, at some point, failed to convince a website's anti-spam filter that you are in fact human?
Perhaps you answered 'yes' to all three, 'no' to all three, or a mixture. For my part, the possibility that a withheld number might carry an invitation to appear on television means my vanity won't let me give that one up. I'll take my chances with the fraudsters.
Others have their own bespoke arrangements – calling back unknown numbers, for example – to combat fraud or impersonation.
All these habits have the same cause. It is getting harder and harder to prove someone is who they say they are online. Both businesses and households must take ever-greater steps to prove that they, and the people they are dealing with, are really 'people' at all.
Advances in computing power, in generative AI and in machine learning all give companies and the state greater speed in responding to attacks of one kind or another, but they also give more powerful tools to spammers, fraudsters and bad actors. Our trust in technology – in the texts we receive, the attachments we open, the forms we fill in – is slowly withering as a result.
I haven't yet seen a fake news clip or AI-generated video that is good enough to fool a keen observer. (My favourite tells are the ones trained on data so lousy with sexualised images that they are immediately ridiculous. I recently saw a 'war correspondent' supposedly reporting for 'CNN' in a plunging négligée.)
But two years ago, ChatGPT wrote only about as well as a student struggling to earn their high school diploma. Now it seems to write at more of an undergraduate level. It won't be long before generative AI can produce video that is indistinguishable from reality even to the most sophisticated viewer.
This same ever-improving AI means that things researchers thought impractical a year ago are now possible. Government, in particular, can use it to get faster and better at making decisions and handling data.
But the technology will bring casualties in its wake as well, and one of them may be ecommerce as we know it.
For financial transactions to be both safe and practical online, it is essential to be able to verify both who you are and who the person you are doing business with is. As machines get smarter than humans – or at least smart enough to fake being human – verification becomes harder and the job of fraudsters easier. The various tests to 'prove you're a human' online are getting more difficult precisely because computers are getting smarter.
Fooling
The problem is that the more barriers you have to erect, the more likely it is that people will grow used to working around them – and the more effective fraudsters and other bad actors will become at fooling them.
Consumers will trust online and digital transactions less and businesses will behave in riskier ways as cybersecurity asks more and more of employees.
The costs involved can be very large.
Marks & Spencer is unusually open about the hit – both financial, in the shape of a £300 million blow to profits, and logistical, in that trading is still affected – from the cyber attack it has suffered. Other businesses and organisations, without customers or shareholders to mollify, often treat these breaches as a personal embarrassment.
And where the problem does hit the headlines, the consequences linger long after the coverage fades. The British Library has not fully recovered from the ransomware attack it experienced two years ago. Hackney Council, in east London, is still feeling the effects of a cyber attack half a decade ago.
How can the problem be solved? Some in cybersecurity fear that the long-term answer is, 'It can't'.
History teaches us that the only way knowledge can be unlearnt is through a societal collapse of a kind no one should wish to live through. So simply prohibiting the use of new, smarter machines is a non-starter. The same technological advances that allow us to improve productivity in the regular economy also make us more vulnerable to cyber attack and impersonation, and make fake images and video ever harder to distinguish from the real world.
What should we do instead? As smart machines do more and more work in research, bureaucracy and design, one solution to the 'verification problem' may be that anything that requires peer-to-peer checking increasingly returns to face-to-face encounters. The next wave of jobs may not be as cyber experts, but as bank tellers. – Copyright The Financial Times Limited 2025

Kraken, a crypto trading exchange, has secured authorisation from the Central Bank of Ireland under new EU regulations, which the company said would allow it to expand more quickly across the Europe. It is the first licence that the Irish watchdog has issued under the EU's Markets in Crypto-Assets Regulation (Mica), which came into effect last year. Kraken already held so-called virtual asset service provider (Vasp) registrations in Ireland and a number of other European markets, including, Belgium, France, Italy, the Netherlands, Poland and Spain. 'Securing a license from the Central Bank of Ireland (CBI), with its long heritage and experience as a rigorous financial regulator, isn't just about compliance. It's a powerful signal of Kraken's commitment to expanding the crypto ecosystem through responsible innovation,' said Arjun Sethi, co-chief executive of the company in a statement. READ MORE 'Over the past several years, our team has worked tirelessly to meet the CBI's gold standard regulatory expectations. This license reflects that effort and places us in a strong position to expand our product offering, grow our institutional and retail client base, and deliver secure, accessible, and fully regulated crypto services to millions more people across the EU.' The development bucks a recent trend. Gemini, the cryptocurrency exchange founded by the US billionaire Winklevoss twins, switched its headquarters from Ireland to Malta earlier this year citing a better environment for 'innovation among fintech and digital assets'. Last week, Coinbase, one of the world's biggest crypto marketplaces, revealed it had switched its European regulatory hub from Ireland to Luxembourg, where it has a Mica licence. Central Bank governor Gabriel Makhouf has long taken a sceptical view of crypto assets, saying most explicitly in a blog post two years ago that it 'might be more accurate' to describe them as 'Ponzi schemes' rather than investments. Mica aims to bring crypto under the same regulatory umbrella as traditional finance, but some fear that uneven enforcement across the EU could undermine its goals. France's financial markets regulator has publicly warned that the European Securities and Markets Authority's (Esma) lack of direct authority could lead to a 'regulatory race to the bottom'. Kraken said that its Mica authorisation, along with European licences under the markets in financial instruments directive (Mifid) and as an electronic money institution (Emi) will enable it extend its regulated offering to millions of clients across the EU. 'Together, these licenses support significant growth opportunities across retail, professional, and institutional client segments, including in spot trading, derivatives and payments,' it said.