Latest news with #AgeCheckCertificationScheme


The Guardian
2 days ago
- Politics
- The Guardian
Key stakeholders in Australia's social media age assurance trial frozen out amid media leaks and resignations
The organisation behind the age assurance technology trial that will inform how to keep under-16s off social media has frozen out key stakeholders amid media leaks and resignations of two members. The trial's stakeholder engagement lead, Iain Corby, has also downplayed reporting about inaccuracies in the facial age estimation technology – one of the technologies tested in the trial – arguing that it can still be used even if it is out by seven years.

The $6.5m age assurance technology trial, run by the UK-based Age Check Certification Scheme (ACCS), tested various types of technology that could be used by social media platforms and adult websites to keep out under-16s or under-18s, respectively, when Australia's under-16s social media ban comes into force in December. The project provided its final report to the communications minister, Anika Wells, at the start of August. The final report is expected to run to 10 volumes and 2,500 pages.

However, the stakeholder advisory board for the trial – which comprises tech companies, child safety advocates, academics and privacy advocates – may not see the final report until Wells releases it publicly in the coming weeks. Two sources close to the board told Guardian Australia the board was not expected to be provided with a copy before then due to leak concerns.

When the trial was announced, the project plan noted that transparency was key to ensuring public trust in the project. 'The programme needs to be completed with transparency and ensuring the credibility and confidence of participants, the commissioning department and the Australian public,' the plan stated.

Initially, detailed minutes for the stakeholder meetings were published online, outlining disagreements and concerns raised by those involved. But the last several meeting minutes have not been posted, and that is expected to continue for the final meeting as the report is released. The group behind the trial did not publicise that the report had been handed to government, apart from a blog post on the trial's website.

Two members of the advisory board have also quit. Guardian Australia confirmed that the Electronic Frontiers Australia chair, John Pane, resigned from the board last week, following another resignation reported earlier by Crikey. In a statement issued last week, Pane criticised the preliminary report findings from June – where the project team claimed that age assurance technology could be 'private, robust and effective' – as 'strong on hype and rhetoric, and difficult to reconcile with the evidence'. 'These political talking points seem to be a case of 'selling the sizzle and not the steak' – or perhaps even 'privacy washing',' Pane said. He argued that the assessment of some vendors' privacy practices amounted to checking whether the vendor had a privacy policy, and was a 'tick-box compliance' exercise. Pane also said the trial organisers had not confirmed whether the vendors who participated in the trial will permanently de-identify all personal data collected from test subjects.

Tim Levy, the managing director of children's safety technology company Qoria, resigned from the trial earlier this year. Levy said the veracity of the conclusions of the interim report was 'not going to match community expectations and I believe my team of 600 dedicated cybersafety professionals would not like us to be associated with such an unsafe report'.
ACCS's chief executive, Tony Allen, said Pane's contribution and the work of the board were welcome, but that all of the points raised 'have been addressed in the full report – and in some considerable depth'. 'It is partly because of those points that it is taking some time to prepare the report (and perhaps more so, the supporting materials) for publication.'

Allen said all data had been anonymised and personally identifiable information deleted. He said the trial continues to engage with the board and is preparing for its next meeting. Allen said a gap in publication of meeting minutes was 'no conspiracy' but a 'factor of preparation for publication, which takes some time'.

Corby, who is responsible for stakeholder engagement for the trial and is also the executive director of the Age Verification Providers Association, told a podcast published on the industry research and consultancy website Biometric Update last month that people 'need to be patient and get the full 10 volumes [of the report] in the public domain, and then it will be a lot clearer what the trial has found'. He said the report will be a 'bible of data' that will be 'quoted around the world'. '[The report] is albeit done by Age Verification Certification Scheme, who are known in the sector, but with very close scrutiny from an advisory board and ethics panel, the government themselves in Australia, [and] Prof Toby Walsh providing independent review of the evaluation approach,' Corby said. 'So it's been done with a lot of discipline around its independence and validity.'

In June, the ABC reported that tests of facial age estimation technology had estimated a 16-year-old to be as old as 37. Corby dismissed this report, arguing that errors in age estimation do not undermine the whole project of age assurance. 'You're always going to have what we call a buffer age, and that might be three years or it might need to be five. Or for one provider, it might be three, and for a provider with a poorer quality algorithm, it might be seven in order to achieve the same level of accuracy overall,' he said on the podcast. 'But that doesn't mean to say you have to give up on the social media minimum age bill in Australia because one particular category of technology doesn't give you an exact answer.'

Corby told Crikey this week the trial would not comment until the final report is released.
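To make the 'buffer age' idea Corby describes concrete, here is a minimal illustrative sketch. The function name, the seven-year margin and the fallback to another assurance method are assumptions made for this example only; they are not drawn from the trial's report or any vendor's implementation.

```python
# Hypothetical sketch of a "buffer age" check, not taken from the trial.
# If an algorithm can be wrong by up to `BUFFER_YEARS`, a platform enforcing a
# minimum age of 16 only trusts face-scan estimates well clear of that margin;
# anyone in the ambiguous band is sent to a stronger check (e.g. an ID document).

MINIMUM_AGE = 16    # legal minimum for holding an account
BUFFER_YEARS = 7    # assumed worst-case error for a weaker algorithm

def gate_by_estimated_age(estimated_age: float) -> str:
    """Return the action for a user given a facial age estimate."""
    if estimated_age >= MINIMUM_AGE + BUFFER_YEARS:
        return "allow"      # clearly above the threshold even with maximum error
    if estimated_age < MINIMUM_AGE - BUFFER_YEARS:
        return "block"      # clearly below the threshold even with maximum error
    return "escalate"       # ambiguous band: require another assurance method

if __name__ == "__main__":
    for age in (8, 12, 16, 20, 23, 30):
        print(age, gate_by_estimated_age(age))
```

The wider the buffer, the more people fall into the ambiguous band and need a second check, which is the accuracy-versus-friction trade-off Corby alludes to when comparing three-, five- and seven-year margins.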


Hans India
23-06-2025
- Business
- Hans India
Enforcing teen social media ban is 'effective' – but that claim is at odds with evidence
Technologies to enforce the Australian government's social media ban for under-16s are 'private, robust and effective'. That's according to the preliminary findings of a federal government-commissioned trial that has nearly finished testing them.

The findings may give the government greater confidence to forge ahead with the ban, despite a suite of expert criticism. They might also alleviate some of the concerns of the Australian population about the privacy and security implications of the ban, which is due to begin in December. For example, a report based on a survey of nearly 4,000 people, released by the government earlier this week, found nine out of ten people support the idea of a ban. It also found that a large number of people were 'very concerned' about how the ban would be implemented. Nearly 80 per cent of respondents had privacy and security concerns, while roughly half had concerns about age assurance accuracy and government oversight.

The trial's preliminary findings paint a rosy picture of the potential for available technologies to check people's ages. However, they contain very little detail about specific technologies, and appear to be at odds with what we know about age-assurance technology from other sources.

From facial to hand-movement recognition

The social media ban for under-16s was legislated in December 2024. A last-minute amendment to the law requires technology companies to provide 'alternative age assurance methods' for account holders to confirm their age, rather than relying only on government-issued ID. The Australian government commissioned an independent trial to evaluate the 'effectiveness, maturity, and readiness for use' of these alternative methods.

The trial is being led by the Age Check Certification Scheme – a company based in the United Kingdom that specialises in testing and certifying identity verification systems. It includes 53 vendors offering a range of age assurance technologies to estimate people's ages, using techniques like facial recognition and hand-movement recognition.

According to the preliminary findings of the trial, 'age assurance can be done in Australia'. The trial's project director, Tony Allen, said 'there are no significant technological barriers' to assuring people's ages online. He added the solutions are 'technically feasible, can be integrated flexibly into existing services and can support the safety and rights of children online'. However, these claims are hard to square with other evidence.

High error rates

A day before the survey results were released, the ABC reported that the trial found face-scanning technologies 'repeatedly misidentified' children as young as 15 as being in their 20s and 30s. These tools could only guess children's ages 'within an 18-month range in 85 per cent of cases'. This means a 14-year-old might gain access to a social media account, while a 17-year-old might be blocked.

This is in line with results of global trials of face-scanning technologies conducted over more than a decade. An ongoing series of studies of age estimation technology by the United States' National Institute of Standards and Technology shows that algorithms 'fail significantly when attempting to differentiate minors' of various ages. The tests also show that error rates are higher for young women compared to young men, and higher for people with darker skin tones. These studies show that even the best age-estimation software currently available – Yoti – has an average error of 1.0 years.
Other software options mistake someone's age by 3.1 years on average. This means that, at best, a 16-year-old might be estimated to be 15 or 17 years old; at worst, they could be seen to be 13 or 19 years of age. These error rates mean a significant number of under-16 children could access social media accounts despite a ban being in place, while some people over 16 could be blocked.

Yoti also explains that businesses needing to check exact ages (such as 18) can set higher age thresholds (such as 25), so fewer people under 18 get through the age check. This approach would be similar to that taken in Australia's retail liquor sector, where sales staff verify ID for anyone who appears to be under the age of 25. However, many young people lack the government-issued ID required for an additional age check.

It's also worth remembering that in August 2023, the Australian government acknowledged that the age assurance technology market was 'immature' and could not meet key requirements, such as working reliably without circumvention and balancing privacy and security.

Many questions remain unanswered

We don't yet know exactly what methods the platforms will use to verify account holders' ages. While face-scanning technologies are often discussed, platforms could use other methods to confirm age. The government trial also tested voice and hand movements to guess young people's ages. But those methods also have accuracy issues.

And it's not yet clear what recourse people will have if their age is misidentified. Will parents be able to complain if children under 16 gain access to accounts, despite restrictions? Will older Australians who are incorrectly blocked be able to appeal? If so, to whom?

There are other outstanding questions. What's stopping someone who is under 16 from getting someone over 16 to set up an account on their behalf? To mitigate this risk, the government might require all social media users to verify their age at regular intervals.

It's also unclear what level of age estimation error the government may be willing to accept in implementing a social media ban. The legislation says technology companies must demonstrate they have taken 'reasonable steps' to prevent under-16s from holding social media accounts. What is considered 'reasonable' is yet to be clearly defined.

Australians will have to wait until later this year for the full results of the government's trial to be released, and to know how technology companies will respond. With less than six months until the ban comes into effect, social media users still don't have all the answers they need.

(The writer is associated with RMIT University)
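As a rough way of picturing what the error figures above imply, the following simulation assumes estimation errors are normally distributed around the true age with the reported 3.1-year average absolute error. This is a simplification made only for illustration: real errors are biased and vary by age, gender and skin tone, as the NIST studies show, and the cutoffs here are hypothetical.

```python
# Hypothetical illustration of what an "average error of 3.1 years" can mean.
# Assumes zero-mean normal errors; for a zero-mean normal distribution,
# mean absolute error = sigma * sqrt(2/pi), so sigma = 3.1 * sqrt(pi/2) ~ 3.9.
# None of these figures or cutoffs come from the trial itself.

import math
import random

MEAN_ABS_ERROR = 3.1                              # reported average error, in years
SIGMA = MEAN_ABS_ERROR * math.sqrt(math.pi / 2)   # implied std dev under the assumption

def share_passing(true_age: float, cutoff: float, trials: int = 100_000) -> float:
    """Fraction of people of `true_age` whose estimated age clears `cutoff`."""
    random.seed(0)
    passed = sum(
        1 for _ in range(trials)
        if true_age + random.gauss(0.0, SIGMA) >= cutoff
    )
    return passed / trials

if __name__ == "__main__":
    # A 15-year-old against a plain cutoff of 16, and against a higher cutoff of 25.
    print(f"15-year-old passing a 16+ check: {share_passing(15, 16):.0%}")
    print(f"15-year-old passing a 25+ check: {share_passing(15, 25):.1%}")
    # An 18-year-old wrongly turned away by the higher cutoff.
    print(f"18-year-old blocked by a 25+ check: {1 - share_passing(18, 25):.0%}")
```

Under that assumption, roughly four in ten 15-year-olds would clear a plain 16+ cutoff, while a 25+ cutoff keeps nearly all of them out at the cost of sending most 18-year-olds to a document check they may not be able to satisfy.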


AsiaOne
21-06-2025
- Business
- AsiaOne
Australia social media teen ban software trial organisers say the tech works
SYDNEY - Some age-checking applications collect too much data and no product works 100 per cent of the time, but using software to enforce a teenage social media ban can work in Australia, the head of the world's biggest trial of the technology said on Friday (June 20).

The view from the government-commissioned Age Assurance Technology Trial of more than 1,000 Australian school students and hundreds of adults is a boost to the country's plan to keep under-16s off social media. From December, in a world-first ban, companies like Facebook and Instagram owner Meta, Snapchat and TikTok must prove they are taking reasonable steps to block young people from their platforms or face a fine of up to A$49.5 million (S$41 million).

Since the Australian government announced the legislation last year, child protection advocates, tech industry groups and children themselves have questioned whether the ban can be enforced due to workarounds like Virtual Private Networks, which obscure an internet user's location.

"Age assurance can be done in Australia privately, efficiently and effectively," said Tony Allen, CEO of the Age Check Certification Scheme, the UK-based organisation overseeing the Australian trial. The trial found "no significant tech barriers" to rolling out a software-based scheme in Australia, although there was "no one-size-fits-all solution, and no solution that worked perfectly in all deployments," Allen added in an online presentation.

Allen noted that some age-assurance software firms "don't really know at this stage what data they may need to be able to support law enforcement and regulators in the future". "There's a risk there that they could be inadvertently over-collecting information that wouldn't be used or needed."

Organisers of the trial, which concluded earlier this month, gave no data findings and offered only a broad overview which did not name individual products. They will deliver a report to the government next month, which officials have said will inform an industry consultation ahead of the December deadline.

A spokesperson for the office of the eSafety Commissioner, which will advise the government on how to implement the ban, said the preliminary findings were a "useful indication of the likely outcomes from the trial". "We are pleased to see the trial suggests that age assurance technologies, when deployed the right way and likely in conjunction with other techniques and methods, can be private, robust and effective," the spokesperson said.

The Australian ban is being watched closely around the world, with several governments exploring ways to limit children's exposure to social media.


Express Tribune
20-06-2025
- Business
- Express Tribune
Teen social media ban clears first hurdle in Australia
Some age-checking applications collect too much data and no product works 100% of the time, but using software to enforce a teenage social media ban can work in Australia, the head of the world's biggest trial of the technology said on Friday.

The view from the government-commissioned Age Assurance Technology Trial of more than 1,000 Australian school students and hundreds of adults is a boost to the country's plan to keep under-16s off social media. From December, in a world-first ban, companies like Facebook and Instagram owner Meta, Snapchat, and TikTok must prove they are taking reasonable steps to block young people from their platforms or face a fine of up to A$49.5 million ($32 million).

Since the Australian government announced the legislation last year, child protection advocates, tech industry groups and children themselves have questioned whether the ban can be enforced due to workarounds like Virtual Private Networks, which obscure an internet user's location.

"Age assurance can be done in Australia privately, efficiently and effectively," said Tony Allen, CEO of the Age Check Certification Scheme, the UK-based organisation overseeing the Australian trial. The trial found "no significant tech barriers" to rolling out a software-based scheme in Australia, although there was "no one-size-fits-all solution, and no solution that worked perfectly in all deployments," Allen added in an online presentation.

Allen noted that some age-assurance software firms "don't really know at this stage what data they may need to be able to support law enforcement and regulators in the future". "There's a risk there that they could be inadvertently over-collecting information that wouldn't be used or needed."

Organisers of the trial, which concluded earlier this month, gave no data findings and offered only a broad overview which did not name individual products. They will deliver a report to the government next month, which officials have said will inform an industry consultation ahead of the December deadline.

A spokesperson for the office of the eSafety Commissioner, which will advise the government on how to implement the ban, said the preliminary findings were a "useful indication of the likely outcomes from the trial". "We are pleased to see the trial suggests that age assurance technologies, when deployed the right way and likely in conjunction with other techniques and methods, can be private, robust and effective," the spokesperson said.

The Australian ban is being watched closely around the world, with several governments exploring ways to limit children's exposure to social media.


India Today
20-06-2025
- Business
- India Today
Kids under 16 may soon face social media ban after Australia proves it has tech for age verification
Australia is preparing to become the first country in the world to enforce a nationwide ban on social media use for children under the age of 16. This bold move now appears increasingly likely after a major government-backed trial found that age verification technology can work both effectively and privately.

The Age Assurance Technology Trial, involving over 1,000 school students and hundreds of adults, tested how well current tools could verify a user's age without over-collecting personal data. The trial was overseen by the UK-based nonprofit Age Check Certification Scheme (ACCS), and the results are being seen as a key step towards making Australia's proposed legislation a reality.

'There is no significant tech barrier to age assurance in Australia,' said Tony Allen, CEO of ACCS. Speaking at an online briefing, Allen acknowledged that no system is perfect, but emphasised that 'age assurance can be done in Australia privately, efficiently and effectively.' Although some tools may collect more data than necessary, Allen stressed the importance of balance. 'There's a risk some solutions over-collect data that won't even be used. That's something to watch.'

Here is how the system will work

At the heart of the proposed verification model is a layered approach. It begins with traditional ID-based checks using documents like a passport or driver's licence. These are verified through independent systems, and platforms never directly access the documents themselves.

Facial age estimation adds another layer: users can upload a selfie or short video that AI analyses to determine age. This method is quick and does not store biometric data.

A third component – contextual inference – draws from behavioural patterns such as email type, language, and digital behaviour to further estimate a user's age. While not reliable alone, it helps strengthen the system when used with other methods. Together, these technologies aim to prevent children from easily bypassing checks while also respecting privacy.

From December 2025, platforms like Instagram, TikTok, Snapchat and X will be required to take 'reasonable steps' to keep underage users off their services. If they fail, they could face penalties of up to A$49.5 million (about US$32 million) per breach. Some platforms, including YouTube, WhatsApp and Google Classroom, are exempt for now.

Australia's move is being closely monitored by other countries, including the UK, New Zealand, and members of the EU, all of which are exploring ways to regulate children's access to social media. The Australian government sees this trial as proof that privacy and child protection can go hand in hand. A spokesperson for the eSafety Commissioner's office reportedly called the findings 'a useful indication of the likely outcomes from the trial', and added that when deployed correctly, the technologies 'can be private, robust and effective.'

Despite the positive trial results, there are still some caveats. Children may try to bypass age checks using VPNs, shared devices or borrowed credentials. It will now be up to social media platforms to detect and prevent these workarounds – a responsibility they've rarely shouldered at this scale.
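As a way of visualising the layered model described above, here is a small, hypothetical decision pipeline. The ordering of layers, the thresholds, the field names and the escalation rule are assumptions made for this sketch; they are not taken from ACCS, the trial, or any platform's actual implementation.

```python
# Illustrative sketch of a layered age-assurance pipeline, as described in the
# article: ID check first, then facial age estimation, then contextual signals.
# All names and thresholds are hypothetical; real systems will differ.

from dataclasses import dataclass
from typing import Optional

MINIMUM_AGE = 16
FACE_BUFFER = 5  # assumed safety margin for facial estimation error, in years

@dataclass
class User:
    verified_id_age: Optional[int] = None       # age from an independent ID check, if any
    face_estimated_age: Optional[float] = None   # age from a selfie/video estimate, if any
    contextual_age_hint: Optional[float] = None  # weak signal from behaviour, email type, etc.

def assure_age(user: User) -> str:
    """Return 'allow', 'block' or 'escalate' for a minimum-age-16 check."""
    # Layer 1: document-based verification is treated as authoritative.
    if user.verified_id_age is not None:
        return "allow" if user.verified_id_age >= MINIMUM_AGE else "block"

    # Layer 2: facial estimation, only trusted when it clears the buffer.
    if user.face_estimated_age is not None:
        if user.face_estimated_age >= MINIMUM_AGE + FACE_BUFFER:
            return "allow"
        if user.face_estimated_age < MINIMUM_AGE - FACE_BUFFER:
            return "block"

    # Layer 3: contextual inference is never sufficient alone; here it only
    # blocks obvious underage signals and otherwise escalates to a stronger check.
    if user.contextual_age_hint is not None and user.contextual_age_hint < MINIMUM_AGE:
        return "block"
    return "escalate"

if __name__ == "__main__":
    print(assure_age(User(verified_id_age=17)))        # allow: ID clears the minimum age
    print(assure_age(User(face_estimated_age=30)))     # allow: well clear of the buffer
    print(assure_age(User(face_estimated_age=18)))     # escalate: inside the ambiguous band
    print(assure_age(User(contextual_age_hint=13)))    # block: weak signals point underage
```

The point of the layering is that no single signal has to be perfect: the stronger the evidence a layer provides, the earlier it can settle the decision, and the weaker signals only narrow down who needs a more intrusive check.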