It is technically possible to verify the age of social media users, but leaving social media platforms to choose their own methods could be a recipe for inconsistency, a landmark study of age assurance has found.
One hundred days out from the implementation of its social media ban for children under 16, the federal government has published the full findings of a trial conducted by a British firm to gauge whether age assurance technologies are viable.
As revealed by preliminary findings in June, the authors concluded that age can be assessed to a reasonable degree of confidence in a variety of ways. But the report did not identify a clear best approach and found risks and shortcomings associated with all methods.
“Implementation depends on the willingness of a small number of dominant tech companies to enable or share control of [age assurance] processes,” the report stated.
“Co-ordination among dominant [tech] providers is essential if any truly ecosystem-wide age assurance model is to succeed.”
The trial began before the under-16 social media ban was government policy, and the authors were asked to consider age assurance more broadly, not to evaluate the ban.
But the comments about dominant tech players are directly relevant, given the ban will place the onus on the social media platforms themselves to verify the age of their users.
Platforms will not be told which approach to use
Communications Minister Anika Wells and eSafety Commissioner Julie Inman-Grant will, in the coming weeks, reveal the “reasonable steps” platforms will be required by law to take to comply with the ban.
Those steps might require them to meet a certain standard for accuracy or to enact certain safeguards for privacy, but they will not require platforms to use any specific method.
The report found that a variety of methods were technically possible, including formal verification using government documents, parental approval, or emerging technologies to assess age based on facial structure, gestures, or behaviours.
Concerns about reliability and privacy were identified with all of these approaches. Age-estimation technologies were less reliable for girls than for boys, and for non-white faces, and could not provide precise estimates, with an average error of two to three years.
Accessing government documents (for example, passports or licences) carried privacy risks, with the authors identifying a “concerning trend” of some providers retaining user data unnecessarily, but document-based checks were generally more accurate.
Parental controls, which Apple and Google already offer in some contexts, raised both privacy and accuracy issues.
Despite these general concerns, the trial identified several third-party verification providers who could deliver reliable age assurance without storing much, or any, user data.
“This report is the latest piece of evidence showing digital platforms have access to technology to better protect young people from inappropriate content and harm,” Ms Wells said.
“While there’s no one-size-fits-all solution to age assurance, this trial shows there are many effective options and, importantly, that user privacy can be safeguarded.”
Communications Minister Anika Wells said the report was another piece of evidence that age assurance was technologically possible. (ABC News: Adam Kennedy)
Leaving platforms to decide could risk inconsistency
The report alluded to the fact that large social media and tech platforms, including Meta, Snap, TikTok, Google, and Apple, already had, or were developing, age assurance methods of their own.
But while those platforms participated in the study to varying degrees, the authors were unable to provide detailed assessments of their proprietary age assurance methods.
“Individual services (for example, YouTube, TikTok, Roblox) implement their own systems for account creation, age gates, content filtering, and parental features,” the report noted.
“However, these solutions often operate in isolation and are not interoperable across platforms … Reliance on voluntary or proprietary measures [by platforms] leaves many children unprotected or inconsistently treated.”
The report also considered approaches to clamp down on circumvention of age assurance methods, including the use of virtual private networks and “deepfakes” of government documents or faces.
While it observed that many providers were actively working to combat both of these circumvention methods, it did not identify foolproof ways to do so, and Ms Wells and Ms Inman-Grant have both acknowledged that no method will be airtight.
Expert fears about false positives and negatives confirmed
Many experts have raised concerns about the effectiveness of any method. Lisa Given, an RMIT computer science professor, told the ABC in June she did not believe the ban was viable and that parents would get a “rude shock”.
“We are going to see a messy situation emerging immediately where people will have what they call false positives [and] false negatives,” she said.
False negatives would be users over 16 who are incorrectly deemed underage, and false positives would be users under 16 who are incorrectly deemed to be over the age limit.
The report identified that false positive and false negative rates were both around three per cent for age verification using official documents.
For technologies that assess age based on faces or other traits, a “grey zone” of two to three years either side of the 16-year limit was identified, and some errors were found to be as large as four years.
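To make the error terminology concrete, here is a minimal illustrative sketch, with ages and estimates invented for demonstration (none of these figures come from the report), of how false positive and false negative rates are computed for an age check against a 16-year threshold:

```python
# Illustrative sketch only: the (true_age, estimated_age) pairs below are
# invented. The report's roughly three per cent rates and its two-to-three
# year "grey zone" describe the real systems it tested, not this toy data.

THRESHOLD = 16

samples = [
    (14, 13), (15, 17), (15, 14), (16, 15),
    (17, 18), (18, 15), (19, 20), (20, 19),
]

# Following the article's definitions:
#   false positive: under 16 but deemed to be over the age limit
#   false negative: 16 or over but deemed underage
false_positives = sum(1 for true, est in samples
                      if true < THRESHOLD and est >= THRESHOLD)
false_negatives = sum(1 for true, est in samples
                      if true >= THRESHOLD and est < THRESHOLD)

under_16 = sum(1 for true, _ in samples if true < THRESHOLD)
over_16 = len(samples) - under_16

fp_rate = false_positives / under_16   # share of minors waved through
fn_rate = false_negatives / over_16    # share of adults wrongly blocked

print(f"false positive rate: {fp_rate:.0%}")
print(f"false negative rate: {fn_rate:.0%}")
```

In this toy sample, one of the three under-16 users slips through and two of the five over-16 users are wrongly blocked; real systems are judged on the same two rates, just measured over far larger test sets.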
Addressing the National Press Club in June, Ms Inman-Grant said the ban, which she prefers to call a social media “delay”, would likely be enacted with a range of technologies, and there would be no “technology mandates”.
“The technology exists right now for these platforms to identify under-16s on their services,” she said.
“Companies will be compelled to measure and report on the efficacy and success of their efforts so that we can further gather evidence and evaluate.”