Predicting how old someone is based only on how they look is incredibly hard to get right, especially in those awkward early teen years. And yet bouncers, liquor store owners, and other gatekeepers of age-restricted goods make that quick estimation all the time.
Their predictions are often wrong. Now London-based digital identity company Yoti believes its AI-powered age estimation can predict how old someone is if they’re aged anywhere from six to 60. For the first time, it claims, it can accurately determine whether children are under or over 13, the minimum age many social media firms require their users to be.
Yoti’s image technology may be increasingly appealing as Big Tech and internet services face growing scrutiny over how children use their products. However, privacy advocates say automatically analysing people’s faces normalizes surveillance, is largely unregulated, and has the potential to show bias.
Yoti says its age estimation technology, which it has developed over the last three years, has a margin of error of 2.79 years across its total 45-year age range. For under-25s the margin of error drops below 1.5 years. In the next few weeks, it will be tested in stores at five major supermarket chains in the UK. The company hasn’t named the supermarket brands but says a number of pornography and gaming websites are also trialing the tech to stop underage visitors. It adds that its age estimation technology is already being used by children’s streaming social network Yubo and healthy living app Smash.
Point a camera running Yoti’s software at your face — it can work through the web on your phone, laptop, or tablet, or at a self-checkout terminal — and the system estimates your age range. On multiple tests using a browser-based staging environment on my phone, the system correctly put me at between 27-31 and 28-32 years old. The company says neither it nor its clients store the images it captures, and you don’t need to register to use it. “It's not identifying, it’s not authenticating any individual,” says Julie Dawson, director of regulatory and policy at Yoti. The company claims it’s not facial recognition as it can’t identify individuals. “When it sees a new face, it just spits out the estimated age of that individual,” Dawson says.
Yoti clients can also apply thresholds to age estimations: for instance, setting a threshold of 25 even though the legal minimum for buying alcohol in the UK is 18. Anyone flagged as under that threshold could then be asked to provide an ID. The system also lets its customers know how confident it is in any given estimate.
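As a sketch of how such a buffered threshold policy might work in code (the function name, confidence cutoff, and values here are illustrative assumptions, not Yoti's actual API):

```python
# Illustrative sketch of a "challenge"-style age threshold as described
# above: the estimator returns an age and a confidence score, and anyone
# estimated below a buffer threshold is asked for physical ID.
# All names and numbers are hypothetical, not Yoti's real interface.

LEGAL_MINIMUM = 18        # UK legal age for buying alcohol
CHALLENGE_THRESHOLD = 25  # buffer well above the legal minimum
MIN_CONFIDENCE = 0.8      # below this, fall back to a manual ID check

def requires_id_check(estimated_age: float, confidence: float) -> bool:
    """Return True if the customer should be asked to show ID."""
    if confidence < MIN_CONFIDENCE:
        return True  # estimate too uncertain to rely on
    return estimated_age < CHALLENGE_THRESHOLD

# A 22-year-old estimate falls below the 25 threshold, so the customer
# is asked for ID even though they may legally buy alcohol at 18.
print(requires_id_check(22.0, 0.95))  # True
print(requires_id_check(31.0, 0.95))  # False
```

The buffer between 18 and 25 absorbs the estimator's margin of error: with an error of up to a few years, someone just over 18 could plausibly be estimated as under it, so only people comfortably above the threshold skip the ID check.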
The company has trained its neural networks with “hundreds of thousands” of pictures of people’s faces, says Yoti cofounder and CEO Robin Tombs. It has mostly collected those faces itself, through its standalone Yoti app, which lets people verify their ID with governments and other bodies by uploading official documents like passports and driver’s licenses. When people upload their details to the Yoti app, they have the option to opt out of their data being used to train Yoti’s AI. The company itself is unsure what facial features its AI uses to determine people’s age. “We have to be honest,” Tombs says, “we don't really know whether it's to do with wrinkles, or saggy eyes, or quite what. It has just done so many that it is now very good at it.”
Yoti has labeled each image used in the training data with a person’s gender, plus the year and month they were born. Children under 13 aren’t allowed on the app, so the company instead employed “professional data capture companies” to ask parents’ permission for their children to take part in the research in exchange for a fee. Yoti did not confirm how many images of children under 13 it used in its training data, citing commercial reasons.
The art of estimating someone’s age goes back as far as carnival barkers. But using technology to automatically estimate age is becoming increasingly popular as Big Tech tries to stop pre-teens from creating accounts. At the moment, online age checking often involves little more than people entering their date of birth, which can easily be faked. Equally, websites tend to avoid asking people to upload identity documents for fear of data breaches. These lax approaches can result in young children visiting parts of the web that aren’t designed for them: more than half of 11-year-olds in the UK have social media accounts, despite officially being too young to open them.
Emerging age estimation efforts broadly range from those based on biometric details, such as facial and hand analysis, to profiling people based on what they do and say. During the first three months of this year, TikTok removed seven million accounts it suspected were created by under-13s; it previously said it uses facial recognition algorithms and people’s connections to others to work out how old users may be. Facebook even uses, in part, the text in the ‘happy birthday’ messages you receive.
Jen Persson, the director of children-focused rights group Defend Digital Me, says age checks should be made without using facial analysis or biometric data where possible. “It does not need biometrics to do it,” she says. “If it works in shops today without it, then the least invasive option should be the one that continues.” Persson says people should also question how much data children should give away when signing up to online services or buying apps and games.
While Tombs may not know exactly what his AI looks for in a face, he stands by its accuracy. He says that the tech can estimate a 13- to 25-year-old’s age within 1.5 years. That margin of error drops to 1.3 years for children between six and 12, making Yoti highly effective, he says, at blocking anyone under 13 from using a given service.
At the moment, external validation for the accuracy of age estimation technology is virtually non-existent; there’s little in the UK comparable to the regular commercial facial recognition accuracy tests conducted by the US National Institute of Standards and Technology. But in a November 2020 analysis, the nonprofit Age Check Certification Scheme found Yoti’s system to be 98.89 percent reliable at identifying whether people are under 25 years old.
There are discrepancies, though. The analysis highlighted the system is more accurate for males than females. And Yoti’s own white paper shows that the tech is least accurate for older females with darker skin, having an error range of up to around five years. The white paper says error rates are higher in older groups with darker skin tones due to “how well-represented” they are in the training data and says environmental factors—such as the weather and alcohol—have more of an impact on older people than children.
“Broadly speaking, inferences about age are probably something that machine learning can do to some degree,” says Daniel Leufer, a Europe policy analyst focused on AI at civil liberties group Access Now. However, he is skeptical about age estimation’s accuracy and the need for the technology in the first place. Regulators should look at who these systems will likely fail when they’re considering the use cases, Leufer says. “Typically that answer is people who are routinely failed by other systems,” he adds.
In a statement, Yoti stressed that most of its customers focus on younger segments of the population: “While our accuracy is not as strong for some older age groups, the likelihood of people experiencing bias is unlikely due to the age thresholds for the majority of age-restricted goods and services being much younger.” Training data for children under 13 was representative across skin tones, ages, and gender, Tombs says; the company’s white paper shows similar average error rates for those ages across those demographics. The CEO says he would back more academic research into age estimation and NIST-style evaluations of the technology.
However, perhaps the biggest question for age estimation technologies will be what impact they have on society. European lawmakers are already under pressure to introduce bans on biometric surveillance. If age estimation becomes normalized it may stop children from accessing websites that could cause them harm. But at the same time, it would expand the amount of general surveillance technology that children face on a daily basis. That’s where regulators need to get involved, says Beeban Kidron, chair of the child-focused 5Rights Foundation charity, which has helped introduce online child protection rules in the UK. She says the amount of data that’s collected about children already would astound many parents.
Last week, a new trial of facial recognition cameras in UK school lunch queues was “paused” after regulators warned the system could be too intrusive. “The most important problem that we have is not a technological problem, but a governance problem,” Kidron says. There need to be more rules ensuring that children’s right to privacy is protected, she says, and that the systems which do so are built securely.
© Condé Nast Britain 2021.