“When a new face comes along, it does a pixel-level analysis of that face and then spits out a number — the age estimation with a confidence value,” Dawson said. Once the estimation is completed, Yoti and Instagram delete the selfie video and the still image taken from it.
Verifying a user’s age can be a vexing problem for tech companies, in part because plenty of users may not have a government-issued photo ID card that can be checked.
Karl Ricanek, a professor at the University of North Carolina Wilmington and director of the school’s Face Aging Group Research Lab, thinks Yoti’s technology is a good application of AI.
“It’s a worthwhile endeavor to try and protect kids,” he said.
Yet while such technology could be helpful to Instagram, a number of factors can make it tricky to accurately estimate age from a picture, Ricanek said, including puberty — which changes a person’s facial structure — as well as skin tone and gender.
What that means, in practice, is that there will be a lot of errors, said Luke Stark, an assistant professor at Western University in Ontario, Canada, who studies the ethical and social implications of AI. “We’re still talking about a mean absolute error, either way, of a year to a year and a half,” he said.
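The “mean absolute error” Stark cites is simply the average gap, in years, between a system’s estimated ages and people’s real ages. A minimal sketch of the metric, using invented sample ages for illustration (not real data from Yoti or Instagram):

```python
def mean_absolute_error(estimated, actual):
    """Average absolute difference between paired age estimates and true ages."""
    return sum(abs(e - a) for e, a in zip(estimated, actual)) / len(actual)

# Hypothetical example: four users and a model's guesses about them.
estimated_ages = [19, 27, 35, 16]
actual_ages = [20, 26, 34, 18]

print(mean_absolute_error(estimated_ages, actual_ages))  # prints 1.25
```

An error of 1 to 1.5 years sounds small, but it matters most right at the age thresholds such a system is meant to enforce.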
The results varied: For a couple of reporters, the estimated age range was right on target, but for others it was off by many years. For instance, it estimated one editor was between the ages of 17 and 21, when they’re actually in their mid-30s.
Stark is also concerned that the technology will contribute to so-called “surveillance creep.”
“It’s certainly problematic, because it conditions people to assume they’re going to be surveilled and assessed,” he said.