The change comes as Instagram’s parent company, Meta, faces increased scrutiny over the presence of children younger than 13 on its apps. A federal privacy law prohibits collecting data on people under 13 without parental permission, but only if the platforms know it’s happening, which lets companies look the other way, privacy advocates claim. State and federal legislators have proposed a variety of bills — including the Kids Online Safety Act and the Children and Teens’ Online Privacy Protection Act, both in the Senate, and the Age-Appropriate Design Code Act in California — that would significantly increase tech companies’ legal responsibility to protect children online.
Thursday’s change focuses not on keeping young children off Instagram, said Meta’s director of data governance and public policy, Erica Finkle, but on making sure teen accounts reflect true ages and receive the right safeguards. Teen accounts can’t receive direct messages from adults they’re not connected with, for example, and they’re protected from certain types of ad targeting. Instagram also recently started nudging teen accounts to turn on the “take a break” feature, which reminds them to step away from the app’s endlessly scrolling feeds.
“I’ve been working with policymakers, with regulators, and we all share the same goals,” Finkle said. “What’s really driving all of this, regardless of the specific piece of legislation, is ensuring that teens have appropriate experiences online and are safe and protected.”
People who attempt to change their date of birth from under 18 to 18 or over already had to provide a form of ID for verification. Now, the company is experimenting with new ways to verify age. People can still send an accepted form of ID, which Meta says it will store securely and delete within 30 days. They can ask three Instagram friends to vouch for their age — those friends must be adults, must respond within three days and can’t be vouching for anyone else. Or, they can submit a “video selfie,” and Meta will use AI from digital-identity company Yoti to estimate their age, the company says.
Yoti says its AI is trained on images of faces from people around the world who gave their “explicit, revocable” consent.
For parents and others concerned about young people on Instagram, the move feels incomplete, says Irene Ly, policy counsel for Common Sense Media, an organization that advocates for child-friendly media policies. In 2021, leaked documents suggested Meta had buried internal research on Instagram’s harmful effects on young women, according to whistleblower and former Meta employee Frances Haugen.
“While it is good Instagram is trying to experiment with verifying users’ ages with technology that would not compromise users’ privacy, Instagram must still make design changes to the platform so it is safer for young users,” Ly said. “This will not address the fact that their algorithm is amplifying harmful content promoting disordered eating, self-harm, or substance abuse.”
Meta spokeswoman Faith Eischen pointed to the app’s existing guidelines for algorithmically recommended content and its sensitive content control, which lets people reduce or increase the amount of guns, drugs, bare bodies and violence they see on Instagram.
Even the idea that Instagram’s age-guessing AI protects privacy is up for debate, says Mutale Nkonde, founder of algorithmic justice organization AI for the People and member of TikTok’s content moderation advisory board. Meta is notorious for sharing data internally to build detailed profiles on its users, she said — how can people be sure Meta and Yoti aren’t using their video selfies for any purpose besides age verification? Furthermore, Nkonde said, one reason Instagram can run its age-verification test in the United States is that, unlike in the European Union, companies here can collect data from people younger than 16. She asked: Why are teenagers the guinea pigs for Meta’s face-scanning partnership?
“Children are a protected class, so every precaution should be taken to protect them rather than use them to test a new technology,” she said. “This is an inappropriate use case.”
Eischen said Yoti is a respected company with internal research that supports the accuracy and fairness of its AI. The system never recognizes faces, only estimates their ages, she noted. And Meta will never use data it collects for age verification for any other purpose, she said.