
We Forgot To Give Neural Networks The Ability To Forget


A conversation about artificial intelligence’s current inability to forget and what this means for the right to privacy and user agency. Miguel Luengo-Oroz, AI expert and social entrepreneur, speaks with Ashoka’s Hanae Baruchel. (Full bio below.)

Hanae Baruchel: ChatGPT and other generative artificial intelligence (AI) apps have been making headlines recently because of their unprecedented ability to have coherent conversations with humans. You had an interaction of your own recently.

Miguel Luengo-Oroz: Yes, I asked GPT-3 “Can neural networks forget?” and its response was that “Artificial intelligence systems, like neural networks, do not have the ability to forget in the same way that humans do. The network may adjust its weights to better fit the new data, which could result in it making different predictions for the same input. However, this is not the same as forgetting in the sense that the network still has access to all of the information it has learned previously, it is just prioritizing the new information more highly.”

Baruchel: This inability to forget isn’t getting much attention, and that concerns you. Why?

Luengo-Oroz: One of the core principles of online privacy regulations like Europe’s General Data Protection Regulation (GDPR) is the idea that the data I produce is mine, and a company can use it only if I allow it to. This means I can always withdraw my consent and ask for my data back. I can even invoke the right to be forgotten. AI algorithms are trained in part on user data, and yet almost none of the guidelines, frameworks and regulatory proposals emerging from governments and private sector companies explicitly focus on building AI models that can be untrained. We have no way to reverse the changes that a single data point induces in a model at the request of the data owner.

Baruchel: So users should have the ability to say: “Stop using the AI model that was trained with my data”?

Luengo-Oroz: Exactly. Let’s give AIs the ability to forget. Think of it as the Ctrl-Z button of AI. Let’s say my picture was used to train an AI model that recognizes people with blue eyes, and I don’t consent anymore, or never did. I should be able to ask the AI model to behave as if my picture had never been included in the training dataset, so that my data no longer contributes to fine-tuning the model’s internal parameters. In the end, this may not affect the AI much, because my picture is unlikely to have made a substantial contribution on its own. But we can also imagine a case where all people with blue eyes request that their data not influence the algorithm, making it impossible for the model to recognize people with blue eyes. Or imagine that I’m Vincent van Gogh and I don’t want my art included in the training dataset. If someone then asks the machine to paint a dog in the style of Vincent van Gogh, it would be unable to execute that task.

Baruchel: How would this work?

Luengo-Oroz: In artificial neural networks, every time a data point is used to train an AI model, it slightly alters the way each artificial neuron behaves. One way to remove this contribution is to fully retrain the AI model without the data point in question. But this is not a practical solution: retraining a large model from scratch demands far too much computing power. Instead, we need a technical solution that reverses the influence of that single data point, changing the final AI model without having to train it all over again.
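
To make the contrast concrete, here is a minimal sketch in Python with NumPy. Everything in it is an illustrative assumption on a toy logistic-regression model, not a method from the interview: “exact” unlearning retrains from scratch without the revoked example, while the “approximate” version tries to edit the already-trained weights directly.

```python
import numpy as np

def train(X, y, epochs=200, lr=0.1):
    """Toy logistic regression via full-batch gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))        # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)    # gradient step over all data
    return w

def unlearn_exact(X, y, revoked):
    """Gold standard: retrain from scratch without the revoked example.
    Correct by construction, but too costly for large models."""
    keep = np.arange(len(y)) != revoked
    return train(X[keep], y[keep])

def unlearn_approx(w, x_i, y_i, lr=0.1, steps=200):
    """Illustrative shortcut: gradient *ascent* on the revoked point,
    nudging the weights as if it had never pulled on them. Real
    research directions (influence functions, SISA-style sharded
    retraining) are far more careful than this."""
    w = w.copy()
    for _ in range(steps):
        p = 1 / (1 + np.exp(-x_i @ w))
        w += lr * x_i * (p - y_i)           # reversed sign of the update
    return w
```

The open research question is precisely how to make something like the second function trustworthy: cheap to run, yet provably leaving the model as if the data point had never been seen.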

Baruchel: Are you seeing people in the AI community pursuing such ideas?

Luengo-Oroz: So far, the AI community has done little research specifically on untraining neural networks, but I’m sure there will be clever solutions soon. There are adjacent ideas to draw inspiration from, such as “catastrophic forgetting”: the tendency of AI models to forget previously learned information upon learning new information. The big picture of what I am suggesting is that we stop building neural nets as sponges that immortalize all the data they soak up, like stochastic parrots, and instead build dynamic entities that adapt to, and learn from, the datasets they are allowed to use.
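
Catastrophic forgetting is easy to demonstrate in miniature. In this self-contained NumPy sketch (the tasks and numbers are invented for demonstration), a model fit on one task and then fine-tuned on a conflicting one loses its performance on the first:

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd(w, X, y, epochs=200, lr=0.1):
    """Least-squares regression, plain gradient descent."""
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def loss(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Two toy tasks with conflicting targets for the same inputs.
X = rng.normal(size=(100, 5))
y_a = X @ np.ones(5)    # task A
y_b = X @ -np.ones(5)   # task B

w = sgd(np.zeros(5), X, y_a)                     # learn task A
print("task A loss after A:", loss(w, X, y_a))   # near zero

w = sgd(w, X, y_b)                               # then learn task B
print("task A loss after B:", loss(w, X, y_a))   # large: A was forgotten
```

The research challenge Luengo-Oroz points at is the inverse: forgetting on demand, selectively, rather than as an uncontrolled side effect.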

Baruchel: Beyond the right to be forgotten, you suggest that this kind of traceability could also bring about big innovations when it comes to digital property rights.

Luengo-Oroz: If we were able to trace which user data contributed to training a specific AI model, this could become a mechanism to compensate people for their contributions. As I wrote back in 2019, we could think of some sort of Spotify model that rewards humans with royalties each time someone uses an AI trained with their data. In the future, this type of solution could ease the tense relationship between the creative industry and generative AI tools like DALL-E or GPT-3. It could also lay the groundwork for concepts like Forgetful Advertising, a new ethical digital advertising model that would purposefully avoid storing personal behavioral data. Maybe the future of AI is not just about learning it all (the bigger the dataset and the bigger the model, the better) but about building AI systems that can learn and forget as humanity wants and needs.
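
One naive way such attribution could work, sketched under strong assumptions (train_fn and eval_fn are hypothetical placeholders for a real training pipeline): value each contributor by the model quality lost when their data is left out, then split royalties in proportion. More principled versions of this idea exist in the research literature, such as Shapley-value-based data valuation.

```python
def leave_one_out_scores(contributors, train_fn, eval_fn):
    """Value each contributor by the accuracy the model loses
    when trained without their data (leave-one-out)."""
    full_score = eval_fn(train_fn(contributors))
    scores = {}
    for name in contributors:
        rest = {k: v for k, v in contributors.items() if k != name}
        scores[name] = max(full_score - eval_fn(train_fn(rest)), 0.0)
    return scores

def split_royalties(pot, scores):
    """Divide a royalty pot in proportion to contribution scores."""
    total = sum(scores.values()) or 1.0
    return {name: pot * s / total for name, s in scores.items()}
```

Leave-one-out retraining is, of course, exactly as expensive as the exact unlearning discussed above, which is why attribution and untraining are two faces of the same traceability problem.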

Dr. Miguel Luengo-Oroz is a scientist and entrepreneur passionate about imagining and building technology and innovation for social impact. As the former first chief data scientist at the United Nations, Miguel pioneered the use of artificial intelligence for sustainable development and humanitarian action. Miguel is the founder and CEO of the social enterprise Spotlab, a digital health platform leveraging the best AI and mobile technologies for clinical research and universal access to diagnosis. Over the last decade, Miguel has built teams worldwide bringing AI to operations and policy in domains including poverty, food security, refugees and migrants, conflict prevention, human rights, economic development, gender, hate speech, privacy and climate change. He is the inventor of Malariaspot.org (video games for collaborative malaria image analysis) and is affiliated with the Universidad Politécnica de Madrid. He became an Ashoka Fellow in 2013.

