How to opt out of having your data ‘train’ ChatGPT and other chatbots

If you ask OpenAI’s ChatGPT personal questions about your sex life, the company might use your back-and-forth to “train” its artificial intelligence.

Your data is fuel for many AI chatbots. But some companies, including OpenAI and Google, let you opt out of having your individual chats used to improve their AI.

I have instructions at the bottom of this article for how to stop your chatbot conversations from being used to train six prominent chatbots — when that’s an option. But there’s a bigger question: Should you bother?

Your data has probably already helped train AI. Without your explicit permission, major AI systems may have scooped up your public Facebook posts, your comments on Reddit or your law school admissions practice tests to mimic patterns in human language.

Opt-out options mostly let you stop some future data grabbing, not whatever happened in the past. And companies behind AI chatbots don’t disclose specifics about what it means to “train” or “improve” their AI from your interactions. It’s not entirely clear what you’re opting out from, if you do.

AI experts still said it’s probably a good idea to say no if you have the option to stop chatbots from training AI on your data. But I worry that opt-out settings mostly give you an illusion of control.

Is it bad that chatbots might use your conversations to ‘train’ AI?

We’ve gotten familiar with technologies that improve from tracking what we do.

Netflix might suggest movies based on what you or millions of other people have watched. The auto-correct features in your text messaging or email work by learning from people’s bad typing.

That’s mostly useful. But Miranda Bogen, director of the AI Governance Lab at the Center for Democracy and Technology, said we might feel differently about chatbots learning from our activity.

Chatbots can seem more like private messaging, so Bogen said it might strike you as icky that they could use those chats to learn. Maybe you’re fine with this. Maybe not.

Niloofar Mireshghallah, an AI specialist at the University of Washington, said the opt-out options, when available, might offer a measure of self-protection from the imprudent things we type into chatbots.

She’s heard of friends copying group chat messages into a chatbot to summarize what they missed while on vacation. Mireshghallah was part of a team that analyzed publicly available ChatGPT conversations and found that a significant share of the chats involved sexual content.

It’s not typically clear how or whether chatbots save what you type into them, AI experts say. But if the companies keep records of your conversations even temporarily, a data breach could leak personally revealing details, Mireshghallah said.

It probably won’t happen, but it could. (To be fair, there’s a similar potential risk of data breaches that leak your email messages or DMs on X.)

What actually happens if you opt out?

I dug into six prominent chatbots and your ability to opt out of having your data used to train their AI: ChatGPT, Microsoft’s Copilot, Google’s Gemini, Meta AI, Claude and Perplexity. (I stuck to details of the free versions of those chatbots, not those for people or businesses that pay.)

On free versions of Meta AI and Microsoft’s Copilot, there isn’t an opt-out option to stop your conversations from being used for AI training.

Read more instructions and details below on these and other chatbot training opt-out options.

The companies that offer opt-out options generally said your individual chats won’t be used to coach future versions of their AI. The opt-outs aren’t retroactive, though.

Some of the companies said they remove personal information before chat conversations are used to train their AI systems.

The chatbot companies don’t tend to detail much about their AI refinement and training processes, including under what circumstances humans might review your chatbot conversations. That makes it harder to make an informed choice about opting out.

“We have no idea what they use the data for,” said Stefan Baack, a researcher with the Mozilla Foundation who recently analyzed a data repository used by ChatGPT.

AI experts mostly said it couldn’t hurt to pick a training data opt-out option when it’s available, but your choice might not be that meaningful. “It’s not a shield against AI systems using data,” Bogen said.

Instructions to opt out of your chats training AI

These instructions are for people who use the free versions of six chatbots for individual users (not businesses). Generally, you need to be signed into a chatbot account to access the opt-out settings.

Wired, which wrote about this topic last month, had opt-out instructions for more AI services.

ChatGPT: From the website, sign into an account and click on the circular icon in the upper right corner → Settings → Data controls → turn off “Improve the model for everyone.”

If you choose this option, “new conversations with ChatGPT won’t be used to train our models,” the company said.

Read more of OpenAI’s settings options, explanations and instructions here.

Microsoft’s Copilot: The company said there’s no opt-out option as an individual user.

Google’s Gemini: By default, if you’re over 18, Google says it stores your chatbot activity for up to 18 months. From your Google account’s activity page, select “Turn Off” under Your Gemini Apps Activity.

If you turn that setting off, Google said your “future conversations won’t be sent for human review or used to improve our generative machine-learning models by default.”

Read more from Google here, including options to automatically delete your chat conversations with Gemini.

Meta AI: Your conversations with the new Meta AI chatbot in Facebook, Instagram and WhatsApp may be used to train the AI, the company says. There’s no way to opt out. Meta also says it can use the contents of photos and videos shared to “public” on its social networks to train its AI products.

You can delete your Meta AI chat interactions. Follow these instructions. The company says deleted Meta AI interactions won’t be used in the future to train its AI.

If you’ve seen social media posts or news articles about an online form purporting to be a Meta AI opt-out, it’s not quite that.

Under privacy laws in some parts of the world, including the European Union, Meta must offer “objection” options for the company’s use of personal data. The objection forms aren’t an option for people in the United States.

Read more from Meta on where it gets AI training data.

Claude from Anthropic: The company says it does not by default use what you ask in the Claude chatbot to train its AI.

If you click a thumbs up or thumbs down option to rate a chatbot reply, Anthropic said it may use your back-and-forth to train the Claude AI.

Anthropic also said its automated systems may flag some chats and use them to “improve our abuse detection systems.”

Perplexity: From the website, log into an account. Click the gear icon at the lower left of the screen near your username → turn off the “AI Data Retention” toggle.

Perplexity said if you choose this option, it “opts data out of both human review and AI training.”
