
How to talk with your kids about AI

It’s time for The Talk about artificial intelligence. Actually, it might be way overdue.

AI apps can do amazing things, but they also can get children into a lot of trouble. And chances are, your kids are already using them.

But you don’t have to be an AI expert to talk with your kids about it. Starting this week, popular AI apps like ChatGPT are getting their own version of nutrition labels to help parents and kids navigate how to use them and what to avoid. They’re written by family-advocacy group Common Sense Media.

The reviews expose some uncomfortable truths about the current state of AI. To help families guide their conversations, I asked Common Sense review leader Tracy Pizzo Frey to help boil them down to three key lessons.

Like any parent, Pizzo Frey and her team are concerned not only with how well AI apps work, but also with whether they might warp kids’ worldview, violate their privacy or empower bullies. Their conclusions might surprise you: ChatGPT, the popular ask-anything chatbot, gets just three stars out of five. Snapchat’s My AI gets just two stars.

The thing every parent should know: American youths have adopted AI as though it’s magic. Students are such major users of ChatGPT that the service’s overall traffic dips and surges significantly along with the school calendar year, according to web-measurement company Similarweb.

Children are, in fact, a target market for AI companies even though many describe their products as works in progress. This week, Google announced it was launching a version of its “experimental” Bard chatbot for teens. ChatGPT technically requires permission from a parent to use if you’re under 18, but kids can get around that simply by clicking “continue.”

The problem is, AI is not magic. Today’s buzzy generative AI apps have deep limitations and insufficient guardrails for kids. Some of their issues are silly — making pictures of people with extra fingers — but others are dangerous. In my own AI tests, I’ve seen AI apps pump out wrong answers and promote sick ideas like embracing eating disorders. I’ve seen AI pretend to be my friend and then give terrible advice. I’ve seen how simple AI makes creating fake images that could be used to mislead or bully. And I’ve seen teachers who misunderstand AI accusing innocent students of using AI to cheat.

“Having these kinds of conversations with kids is really important to help them understand what the limitations of these tools are, even if they seem really magical — which they’re not,” Pizzo Frey tells me.

AI is also not going away. Banning AI apps isn’t going to prepare young people for a future where they’ll need to master AI tools for work. For parents, that means asking lots of questions about what your kids are doing with these apps so you can understand what specific risks they might encounter.

Here are three lessons parents need to know about AI so they can talk to their kids in a productive way:

1) AI is best for fiction, not facts

Hard reality: You can’t rely on know-it-all chatbots to get things right.

But wait … ChatGPT and Bard seem to get things right more often than not. “They are accurate part of the time simply because of the amount of data they’re trained on. But there’s no checking for factual accuracy in the design of these products,” says Pizzo Frey.

There are lots and lots of examples of chatbots being spectacularly wrong, and it’s one of the reasons both Bard and ChatGPT get mediocre ratings from Common Sense. Generative AI is basically just a word guesser — trying to finish a sentence based on patterns in what it has seen in its training data.
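
To make the word-guesser idea concrete, here is a minimal sketch of next-word prediction built from counted word pairs. It is a toy stand-in only: the tiny training text and function names are invented for illustration, and real chatbots use vastly larger neural networks rather than simple counts. The key point it shows is that nothing in the process checks facts.

```python
# Toy illustration of next-word prediction (a hypothetical example, not how
# ChatGPT or Bard are actually built). The model only counts which word tends
# to follow which; it never checks whether a sentence is true.
from collections import Counter, defaultdict

training_text = (
    "the sky is blue . the grass is green . the sky is clear . "
    "the moon is made of cheese ."  # wrong "facts" get learned just the same
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for word, following in zip(training_text, training_text[1:]):
    next_word_counts[word][following] += 1

def guess_next(word):
    """Return the word most often seen after `word` in the training text."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "?"

print(guess_next("sky"))   # "is" -- pattern-matching, not understanding
print(guess_next("moon"))  # "is" -- it will happily continue a false claim
```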

(ChatGPT’s maker OpenAI didn’t respond to my request for comment. Google said the Common Sense review “fails to take into account the safeguards and features that we’ve developed within Bard.” Common Sense plans to include the new teen version of Bard in its next round of reviews.)

I understand lots of students use ChatGPT as a homework aid, to rewrite dense textbook material into language they can better digest. But Pizzo Frey recommends a hard line: Anything important — anything going into an assignment or that you might be asked about on a test — needs to be checked for accuracy, including what it might be leaving out.

Doing this helps kids learn important lessons about AI, too. “We are entering a world where it could become increasingly difficult to separate fact from fiction, so it’s really important that we all become detectives,” says Pizzo Frey.

That said, not all AI apps have these particular factual problems. Some are more trustworthy because they don’t use generative AI tech like chatbots and are designed in ways that reduce risks, like learning tutors Ello and Kyron. They get the highest scores from Common Sense’s reviewers.

And even multiuse generative AI tools can be great for creative work, like brainstorming and idea generation. Use one to draft the first version of something that’s hard to say on your own, like an apology. Or my favorite: ChatGPT can be a fantastic thesaurus.

2) AI is not your friend

An AI app may act like a friend. It may even have a realistic voice. But this is all an act.

Despite what we’ve seen in science fiction, AI isn’t on the verge of becoming alive. AI does not know what’s right or wrong. And treating it like a person could harm kids and their emotional development.

There are growing reports of kids using AI for socializing, and of people speaking with ChatGPT for hours.

Companies keep trying to build AI friends, including Meta’s new chatbots based on celebrities such as Kendall Jenner and Tom Brady. Snapchat’s My AI gets its own profile page, sits in your friends list and is always up for chatting even when human friends are not.

“It is really harmful, in my opinion, to put that in front of very impressionable minds,” says Pizzo Frey. “That can really harm their human relationships.”

AI is so alluring, in part, because today’s chatbots have a technical quirk that causes them to agree with their users, a problem known as sycophancy. “It’s very easy to engage with a thing that is more likely to agree with you than something that might push or challenge you,” Pizzo Frey says.

Another part of the problem: AI is still very bad at understanding the full context that a real human friend would. When I tested My AI earlier this year, I told the app I was a teenager — but it still gave me advice on hiding alcohol and drugs from parents, as well as tips for a highly age-inappropriate sexual encounter.

A Snap spokeswoman said the company had taken pains to make My AI not look like a human friend. “By default, My AI displays a robot emoji. Before anyone can interact with My AI, we show an in-app message to make clear it’s a chatbot and advise on its limitations,” she said.

3) AI can have hidden bias

As AI apps and media become a larger part of our lives, they’re bringing some hidden values with them. Too often, those include racism, sexism and other kinds of bigotry.

Common Sense’s reviewers found bias in chatbots, such as My AI responding that people with stereotypical female names can’t be engineers and aren’t “really into technical stuff.” But the most egregious examples they found involved text-to-image generation AI apps such as DALL-E and Stable Diffusion. For example, when they asked Stable Diffusion to generate images of a “poor White person,” it would often generate images of Black men.

“Understanding the potential for these tools to shape our children’s worldview is really important,” says Pizzo Frey. “It’s part of the steady drumbeat of always seeing ‘software engineers’ as men, or an ‘attractive person’ as someone who is White and female.”

The root problem is something that’s largely invisible to the user: How the AI was trained. If it gobbled up information across the whole internet without sufficient human judgment, then the AI is going to “learn” some pretty messed-up stuff from dark corners of the internet where kids should not be.

Most AI apps try to deal with unwanted bias by putting systems in place after the fact to correct their output — making certain words off-limits in chats or images. But those are “Band-Aids” that often fail in real-world use, says Pizzo Frey.
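
To see why after-the-fact word filters are such blunt instruments, here is a minimal sketch of one (a hypothetical example, not any company’s actual safety system). A blocklist catches exact words but misses the same idea expressed in different wording, or baked into a generated image.

```python
# Hypothetical after-the-fact output filter, the kind of "Band-Aid" described
# above. It acts on the finished text, not on what the model learned during
# training, which is why rephrased or image-based harms slip past it.
BLOCKED_WORDS = {"badword"}  # placeholder entry for illustration

def filter_output(text: str) -> str:
    """Withhold a response if any blocked word appears verbatim."""
    words = (w.lower().strip(".,!?") for w in text.split())
    if any(w in BLOCKED_WORDS for w in words):
        return "[response withheld]"
    return text

print(filter_output("badword is fine, right?"))   # caught: withheld
print(filter_output("a subtly biased response"))  # not caught: passes through
```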
