Two recent surveys show AI will do more harm than good
This article is a preview of The Tech Friend newsletter. Sign up here to get it in your inbox every Tuesday and Friday.

Tech companies are more excited than kids on Christmas about AI. Most people aren’t so thrilled.

New surveys about public attitudes toward artificial intelligence taught me two things:

First, the more AI becomes a reality, the less confidence we have that AI will be an unqualified win for humanity.

And second, we don’t always recognize the pedestrian uses of AI in our lives — including in filtering out email spam or recommending new songs — and that may make us overlook both the risks and benefits of the technology.

The bottom line: AI has not won your trust. You want to see proof of its benefits before the technology is used in hospital rooms, on battlefields and on our roads.

This skepticism is healthy. Frankly, you might have more good sense about AI than many of the experts developing this technology.

If tech companies, AI technologists and regulators are listening, you are saying loud and clear that you have nuanced opinions about where AI should and shouldn’t be used.

And this AI trust problem won’t be helped by unhinged replies from Microsoft’s AI chatbot or Tesla’s recent overhaul of its AI-powered driver assistance feature because of car crash risks.

Let’s dig into the public attitudes about AI and what they might mean for your life.

A Monmouth University poll released last week found that only 9 percent of Americans believed that computers with artificial intelligence would do more good than harm to society.

When the same question was asked in a 1987 poll, a higher share of respondents – about one in five – said AI would do more good than harm, Monmouth said.

In other words, people have less unqualified confidence in AI now than they did 35 years ago, when the technology was more science fiction than reality.

The Pew Research Center survey asked people different questions but found similar doubts about AI. Just 15 percent of respondents said they were more excited than concerned about the increasing use of AI in daily life.

(The Pew survey was conducted in December and published last week. Monmouth conducted its poll in late January. You can read the organizations’ methodologies here and here.)

The biggest share of respondents in both polls said they had mixed views on whether AI would be a plus or a minus.

“It’s fantastic that there is public skepticism about AI. There absolutely should be,” said Meredith Broussard, an artificial intelligence researcher and professor at New York University.

Broussard said there is no way to design artificial intelligence software to make inherently human decisions, like grading students’ tests or determining the course of medical treatment.

Where you think AI is a good idea and a bad idea

Most Americans essentially agree with Broussard that AI has a place in our lives, but not for everything.

Monmouth asked people six questions about settings in which AI might be used. Most people said it was a bad idea to use AI for military drones that try to distinguish between enemies and civilians or trucks making local deliveries without human drivers. Most respondents said it was a good idea for machines to perform risky jobs such as coal mining.

Attitudes about where AI is right and wrong haven’t budged much since Monmouth asked people those questions in 2015.

Alec Tyson, associate director of research with Pew, told me that prior research by his team found that people want to see evidence of tangible benefits before they feel confident in AI for high-stakes settings such as law enforcement or self-driving cars.

Public attitudes can shift, of course. We change our minds all the time. But the irony is that AI is being tested or used in many settings in which people expressed doubts, including self-driving cars and deciding when to administer medicines.

Roman Yampolskiy, an AI specialist at the University of Louisville engineering school, told me he’s concerned about how quickly technologists are building computers that are designed to “think” like the human brain and apply knowledge not just in one narrow area, like recommending Netflix movies, but for complex tasks that have tended to require human intelligence.

“We have an arms race between multiple untested technologies. That is my concern,” Yampolskiy said. (If you want to feel terrified, I recommend Yampolskiy’s research paper on the inability to control advanced AI.)

AI is everywhere, and we may not know it

Automated product recommendations on sites like Amazon, email spam filters and the software that chats with you on an airline website are examples of AI. The Pew survey found that people didn’t necessarily consider all of that stuff to be AI.

And Patrick Murray, director of the Monmouth University Polling Institute, said few of his students said yes when he asked if they use AI on a regular basis. But then he started to list examples including digital assistants such as Amazon’s Alexa and Siri from Apple. More students raised their hands.

The term “AI” is a catch-all for everything from relatively uncontroversial technology, such as autocomplete in your web search queries, to the contentious software that promises to predict crime before it happens. Our fears about the latter might be overwhelming our beliefs about the benefits from more mundane AI.

Broussard also said that public skepticism of AI may be influenced by depictions of evil computers from books and movies — like Skynet, the super-intelligent malicious machines in “The Terminator” movies. Broussard said the ways AI can end up eroding your quality of life won’t be as dramatic as murderous fictional computers.

“I’m worried about constant surveillance and AI used in policing and people relying on AI-based worker management systems that depend on not giving people biology breaks in factories,” Broussard said. “I am not worried about Skynet.”

Catch me up: How to try the new AI tech everyone is talking about

Listen to the “Post Reports” podcast: The AI arms race is on.

Twitter said last week that it will stop letting people receive one-time account access codes by text message, unless they pay for its subscription service.

You have options if Twitter’s decision affects you.

To remind you, many apps and sites give you the option to add a second step to log in for stronger security. With two-factor authentication, you must have both your account password and some other proof that you are you — like a temporary string of numbers that the app texts to you.

Instead of receiving those codes by text message, you can download and use free apps that generate limited-time codes as an extra security measure.

You can download Google’s two-factor authentication app for iOS or Android; or Authy for iOS or Android from a company called Twilio; or Microsoft’s authentication app for iOS or Android.

The Verge has instructions on adding authenticator apps to your Twitter account.

Brag about YOUR one tiny win! Email me or use this form to tell me about an app, gadget or tech trick that made your day a little better. We might feature your advice in a future edition of The Tech Friend.


