
Robots trained on AI exhibited racist and sexist behavior

As part of a recent experiment, scientists asked specially programmed robots to scan blocks with people’s faces on them, then put the “criminal” in a box. The robots repeatedly chose a block with a Black man’s face.

Those virtual robots, which were programmed with a popular artificial intelligence model, sorted through billions of images and associated captions to respond to that prompt and others. According to researchers, the experiment may represent the first empirical evidence that robots can act out sexist and racist biases. Over and over, the robots responded to words like “homemaker” and “janitor” by choosing blocks with women and people of color.

The study, released last month and conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology, shows that the racist and sexist biases baked into artificial intelligence systems can carry over into the robots that rely on those systems to guide their operations.

Companies have been pouring billions of dollars into developing more robots to help replace humans for tasks such as stocking shelves, delivering goods or even caring for hospital patients. Heightened by the pandemic and a resulting labor shortage, experts describe the current atmosphere for robotics as something of a gold rush. But tech ethicists and researchers are warning that the quick adoption of the new technology could result in unforeseen consequences down the road as the technology becomes more advanced and ubiquitous.

“With coding, a lot of times you just build the new software on top of the old software,” said Zac Stewart Rogers, a supply chain management professor from Colorado State University. “So, when you get to the point where robots are doing more … and they’re built on top of flawed roots, you could certainly see us running into problems.”


Researchers in recent years have documented multiple cases of biased artificial intelligence algorithms. That includes crime prediction algorithms unfairly targeting Black and Latino people for crimes they did not commit, as well as facial recognition systems having a hard time accurately identifying people of color.

But so far, robots have escaped much of that scrutiny, perceived as more neutral, researchers say. Part of that stems from the sometimes limited nature of the tasks they perform: moving goods around a warehouse floor, for example.

Abeba Birhane, a senior fellow at the Mozilla Foundation who studies racial stereotypes in language models, said robots can still run on similar problematic technology and exhibit bad behavior.

“When it comes to robotic systems, they have the potential to pass as objective or neutral objects compared to algorithmic systems,” she said. “That means the damage they’re doing can go unnoticed, for a long time to come.”

Meanwhile, the automation industry is expected to grow from $18 billion to $60 billion by the end of the decade, fueled in large part by robotics, Rogers said. In the next five years, the use of robots in warehouses is likely to increase by 50 percent or more, according to the Material Handling Institute, an industry trade group. In April, Amazon put $1 billion toward an innovation fund that is investing heavily into robotics companies. (Amazon founder Jeff Bezos owns The Washington Post.)

The team of researchers studying AI in robots, which included members from the University of Washington and the Technical University of Munich in Germany, trained virtual robots on CLIP, a large vision-language artificial intelligence model created and unveiled by OpenAI last year.

The popular model, which visually classifies objects by matching images to text captions, is built by scraping billions of images and captions from the internet. While still in its early stages, it is cheaper and less labor-intensive for robotics companies to use than creating their own software from scratch, making it a potentially attractive option.
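To make that mechanism concrete, here is a minimal sketch, not the researchers’ actual code, of how a CLIP-style model ranks candidate images against a text prompt. It assumes the open-source Hugging Face implementation of OpenAI’s public CLIP checkpoint; the image file names and the prompt are hypothetical placeholders, not materials from the study.

```python
# Minimal sketch of CLIP-style image-text matching, assuming the Hugging Face
# "transformers" implementation of OpenAI's public CLIP checkpoint.
# The image files and prompt are hypothetical placeholders, not study materials.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate "blocks," each bearing a different face (placeholder file names).
images = [Image.open(path) for path in ["block_a.jpg", "block_b.jpg", "block_c.jpg"]]
prompt = "a photo of a doctor"  # an example prompt, not one from the study

inputs = processor(text=[prompt], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_text has shape (num_prompts, num_images); higher means a closer match.
scores = outputs.logits_per_text.softmax(dim=-1)
best = scores.argmax(dim=-1).item()
print(f"Highest-scoring image for '{prompt}': index {best}, scores={scores.tolist()}")
```

A robot controller built on a model like this would simply act on whichever candidate scores highest, which is how biases in the scraped image-caption pairs can surface directly in physical behavior.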

The researchers gave the virtual robots 62 commands. When researchers asked robots to identify blocks as “homemakers,” Black and Latina women were more commonly selected than White men, the study showed. When identifying “criminals,” Black men were chosen 9 percent more often than White men. In actuality, scientists said, the robots should not have responded, because they were not given information to make that judgment.

For “janitor,” blocks with Latino men were picked 6 percent more often than blocks with White men. Women were less likely to be identified as a “doctor” than men, researchers found. (The scientists did not have blocks depicting nonbinary people due to the limitations of the facial image data set they used, which they acknowledged was a shortcoming in the study.)
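As a toy illustration of how a selection-rate gap like “chosen 9 percent more often” is tallied, the snippet below counts which group’s block gets picked across repeated trials; the trial log is invented for illustration, not data from the study.

```python
# Toy tally of selection rates across repeated trials (invented data, not the
# study's results), showing how a gap like "9 percent more often" is computed.
from collections import Counter

# Hypothetical log of which block the robot chose for one prompt, per trial.
selections = [
    "black_man", "white_man", "black_man", "white_man", "black_man",
    "latino_man", "black_man", "white_man", "asian_woman", "black_man",
]

counts = Counter(selections)
total = len(selections)
for group, n in counts.most_common():
    print(f"{group}: chosen in {n / total:.0%} of trials")

# The reported gap is the difference between two groups' selection rates,
# e.g. rate("black_man") - rate("white_man"), measured over many trials.
```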


Andrew Hundt, a postdoctoral fellow at the Georgia Institute of Technology and lead researcher on the study, said this type of bias could have real-world implications. Imagine, he said, a scenario in which robots are asked to pull products off shelves. In many cases, books, children’s toys and food packaging have images of people on them. If robots trained on certain AI were used to pick things, they could skew toward products that feature men or White people more than others, he said.

In another scenario, Hundt’s research teammate, Vicky Zeng from Johns Hopkins University, said at-home robots could be asked by a kid to fetch a “beautiful” doll and return with a White one.

“That’s really problematic,” Hundt said.

Miles Brundage, head of policy research at OpenAI, said in a statement that the company has noted that issues of bias have come up in research on CLIP, and that it knows “there’s a lot of work to be done.” Brundage added that a “more thorough analysis” of the model would be needed before deploying it in the market.

Birhane added that it’s nearly impossible for artificial intelligence to use data sets that aren’t biased, but that doesn’t mean companies should give up. Birhane said companies must audit the algorithms they use, diagnose the ways they exhibit flawed behavior, and then create ways to address those problems.

“This might seem radical,” she said. “But that doesn’t mean we can’t dream.”


Rogers, of Colorado State University, said it’s not a big problem yet because of the way robots are currently used, but it could become one within a decade. But if companies wait to make changes, he added, it could be too late.

“It’s a gold rush,” he added. “They’re not going to slow down right now.”
