Existing civil rights laws will be used against biased AI, officials pledge

Regulators across the Biden administration on Tuesday unveiled a plan to enforce existing civil rights laws against artificial intelligence systems that perpetuate discrimination, as the rapid evolution of ChatGPT and other generative artificial intelligence tools exacerbates long-held concerns about bias in American society.

With AI increasingly used to make decisions about hiring, credit, housing and other services, top leaders from the Equal Employment Opportunity Commission and other federal watchdogs warned about the risk of “digital redlining.” The officials said they are concerned that faulty data sets and poor design choices could perpetuate racial disparities. They promised to use existing law to combat those harms.

“There is no AI exemption to the laws on the books,” said Federal Trade Commission Chair Lina Khan (D), one of several regulators who appeared during the Tuesday news conference to signal a “whole of government” approach.

The event was intended as a demonstration of Washington’s determination to grapple with how AI is transforming Americans’ lives — for better or worse — and reflected years-long concerns from Biden administration regulators that Silicon Valley represents a new battleground for racial justice.

Charlotte A. Burrows, the chair of the Equal Employment Opportunity Commission, called rapid AI development a “new civil rights frontier.”

Though this generation of regulators has long sounded the alarm about the risks posed by AI, its work has taken on greater urgency as tech companies engage in an arms race following the release of ChatGPT.

In a sign of the increasing prevalence of such tools, the Republican National Committee on Tuesday released an ad using AI-generated images to counter Biden’s announcement that he would run for reelection. The spot showed a series of images suggesting that if Biden were reelected, financial systems would crumble and U.S. borders would be overrun, with a disclosure that said “built entirely with AI imagery.”

AI boosters and critics alike have recently descended upon Washington, eager to court regulators and policymakers as ChatGPT and similar tools fascinate the public with their uncanny ability to generate humanlike conversations and art. In a sign of Washington’s growing interest, the Commerce Department’s National Artificial Intelligence Advisory Committee convened top tech executives and academics Tuesday to discuss ways the government could regulate AI.

At that meeting, committee members discussed a draft of a report on artificial intelligence prepared for President Biden that calls on him to create a “Chief Responsible AI Officer” charged with coordinating the AI response across the federal government. It also called for more funding for the National Institute of Standards and Technology, the federal agency that develops standards for new technology.

The committee said it would convene a panel to examine bias in technologies used by law enforcement, such as facial recognition.

Washington’s debate about AI has been influenced by warnings about its risks, including a recent letter signed by Twitter CEO Elon Musk and other technologists that called for tech companies to impose a moratorium on AI development until the systems are better understood.

Some regulators have urged policymakers to focus on ways AI can be abused now, rather than on hypothetical future risks.

“AI is being used right now to decide who to hire, who to fire, who gets a loan, who stays in the hospital and who gets sent home,” FTC Commissioner Alvaro Bedoya tweeted this month. “I am much more worried about those current, real-life uses of AI than potential downstream existential threats.”

Before joining the FTC, Bedoya led research into how the government’s use of facial recognition and other technologies harms marginalized groups.

Many of Biden’s appointees, from the White House to the Justice Department, had hoped to use the levers of government to counter what they perceived as the ways tech platforms could be used to discriminate against marginalized groups. But as Biden announces his reelection bid and attention turns to the 2024 election, their big ambitions are colliding with a ticking clock.

At Tuesday’s news conference, regulators from the FTC, the Consumer Financial Protection Bureau, the Justice Department and the EEOC expressed urgency about addressing not only rapid developments in generative artificial intelligence but also the algorithms that have long influenced employment, finances and other areas of the American economy.

“We are looking not just at what we’re seeing being deployed now, but also down the road to be prepared for and to sort of really put up some guardrails,” said Burrows, the EEOC chair, who said her agency is evaluating what kind of expertise it needs to seek in new hires to address generative AI.

Rohit Chopra (D), the director of the CFPB, said his agency is exploring how it can work with tech industry whistleblowers who might flag where their own companies are running afoul of the law. He also said the agency was examining potential risks in chat apps, especially when generative AI makes it more difficult for people to trust the source of messages.

Khan, the FTC chair, said the agency recently launched a new Office of Technology to bolster its tech expertise.
