
California fights for AI laws amid Trump plan to curb regulation

SAN FRANCISCO — Republican delegates meeting this week in Milwaukee pledged to roll back federal restrictions on artificial intelligence, while other allies of former president Donald Trump laid plans for “Manhattan Projects” to boost military AI.

But in California, the state’s Democratic-controlled legislature is moving in the opposite direction, debating a proposal that would force the biggest and best-funded companies to test their AI for “catastrophic” risks before releasing it to the public.

The measure, written by Scott Wiener, a Democratic state senator from San Francisco, has drawn howls from tech industry leaders, who argue it would scare off technologists aiming to build AI tools in the state and add bureaucratic busywork that might box out scrappy start-ups.

Opponents of the bill have claimed it could even result in developers being sent to jail if their tech is used to harm people, something Wiener has vociferously denied.

After the bill was approved by a California Senate committee earlier this month, Google’s head of AI and emerging tech policy, Alice Friend, wrote a letter to the committee’s chairman arguing that its provisions are “not technically feasible” and “would punish developers even if they have acted responsibly.”

Wiener says the law is necessary to forestall the most extreme potential risks of AI and instill trust in the technology. Its passage is urgent, he said, in light of Republican commitments to undo President Biden’s 2023 executive order, which uses the Defense Production Act to require AI companies to share information about safety testing with the government.

“This action by Trump makes it all the more important for California to act to promote robust AI innovation,” Wiener said on X last week.

The bill has established Sacramento as ground zero for the battle over government regulation of AI. It is also shedding light on the limits of Silicon Valley’s enthusiasm for government oversight, even as key leaders such as OpenAI CEO Sam Altman publicly urge policymakers to act.

By mandating previously voluntary commitments, Wiener’s bill has gone further than tech leaders are willing to accept, said Nicol Turner Lee, director of the Center for Technology Innovation at the Brookings Institution.

It is “suggesting that Big Tech needs to be much more accountable,” Turner Lee said, “and that was not well received among industry.”

Dylan Hoffman, TechNet’s executive director for California and the Southwest, said Friend’s letter, along with earlier letters from Meta and Microsoft, shows the “weight and importance” the companies place on the measure. “It’s a pretty extraordinary step for them … to step out from behind the trade association and put their name on the letter.”

Spokespeople for Google, OpenAI and Meta declined to comment. “Microsoft has not taken a position on the bill and will continue to maintain support for federal legislation as the primary means to regulate the issues it addresses,” said Robyn Hines, senior director of government affairs at Microsoft.

Even before Wiener unveiled his bill in February, California had established itself as the nation’s de facto tech legislature. After years of debate in Congress, California passed the country’s most wide-ranging digital privacy law in 2018. And California’s Department of Motor Vehicles has become a key regulator of autonomous vehicles.

On AI, Biden’s executive order last October marked Washington’s most extensive effort to regulate the booming technology. But Republicans have announced plans to repeal the order if Trump wins Nov. 5, leaving states to carry the flag for stricter AI regulation.

More than 450 bills involving AI have been active in legislative sessions in state capitals across the nation this year, according to TechNet, an industry trade association whose members include OpenAI and Google. More than 45 are pending in California, though many have been abandoned or held up in committee.

But Wiener’s bill is the most prominent and controversial of the batch. It would require any AI company training models above a set threshold of computing power to test whether those models could lead to “catastrophic” risks, such as helping people develop chemical or biological weapons, hacking into key infrastructure or blacking out power grids. The companies would submit safety reports to a new government office, the Frontier Model Division, or FMD, which would have the power to update which AI models are covered by the law, something opponents say could introduce even more uncertainty.

The bill also tasks the government with creating a public cloud computing system for researchers and start-ups, allowing them to develop AI without bearing the massive expense of Big Tech’s cloud services.

Dan Hendrycks, founder of the nonprofit Center for AI Safety, consulted on the bill. Last year he organized an open letter signed by prominent AI researchers and executives claiming that AI could be as dangerous to humanity as nuclear war and pandemics.

Others argue such risks are overblown and unlikely to materialize for years, if ever. And skeptics of the bill point out that even if such risks were imminent, there is no standard way to test for them.

“Size is the wrong metric,” said Oren Etzioni, an AI researcher and founder of the AI deepfake detection nonprofit TrueMedia.org. “We could have models that this doesn’t touch but are much more potentially dangerous.”

The focus on “catastrophic” risks has also frustrated some AI researchers who say the technology poses more tangible harms, such as injecting racist and sexist bias into tech tools and giving tech companies yet another venue to vacuum up people’s private data. Other bills moving through the California legislature aim to tackle those issues.

The bill’s focus on catastrophic risks even led Meta’s head of AI, Yann LeCun, to call Hendrycks an “apocalyptic cult guru.”

“The idea that taking societal-scale risks from AI seriously makes one an ‘apocalyptic cult guru’ is absurd,” Hendrycks said.

Hendrycks recently launched a company called Gray Swan, which builds software to assess the safety and security of AI models. On Thursday, the tech news site Pirate Wires published an article alleging that the company represents a conflict of interest for Hendrycks, because it might win business helping companies comply with the AI law if it passes.

“Critics have accused me of an elaborate scheme to make money, when in fact I’ve spent my professional career working to advance AI safety issues,” Hendrycks said. “I disclosed what is a theoretical conflict of interest as soon as I was able, and whatever I stand to gain from this tiny startup is a minuscule fraction of the economic stakes driving the behavior of those who oppose the bill.”

Although Hendrycks has lately been criticized by some Silicon Valley denizens, leaders of the companies opposing the law have issued similar warnings about the danger of powerful AI models. Senior AI executives from Google, Microsoft and OpenAI signed the letter that Hendrycks’s group circulated in May last year warning that humanity faced a “risk of extinction from AI.” At a congressional hearing in the same month, Altman said that AI could “cause significant harm to the world.”

OpenAI also joined with fellow start-up Anthropic, Google and other tech companies last year to start an industry group to develop safety standards for new and powerful AI models. Last week, the tech trade association ITI, whose members include Google and Meta, released a set of best practices for “high-risk AI systems” that include proactive testing.

Still, those same companies are pushing back on the idea of writing commitments into law.

In a June 20 letter organized by start-up incubator Y Combinator, founders railed against placing extra scrutiny on projects that use a large amount of computing power. “Such specific metrics may not adequately capture the capabilities or risks associated with future models,” the letter said. “It is crucial to avoid over-regulating AI.”

Start-up leaders are also concerned that the bill would make it harder for companies to develop and release “open source” technology, which is available for anyone to use and change. In a March post on X, now-Republican vice-presidential candidate J.D. Vance described open source as key to building models free of the political bias of OpenAI’s and Google’s tech.

Wiener has altered the bill in response to industry feedback and criticism, including by stipulating that open-source developers aren’t responsible for safety problems that emerge from third parties changing their tech. Industry critics say those tweaks aren’t enough.

Meanwhile, other bills working their way through the California legislature have drawn less notice from the tech industry.

Assemblymember Rebecca Bauer-Kahan, a Democrat who represents a suburban swath of the eastern Bay Area, wrote several AI bills moving through the chamber, including one requiring companies to test AI models for biases. Another of her bills would ban developers from using the personal information of children to train their AI models without parental consent, potentially challenging the tech industry practice of scraping training data from websites.

AI bills introduced by other California legislators would require tech companies to release summaries describing the data used to develop AI models, create tools to detect AI-generated content, and apply digital watermarks to make AI-generated content identifiable, as some companies, including Google, have already attempted.

“We would love for the federal government to take a lead here,” Bauer-Kahan said. “But in the absence of them functioning and passing laws like this, Sacramento feels the need to step up.”
