AI companies must disclose to the California Department of Technology their testing protocols and the guardrails they put in place. If the tech causes “critical harm,” the state’s attorney general can sue the company.
Wiener’s bill comes amid an explosion of state bills addressing artificial intelligence, as policymakers across the country grow wary that years of inaction in Congress have created a regulatory vacuum that benefits the tech industry. But California, home to many of the world’s largest technology companies, plays a singular role in setting precedent for tech industry guardrails.
“You can’t work in software development and ignore what California is saying or doing,” said Lawrence Norden, the senior director of the Brennan Center’s Elections and Government Program.
Federal legislators have held numerous hearings on AI and proposed several bills, but none has passed. AI regulation advocates now worry that the pattern of debate without action that played out on previous tech issues, such as privacy and social media, will repeat itself.
“If Congress at some point is able to pass a strong pro-innovation, pro-safety AI law, I’ll be the first to cheer that, but I’m not holding my breath,” Wiener said in an interview. “We need to get ahead of this so we maintain public trust in AI.”
Wiener’s party has a supermajority in the state legislature, but tech companies have fought hard against regulation in the past in California, and they have strong allies in Sacramento. Still, Wiener says he thinks the bill can be passed by the fall.
“We’ve been able to pass some very, very tough technology-related policies,” he said. “So yes we can pass this bill.”
California isn’t the only state pushing AI legislation. There are 407 AI-related bills currently active across 44 U.S. states, according to an analysis by BSA The Software Alliance, an industry group that includes Microsoft and IBM. That’s a dramatic increase since BSA’s last analysis in September 2023, which found states had introduced 191 AI bills.
Several states have already enacted laws that address acute risks of AI, including its potential to exacerbate hiring discrimination or create deepfakes that could disrupt elections. About a dozen states have passed laws requiring the government to study the technology’s impact on employment, privacy and civil rights.
But as the most populous state in the U.S., California has unique power to set standards that have impact across the country. For decades, California’s consumer protection regulations have essentially served as national and even international standards for everything from harmful chemicals to cars.
In 2018, for example, after years of debate in Congress, the state passed the California Consumer Privacy Act, setting rules for how tech companies could collect and use people’s personal information. The U.S. still doesn’t have a federal privacy law.
Wiener’s bill largely builds on an October executive order by President Biden that uses emergency powers to require companies to perform safety tests on powerful AI systems and share those results with the federal government. The California measure goes further, explicitly requiring hacking protections, protecting AI-related whistleblowers and forcing companies to conduct testing.
The bill will likely be met with criticism from a large section of Silicon Valley that argues regulators are moving too aggressively and risk enshrining systems that make it difficult for start-ups to compete with big companies. Both the executive order and the California legislation single out large AI models — something some start-ups and venture capitalists have criticized as a shortsighted view of how the technology may develop.
Last year, a debate raged in Silicon Valley over the risks of AI. Prominent researchers and AI leaders from companies including Google and OpenAI signed a letter stating that the tech was on par with nuclear weapons and pandemics in its potential to cause harm to civilization. The group that organized that statement, the Center for AI Safety, was involved in drafting the new legislation.
Tech workers, CEOs, activists and others were also consulted on the best way to approach regulating AI, Wiener said. “We’ve done enormous stakeholder outreach over the past year.”
The important thing is that there’s a real conversation about the risks and benefits of AI taking place, said Josh Albrecht, co-founder of AI start-up Imbue. “It’s good that people are thinking about this at all.”
Experts expect the pace of AI legislation to only accelerate as companies release increasingly powerful models this year. The proliferation of state-level bills could lead to greater industry pressure on Congress to pass AI legislation, because complying with a federal law may be easier than responding to a patchwork of different state laws.
“There’s a huge benefit to having clarity across the country on laws governing artificial intelligence, and a strong national law is the best way to provide that clarity,” said Craig Albright, BSA’s senior vice president for U.S. government relations. “Then companies, consumers, and all enforcers know what’s required and expected.”
Any California legislation could have a key impact on the development of artificial intelligence more broadly because many of the companies developing the technology are based in the state.
“The California state legislature and the advocates that work in that state are much more attuned to technology and to its potential impact, and they are very likely going to be leading,” said Norden.
States have a long history of moving faster than the federal government on tech policy. Since California passed its 2018 privacy law, nearly a dozen other states have enacted their own laws, according to an analysis from the International Association of Privacy Professionals.
States have also sought to regulate social media and children’s safety, but the tech industry has challenged many of those laws in court. Later this month, the Supreme Court is scheduled to hear oral arguments in landmark cases over social media laws in Texas and Florida.
At the federal level, partisan battles have distracted lawmakers from developing bipartisan legislation. Senate Majority Leader Charles E. Schumer (D-N.Y.) has set up a bipartisan group of senators focused on AI policy that’s expected to soon unveil an AI framework. But the House’s efforts are far less advanced. At a Post Live event on Tuesday, Rep. Marcus J. Molinaro (R-N.Y.) said House Speaker Mike Johnson called for a working group on artificial intelligence to help move legislation.
“Too often we are too far behind,” Molinaro said. “This last year has really caused us to be even further behind.”