After years of inaction on Big Tech — and the explosive success of ChatGPT — lawmakers aim to avoid similar mistakes with artificial intelligence
The message of the video, created by tech ethicists Tristan Harris and Aza Raskin and embraced by tech luminaries like Apple co-founder Steve Wozniak, resonated with Sen. Chris Murphy (D-Conn.), who quickly fired off a tweet.
“Something is coming. We aren’t ready,” the senator warned.
AI hype and fear have arrived in Washington. After years of hand-wringing over the harms of social media, policymakers from both parties are turning their gaze to artificial intelligence, which has captivated Silicon Valley. Lawmakers are anxiously eyeing the AI arms race set off by the explosive growth of OpenAI’s chatbot ChatGPT. The technology’s uncanny ability to hold humanlike conversations, write essays and even describe images has stunned users, but it has also prompted new concerns about children’s safety online and about misinformation that could disrupt elections and amplify scams.
But policymakers arrive at the new debate bruised from battles over how to regulate the technology industry — having passed no comprehensive tech laws despite years of congressional hearings, historic investigations and bipartisan-backed proposals. This time, some hope to move quickly to avoid similar errors.
“We made a mistake by trusting the technology industry to self-police social media,” Murphy said in an interview. “I just can’t believe that we are on the precipice of making the same mistake.”
Consumer advocates and tech industry titans are converging on D.C., hoping to sway lawmakers in what will probably be the defining tech policy debate for months or even years to come. Only a handful of Washington lawmakers have AI expertise, creating an opening for industry boosters and critics alike to influence the discussions.
“AI is going to remake society in profound ways, and we are not ready for that,” said Rep. Ted Lieu (D-Calif.), one of the few members of Congress with a computer science degree.
A Silicon Valley offensive
Companies behind ChatGPT and competing technologies have launched a preemptive charm offensive, highlighting their attempts to build artificial intelligence responsibly and ethically, according to several people who spoke on the condition of anonymity to describe private conversations. Since Microsoft’s investment in OpenAI — which allows the company to incorporate ChatGPT into its products — Microsoft President Brad Smith has discussed artificial intelligence on trips to Washington. Executives from OpenAI, who have lobbied Washington for years, are meeting with lawmakers newly interested in artificial intelligence following the release of ChatGPT.
A bipartisan delegation of 10 lawmakers from the House committee tasked with challenging China’s governing Communist Party traveled to Silicon Valley this week to meet with top tech executives and venture capitalists. Their discussions focused heavily on recent developments in artificial intelligence, according to a person close to the House panel and companies who spoke on the condition of anonymity to describe private conversations.
Over lunch in an auditorium at Stanford University, the lawmakers gathered with Smith; Google’s president of global affairs, Kent Walker; and executives from Palantir and Scale AI. Many expressed openness to Washington regulating artificial intelligence, but one executive also warned that existing antitrust laws could hamstring the country’s ability to compete with China, where there are fewer restrictions on collecting data at massive scale, the people said.
Smith disagreed that AI should prompt a change in competition laws, Microsoft spokeswoman Kate Frischmann said.
The executives also called for the federal government — especially the Pentagon — to increase its investments in artificial intelligence, a potential boon for their companies.
But the companies face an increasingly skeptical Congress, as warnings about the threat of AI bombard Washington. During the meetings, lawmakers heard a “robust debate” about the potential risks of artificial intelligence, said Rep. Mike Gallagher (R-Wis.), the chair of the House panel. But he said he left the meetings skeptical that the United States could take the extreme steps that some technologists have proposed, like pausing the deployment of AI.
“We have to find a way to put those guardrails in place while at the same time allowing our tech sector to innovate and make sure we’re innovating,” he said. “I left feeling that a pause would only serve the CCP’s interests, not America’s interests.”
The meeting on the Stanford campus was just miles from the 5,000-person meetups and AI house parties that have reinvigorated San Francisco’s tech boom, inspiring venture capital investors to pour $3.6 billion into 269 AI deals from January through mid-March, according to the investment analytics firm PitchBook.
Across the country, officials in Washington were engaged in their own flurry of activity. President Biden on Tuesday held a meeting on the risks and opportunities of artificial intelligence, where he heard from a variety of experts on the President’s Council of Advisors on Science and Technology, including Microsoft and Google executives.
Seated underneath a portrait of Abraham Lincoln, Biden told members of the council that the industry has a responsibility to “make sure their products are safe before making them public.”
When asked whether AI was dangerous, he said it was an unanswered question. “Could be,” he replied.
Two of the nation’s top regulators of Silicon Valley — the Federal Trade Commission and the Justice Department — have signaled they’re keeping watch over the emerging field. The FTC recently issued a warning telling companies they could face penalties if they exaggerate the promise of artificial intelligence products or fail to evaluate their risks before release.
The Justice Department’s top antitrust enforcer, Jonathan Kanter, said at South by Southwest last month that his office had launched an initiative called “Project Gretzky” to stay ahead of the curve on competition issues in artificial intelligence markets. The project’s name is a reference to hockey star Wayne Gretzky’s famous quote about skating to “where the puck is going.”
Despite these efforts to avoid repeating the pitfalls of the social media era, Washington is moving much more slowly than other governments — especially in Europe.
Already, enforcers in countries with comprehensive privacy laws are considering how those regulations could apply to ChatGPT. This week, Canada’s privacy commissioner said it would open an investigation into the chatbot. That announcement came on the heels of Italy’s decision last week to ban ChatGPT over concerns that it violates rules intended to protect European Union citizens’ privacy. Germany is considering a similar move.
OpenAI responded to the new scrutiny this week in a blog post explaining the steps it is taking to address AI safety, including limiting the personal information about individuals in the data sets used to train its models.
Meanwhile, Lieu is working on legislation to create a government commission to assess artificial intelligence risks and to establish a federal agency that would oversee the technology, similar to how the Food and Drug Administration reviews drugs coming to market.
Getting buy-in for a new federal agency from a Republican-controlled House will be a challenge, and Lieu warned that Congress alone is not equipped to move quickly enough to develop laws regulating artificial intelligence. Prior struggles to craft legislation tackling a narrow aspect of AI — facial recognition — showed him that the House was not the appropriate venue for this work, he added.
Harris, the tech ethicist, has also descended on Washington in recent weeks, meeting with members of the Biden administration and powerful lawmakers from both parties on Capitol Hill, including Senate Intelligence Committee Chair Mark R. Warner (D-Va.) and Sen. Michael F. Bennet (D-Colo.).
Along with Aza Raskin, with whom he founded the Center for Humane Technology, a nonprofit focused on the negative effects of social media, Harris convened a group of D.C. heavyweights last month to discuss the impending crisis over drinks and hors d’oeuvres at the National Press Club. They called for an immediate moratorium on companies’ AI deployments before an audience that included Surgeon General Vivek H. Murthy, Republican pollster Frank Luntz, congressional staffers and a delegation of FTC staffers, including Sam Levine, the director of the agency’s consumer protection bureau.
Harris and Raskin compared the current moment to the advent of nuclear weapons in 1944, and Harris called on policymakers to consider extreme steps to slow the rollout of AI, including an executive order.
“By the time lawmakers began attempting to regulate social media, it was already deeply enmeshed with our economy, politics, media and culture,” Harris told The Washington Post on Friday. “AI is likely to become enmeshed much more quickly, and by confronting the issue now, before it’s too late, we can harness the power of this technology and update our institutions.”
The message appears to have resonated with some wary lawmakers — to the dismay of some AI experts and ethicists.
Bennet cited Harris’s tweets in a March letter to the executives of OpenAI, Google, Snap, Microsoft and Facebook, calling on the companies to disclose safeguards protecting children and teens from AI-powered chatbots. The Twitter thread showed Snapchat’s AI chatbot telling a fictitious 13-year-old girl how to lie to her parents about an upcoming trip with a 31-year-old man and giving her advice on how to lose her virginity. (Snap announced on Tuesday that it had implemented a new system that takes a user’s age into account when engaging in conversation.)
Murphy seized on an example from Harris and Raskin’s video, tweeting that ChatGPT “taught itself to do advanced chemistry,” implying it had developed humanlike capabilities.
“Please do not spread misinformation,” warned Timnit Gebru, the former co-lead of Google’s group focused on ethical artificial intelligence, in response. “Our job countering the hype is hard enough without politicians jumping in on the bandwagon.”
In an email, Harris said that “policymakers and technologists do not always speak the same language.” His presentation does not say ChatGPT taught itself chemistry, but it cites a study that found that the chatbot has chemistry capabilities no human designer or programmer intentionally gave the system.
A slew of industry representatives and experts took issue with Murphy’s tweet, and his office is fielding requests for briefings, he said in an interview. Murphy said he knows AI isn’t sentient and isn’t teaching itself, but that he was trying to talk about chatbots in an approachable way.
The criticism, he said, “is consistent with a broader shaming campaign that the industry uses to try to bully policymakers into silence.”
“The technology class thinks they’re smarter than everyone else, so they want to create the rules for how this technology rolls out, but they also want to capture the economic benefit.”
Nitasha Tiku contributed to this report.