On his first trip to the White House days after the letter’s release, Sunak argued that the event would unite global leaders around the booming technology — helping them to seize the benefits of AI while limiting its risks.
“The U.S. and the U.K. are the world’s foremost democratic AI powers,” Sunak said, standing feet away from President Biden in the White House’s East Room. “So today, President Biden and I agreed to work together on AI safety.”
But the summit, which kicks off Wednesday, has instead highlighted diverging priorities between 10 Downing Street and the White House. Sunak has focused his rhetoric on doomsday scenarios, warning in a speech Thursday of an extreme — though unlikely — possibility that “humanity could lose control of AI completely.” The White House, meanwhile, has focused its efforts on tangible problems, issuing a sweeping new executive order Monday that aims to prevent the technology from exacerbating bias, displacing workers and undermining national security.
The meeting opens with global leaders including Vice President Harris, technology executives including Tesla chief executive Elon Musk, and leading AI scientists and civil society groups traveling to Bletchley Park, the once-secret home of the famous World War II code breakers who decrypted Nazi messages.
Despite Sunak’s warnings that AI could be used to build chemical weapons or spread child sexual abuse material, the United Kingdom has so far taken a light-touch approach to regulation. Sunak has attempted to position the country as the world’s third AI power, behind the United States and China.
“The U.K.’s answer is not to rush to regulate,” Sunak said in his speech.
Many researchers and civil society leaders argue that Sunak’s position is a strategic move to court influential tech companies as the pound slides and the economy tips toward recession. They warn that such unproven claims could distract policymakers from new laws addressing existing problems, directing attention instead toward the not-yet-realized risks of AI-powered weapons and super-intelligence.
“AI is not a topic of the future, but is already causing problems in the present,” said Marietje Schaake, a former member of the European Parliament and the special adviser to the European Commission implementing the Digital Services Act. “We need democratic regulation and independent oversight.”
Amid concerns that the U.K.’s guest list would over-index on industry leaders, Harris’s office played a role in ensuring civil society leaders were invited to the summit, according to a person familiar with the matter who spoke on the condition of anonymity to discuss the effort. Alexandra Reeve Givens, chief executive of the Center for Democracy and Technology; AI researcher Deb Raji; and Alondra Nelson, who formerly led the White House Office of Science and Technology Policy, are expected to attend.
Britain has struggled for relevance in the post-Brexit world, and it has sought to distinguish its policies from the aggressive line the European Union has taken against U.S. tech giants. At home, the push for AI regulation is viewed as image-building for the deeply unpopular Sunak, particularly as his Conservative Party prepares for elections no later than January 2025. Observers say the summit, and Britain’s AI policy overall, amount to a quest to recapture clout, as well as a blatant economic calculation.
The stakes are high: The U.K.’s AI market is valued at more than $21 billion, and it is estimated to balloon, adding $1 trillion to the U.K. economy by 2035, according to a September 2022 report from the International Trade Administration.
Even amid his warnings of an AI takeover, Sunak has been viewed as tech-friendly. The U.K. has signaled an unobtrusive approach to AI, unveiling a white paper titled “A pro-innovation approach to AI regulation.”
In his speech Thursday, Sunak announced a new global AI Safety Institute in Britain that would “carefully examine, evaluate, and test new types of AI.” He offered few specifics about how the agency would function or whether companies would face any legal requirement to submit their models for assessment. Sunak said the British government has invested 1 billion pounds in supercomputing and 2.5 billion pounds in quantum computing, and he announced a 100 million pound investment in using AI to discover treatments for diseases.
This funding influx could position Britain as a more attractive destination for AI development and jobs: Already, leading generative AI companies Anthropic and OpenAI have opened offices in London in the past year.
Joe White, the U.K.’s technology envoy to the United States, said the prime minister chose to focus on existential risk so that he would not duplicate work happening in other forums, such as the Group of Seven. The G-7, which includes the United States and the U.K., on Monday unveiled a code of conduct for companies building AI systems, which has similar principles to the Biden executive order but is completely voluntary.
“This is a new class of challenge that we collectively as a global society face,” White said, adding that the summit is focused on making sure AI systems don’t “get out of control.”
Even as the U.K. and United States signal different AI priorities, the countries continue to have common ground. The executive order directed agencies that fund life-science projects to take steps to prevent AI from being used to engineer dangerous biological materials. Karen Pierce, the British ambassador to the United States, said the new measures “make an important contribution to our shared international agenda on AI safety.”
Industry leaders have applauded Sunak’s focus for the summit. Ensuring the longer-term risk of super-intelligent AI is understood and contained is just as important as trying to mitigate near-term risks such as bias and disinformation, said Demis Hassabis, chief executive of DeepMind, Google’s AI division.
“You don’t want to wait till the eve of AGI when you discuss AGI risk,” Hassabis said, referring to artificial general intelligence, an industry term for a theorized AI that matches or surpasses human intelligence.
By holding the summit at Bletchley Park, the once top-secret home of Alan Turing and other World War II-era cryptographers, the British are sending a symbolic message about their storied place in technological history while seeking to declare themselves a vital part of its future.
Yet the U.K. trails far behind the United States in AI industrial power, resulting in different regulatory priorities. The United States has emerged as a global leader in artificial intelligence, as OpenAI’s ChatGPT and other U.S. companies’ generative AI products spark international fear and acclaim.
“Our friends in the U.K. are very much invested in growing the sector,” said Reggie Babin, senior counsel at Akin Gump Strauss Hauer & Feld, who previously served as chief counsel to Senate Majority Leader Charles E. Schumer (D-N.Y.). “There may be some reticence in limiting what can be done now in order to ensure you’re able to maximize the upside of what can be done later.”
In the European Union, lawmakers are putting the final touches on broad AI legislation that would ban the highest-risk algorithms and impose steep penalties on violators. Policymakers there have been working on AI legislation for about five years and dismiss the British response as shortsighted.
“It doesn’t make any sense to put in the same phrase that, yes, you see a substantial risk from AI, and then you do nothing about it,” said Dragos Tudorache, co-rapporteur of the AI committee in the European Parliament.
He said Sunak began sounding the alarm in earnest only after an open letter in March from 1,000 tech industry leaders, including Musk, warned of AI’s risks.
Until then, “in the U.K., everyone was asleep at the wheel, from the Parliament all the way to the executive.”