Governments used to lead innovation. On AI, they’re falling behind.
BLETCHLEY, Britain — As Adolf Hitler rained terror on Europe, the British government recruited its best and brightest to this secret compound northwest of London to break Nazi codes. The Bletchley Park efforts helped turn the tide of war and lay the groundwork for the modern computer.

But as countries from six continents concluded a landmark summit on the risks of artificial intelligence at the same historic site as the British code breakers Thursday, they faced a vexing modern-day reality: Governments are no longer in control of strategic innovation, a fact that has them scrambling to contain one of the most powerful technologies the world has ever known.

Already, AI is being deployed on battlefields and campaign trails, with the capacity to alter the course of democracies, undermine or prop up autocracies, and help determine the outcomes of wars. Yet the technology is being developed under the veil of corporate secrecy, largely outside the sight of government regulators, with the scope and capabilities of any given model jealously guarded as proprietary information.

During World War II, “and to some extent in the Cold War, you could get the nation’s most brilliant scientists to work on projects of national interest,” said Stuart Russell, a noted professor of computer science at the University of California at Berkeley. “But that’s not true anymore.”

The tech companies driving this innovation are calling for limits — but on their own terms. OpenAI CEO Sam Altman has suggested that the government needs a new regulator to address future advanced AI models, but the company continues to plow forward, releasing increasingly advanced AI systems. Tesla CEO Elon Musk signed onto a letter calling for a pause on AI development but is still pushing ahead with his own AI company, xAI.

“They are daring governments to take away the keys, and it’s quite difficult because governments have basically let tech companies do whatever they wanted for decades,” Russell said. “But my sense is that the public has had enough.”

The lack of government controls on AI has largely left an industry built on profit to self-police the risks and moral implications of a technology capable of next-level disinformation, ruining reputations and careers, even taking human life.

That may be changing. This week in Britain, the European Union and 27 countries including the United States and China agreed to a landmark declaration to limit the risks and harness the benefits of artificial intelligence. The push for global governance took a step forward, with unprecedented pledges of international cooperation by allies and adversaries.

On Thursday, top tech leaders including Altman, DeepMind founder Demis Hassabis and Microsoft President Brad Smith sat around a circular table with U.S. Vice President Kamala Harris, British Prime Minister Rishi Sunak and other global leaders. The executives agreed to allow experts from Britain’s new AI Safety Institute to test models for risks before their release to the public. Sunak hailed this as “the landmark achievement of the summit,” as Britain agreed to two testing partnerships: one with the newly announced U.S. Artificial Intelligence Safety Institute and one with Singapore.

But few details have been provided about how the testing would work, and the agreements are largely voluntary. It is also unclear how the testing will differ from the mandates outlined in the White House’s executive order or the voluntary pledges it secured from AI companies.

Observers say the global effort — with follow-up summits planned in South Korea and France in six months and one year, respectively — remains in its relative infancy and is being far outpaced by the speed of development of wildly powerful AI tools.

Musk, who attended the two-day event, mocked government leaders by sharing a cartoon on social media that depicted them saying that AI was a threat to humankind and that they couldn’t wait to develop it first.

Companies now control the lion’s share of funding for tech and science research and development in the United States, in a reversal from the World War II and Cold War eras. U.S. businesses accounted for 73 percent of spending on such research in 2020, according to data compiled by the National Center for Science and Engineering Statistics. That’s a dramatic shift from 1964, when government funding accounted for 67 percent of this spending.

That paradigm shift has created a geopolitical vacuum, with new institutions urgently needed to enable governments to balance the opportunities presented by AI with national security concerns, said Dario Gil, IBM’s senior vice president and director of research.

“That is being invented,” Gil said. “And if it looks a little bit chaotic, it’s because it is.”

He said this week’s Bletchley declaration, as well as the recent announcements of two government AI safety institutes, one in Britain and one in the United States, were steps toward that goal.

In the 1940s, the British ramped up the critical operation at Bletchley that would grow to 9,000 scientists, researchers and engineers — including pioneering minds like Alan Turing, who theorized about thinking computers, and Max Newman and Tommy Flowers, who helped conceive, design and build the code-breaking Colossus, an early programmable electronic computer.

The power of their discoveries generated moral questions. The Allies were forced to decide whether to risk letting the Germans know their codes had been broken by responding to decrypted messages describing imminent attacks — or to allow innocent deaths in order to safeguard that knowledge for the broader war effort.

As with the United States’ dropping of the atomic bomb on Japan, those decisions were made by governments ultimately accountable to electorates. In contrast, today’s leading minds in AI are laboring in private companies whose driving interests may not dovetail with national, or even global, security.

“It is very concerning that tech companies have as much power and the amount of resources that they have now, because obviously there is nobody democratically elected [inside them] who’s telling the tech companies what to do,” said Mar Hicks, associate professor of data science at the University of Virginia.

Today, governments and regions are taking a piecemeal approach, with the E.U. and China moving the fastest toward heavier-handed regulation. Seeking to cultivate the sector even as they warn of AI’s grave risks, the British have staked out the lightest touch on rules, calling their strategy a “pro-innovation” approach. The United States — home to the largest and most sophisticated AI developers — is somewhere in the middle, placing new safety obligations on developers of the most advanced AI systems but stopping short of rules that would stymie development and growth.

At the same time, American lawmakers are considering pouring billions of dollars into AI development amid concerns of competition with China. Senate Majority Leader Charles E. Schumer (D-N.Y.), who is leading efforts in Congress to develop AI legislation, said legislators are discussing the need for a minimum of $32 billion in funding.

For now, the United States is siding with cautious action. Tech companies, said Paul Scharre, executive vice president of the Center for a New American Security, are not necessarily loved in Washington by Republicans or Democrats. And President Biden’s recent executive order marked a notable shift from the more laissez-faire policies on tech companies in the past.

But there’s no doubting that Americans are treading more lightly than, say, those in Europe — where an AI Act expected to be hashed out by December would outright ban the highest-risk algorithms and impose massive penalties on violators.

“I’ve heard some people make the argument the government just needs to sit back and just trust these companies and that the government doesn’t have the technical experience to regulate this technology,” Scharre said. “I think that’s a recipe for disaster. These companies aren’t accountable to the general public. Governments are.”

For authoritarian states including Russia and China, AI poses particular benefits and risks, as they scramble to control, sometimes prohibit and often harness the technology for state uses. During the Russian invasion of Ukraine, a manipulated voice recording circulated purporting to be Ukrainian President Volodymyr Zelensky telling the population to lay down their arms. A relatively rudimentary deepfake, it nevertheless suggested the promise of AI as a weapon of obfuscation in war — and one that could be refined by magnitudes in the near future.

Yet the technology is seen in Moscow and Beijing as a double-edged sword — with ChatGPT, for instance, banned in Russia for giving users westernized answers to questions about the Ukraine invasion, including use of the banned term “war.”

China’s inclusion in the Bletchley declaration disappointed some of the summit’s attendees, including Michael Kratsios, the former Trump-appointed chief technology officer of the United States. Kratsios said he attended a Group of 20 summit meeting in 2019 where officials from China agreed to AI principles, including a commitment that “AI actors should respect human rights and democratic values throughout the AI system life cycle.” Yet China has rolled out new rules in recent months to keep AI bound by “core socialist values” and in compliance with the country’s vast internet censorship regime.

“Just like with almost anything else when it comes to international agreements, they proceeded to flagrantly violate [the principles],” said Kratsios, who is now the managing director of Scale AI. He added that it was a “mistake” to believe the country would comply with the new Bletchley declaration.

Meanwhile, civil society advocates who were sidelined from the main event at Bletchley Park say governments are moving too slowly — perhaps dangerously so. Beeban Kidron, a British baroness who has advocated for children’s safety online, warned that regulators risk repeating the mistakes of their responses to tech companies in recent decades, an approach that “has privatized the wealth of technology and outsourced the cost to society.”

“It is tech exceptionalism that poses an existential threat to humanity, not the technology itself,” Kidron said in a speech Thursday at a competing event in London.
