In contrast to far-off fears of the technology ending humanity, the spotlight is on concrete hazards borne out last year by a flood of AI-generated fakes and the automation of jobs in copywriting and customer service. The debate has taken on new urgency amid global efforts to regulate the swiftly evolving technology.
“Last year, the conversation was ‘gee whiz,’” Chris Padilla, IBM’s vice president of government and regulatory affairs, said in an interview. “Now, it’s what are the risks? What do we have to do to make AI trustworthy?”
The topic has taken over the confab: Panels with AI CEOs including Sam Altman are the hottest ticket in town, and tech giants including Salesforce and IBM have papered the snow-covered streets with ads for trustworthy AI.
But the mounting anxieties about the perils of AI are casting a pall over the tech industry’s marketing blitz.
The event opened Tuesday with Swiss President Viola Amherd calling for “global governance of AI” and raising concerns that the technology could supercharge disinformation as a throng of countries heads to the polls. At a sleek cafe Microsoft set up across the street, CEO Satya Nadella sought to assuage concerns that the AI revolution would leave the world’s poorest behind, following the release of an International Monetary Fund report this week that found the technology is likely to worsen inequality and stoke social tensions. Over canapés and cocktails down the street at the Alpine Inn, Google CFO Ruth Porat promised to work with policymakers to “develop responsible regulation” and touted the company’s investments in efforts to retrain workers.
But the calls for a response have laid bare the limits of this annual summit, as efforts to coordinate a global approach to the technology are hampered by economic tensions between the world’s leading AI powers, the United States and China.
Meanwhile, countries hold competing geopolitical interests when it comes to regulating AI: Western governments are weighing rules that stand to benefit the companies within their borders while leaders in India, South America and other parts of the Global South see the technology as the key to unlocking economic prosperity.
The AI debate is a microcosm of a broader paradox looming over Davos, as attendees strap on their snow boots to sample pricey wine, go on sledding excursions and belt out classic rock hits in a piano lounge sponsored by the cybersecurity firm Cloudflare. The relevance of the conference founded more than 50 years ago to promote globalization during the Cold War is increasingly in question, amid raging wars in Ukraine and the Middle East, rising populism and climate threats.
In a speech Wednesday, U.N. Secretary General António Guterres raised the dual perils of climate chaos and generative AI, noting that they were “exhaustively discussed” by the Davos set.
“And yet, we do not have an effective global strategy to deal with either,” he said. “Geopolitical divides are preventing us from coming together around global solutions.”
It’s clear tech companies are not waiting for governments to catch up, and legacy banks, media companies and accounting firms at Davos are weighing how to incorporate AI into their businesses.
Davos regulars say growing investment in AI is evident on the promenade, where companies take over storefronts to host meetings and events. In recent years, buzzwords like Web3, blockchain and crypto dominated those shops. But this year, the programming shifted to AI. Hewlett Packard Enterprise and the Emirati firm G42 even sponsored an “AI House,” which converted a chalet-style building into a gathering spot to listen to speakers including Meta chief AI scientist Yann LeCun, IBM CEO Arvind Krishna and MIT professor Max Tegmark.
The promenade effectively serves as “a focus group for the next emerging tech wave,” said veteran WEF attendee Dante Disparte, chief strategy officer and head of global policy at Circle.
Executives signaled that AI will become an even more influential force in 2024, as companies build more advanced AI models and developers use those systems to power new products. At a panel hosted by Axios, Altman said the overall intelligence of OpenAI’s models was “increasing across the board.” Long-term, he predicted the technology would “vastly accelerate the rate of scientific discovery.”
But even as the company powers ahead, he said he worries politicians or bad actors might abuse the technology to influence elections. He said OpenAI doesn’t yet know what election threats will arise this year but that it will attempt to make changes quickly and work with outside partners. On Monday, as the conference was kicking off, the company rolled out a set of election protections, including a commitment to help people identify when images were created by its image generator, DALL-E.
“I’m nervous about this, and I think it’s good that we’re nervous about this,” he said.
OpenAI, which has fewer than 1,000 employees, has a significantly smaller team working on elections than large social media companies such as Meta and TikTok. Altman defended the company’s commitment to election security, saying team size was not the best way to measure a company’s work in this area. But The Washington Post found last year that the company does not enforce its existing policies on political targeting.
Policymakers remain fearful that the companies aren’t thinking enough about the social implications of their products. At the same event, Eva Maydell, a member of the European Parliament, said she is developing recommendations for AI companies ahead of global elections.
“This year’s theme of the annual meeting is rebuilding trust,” said Maydell, who worked on the bloc’s AI Act, which is expected to become law this year following a December political deal. “I very much hope this won’t be the year that we’re going to lose trust in our democratic processes because of disinformation, because of the inability to explain truth.”