Sam Altman’s ouster at OpenAI exposes growing rift in AI industry


SAN FRANCISCO — At noon Friday, Sam Altman logged onto Google Meet and found himself face-to-face with his board of directors.

The CEO of the pioneering artificial intelligence company OpenAI had spent the previous day at the exclusive Asia-Pacific Economic Cooperation conference in San Francisco, where he talked up the potential of artificial intelligence and its impact on humanity. The week before, Altman had been on a different stage, announcing OpenAI’s latest product road map and expansion plans.

Now, however, Altman learned that he was being fired. According to a post on X by OpenAI co-founder and president Greg Brockman, who quit the company in solidarity with Altman, the news was delivered by Ilya Sutskever, the company’s chief scientist. The power struggle revolved around Altman’s push toward commercializing the company’s rapidly advancing technology versus Sutskever’s concerns about OpenAI’s commitments to safety, according to people familiar with the matter.

The schism between Altman and Sutskever mirrors a larger rift in the world of advanced AI, where a race to dominate the market has been accompanied by a near-religious movement to prevent AI from advancing beyond human control. While questions remain about what spurred the board’s decision to oust Altman, growing tensions had become impossible to ignore as Altman rushed to launch products and turn OpenAI into the next big technology company.

His abrupt and surprising departure leaves OpenAI’s future uncertain, say venture capitalists and AI industry executives. Except for Sutskever, the remaining board members are more closely aligned with a movement to stop existential risks from advanced AI than with scaling a business. Silicon Valley funders, meanwhile, are eager to invest, already betting that Altman and Brockman will launch their own AI venture and keep the AI arms race going.

“All of a sudden, it’s open season in the AI landscape,” investor Sarah Guo, founder of Conviction AI, posted on X.

By Saturday, OpenAI’s investors were already trying to woo Altman back. “Khosla Ventures wants [Altman] back at [OpenAI] but will back him in whatever he does next,” Vinod Khosla, one of the company’s investors, said in a post on X. Altman and Brockman could not be reached for comment.

Senior OpenAI executives said they were “completely surprised” and had been speaking with the board to try to understand the decision, according to a memo sent to employees on Saturday by chief operating officer Brad Lightcap that was obtained by The Washington Post.

“We still share your concerns about how the process has been handled,” Lightcap said in the memo. “We can say definitively that the board’s decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board.”

Altman’s ouster also caught rank-and-file employees within OpenAI off-guard, according to a person familiar with internal conversations, speaking on the condition of anonymity to discuss private conversations. The staff is “still processing it,” the person said.

In text messages that were shared with The Post, some OpenAI research scientists said Friday afternoon that they had “no idea” Altman was going to be fired, and described being “shocked” by the news. One scientist said they were learning about what happened with Altman’s ouster at the same time as the general public.

Over the past year, some OpenAI employees have expressed concerns with Altman’s focus on building consumer products and driving up revenue, which some of those employees saw as at odds with the company’s original mission to develop AI that would benefit all of humanity, a person familiar with employees’ thinking said, speaking on the condition of anonymity. Under Altman, OpenAI had been aggressively hiring product development employees and building up its consumer offerings. Its technology was being used by thousands of start-ups and larger companies to run AI features and products that are already being pitched and sold to customers.

At OpenAI’s first-ever developer conference, Altman announced an app-store-like “GPT store” and a plan to share revenue with users who created the best chatbots using OpenAI’s technology, a business model similar to how YouTube gives a cut of ad and subscription money to video creators.

To the tech industry, the announcement signaled that OpenAI wanted to become a major player in its own right rather than limit itself to building AI models for other companies.

“This is not your standard start-up leadership shake-up. 10,000’s of start-ups are building on OpenAI,” Aaron Levie, CEO of cloud storage company Box, said on X. “This instantly changes the structure of the industry.”

OpenAI started as a nonprofit research lab launched in 2015 to safely build superhuman AI and keep it away from corporations and foreign adversaries. Believers in that mission bristled against the company’s transformation into a juggernaut start-up that could become the next big name in Big Tech.

Quora CEO Adam D’Angelo, one of OpenAI’s independent board members, told Forbes in January that there was “no outcome where this organization is one of the big five technology companies.”

“My hope is that we can do a lot more good for the world than just become another corporation that gets that big,” D’Angelo said in the interview. He did not respond to requests for comment.

Two of the board members who voted Altman out worked for think tanks backed by Open Philanthropy, a tech billionaire-backed foundation that supports projects aimed at preventing AI from posing catastrophic risks to humanity: Helen Toner, the director of strategy and foundational research grants for the Center for Security and Emerging Technology at Georgetown, and Tasha McCauley, whose LinkedIn profile says she began work as an adjunct senior management scientist at Rand Corporation earlier this year. Toner has previously spoken at conferences for a philanthropic movement closely tied to AI safety, and McCauley is also involved in that movement.

Toner occupies the board seat once held by Holden Karnofsky, a former hedge fund executive and CEO of Open Philanthropy, which invested $30 million in OpenAI to gain a board seat and steer the company toward AI safety. Karnofsky, who is married to Anthropic co-founder Daniela Amodei, left the board in 2021 after Amodei and her brother Dario Amodei, who both worked at OpenAI, departed to launch Anthropic, an AI start-up more focused on safety.

OpenAI’s board had already lost its most powerful outside members in the past several years. Elon Musk stepped down in 2018, with OpenAI saying his departure was to remove a potential conflict of interest as Tesla developed AI technology of its own. LinkedIn co-founder Reid Hoffman, who also sits on Microsoft’s board, stepped down as an OpenAI director in March, citing a conflict of interest after starting a new AI start-up called Inflection AI that could compete with OpenAI. Shivon Zilis, an executive at Musk’s brain-interface company Neuralink and one of his closest lieutenants, also left in March.

With the departures of Altman and Brockman, OpenAI is being governed by four members: Toner, McCauley, D’Angelo and Sutskever, whom OpenAI paid $1.9 million in 2016 when he joined the company as its first research director, according to tax filings. Independent directors don’t hold equity in OpenAI.

While at the University of Toronto, Sutskever helped create AlexNet, AI software that classified objects in photographs with more accuracy than any previous software had achieved, laying much of the foundation for modern computer vision and deep learning.

He recently shared a radically different vision for how AI might evolve in the near term. Within five to 10 years, there could be “data centers that are much smarter than people,” Sutskever said on a recent episode of the AI podcast “No Priors.” Such systems would surpass people not just in memory or knowledge, he suggested, but in depth of insight and speed of learning.

At the bare minimum, Sutskever added, it’s important to work on controlling superintelligence today. “Imprinting onto them a strong desire to be nice and kind to people — because those data centers,” he said, “they will be really quite powerful.”

OpenAI has a unique governing structure, which it adopted in 2019. It created a for-profit subsidiary that allowed investors a return on the money they put into OpenAI, but capped how much they could get back, with any excess flowing to the company’s nonprofit. The structure also allows OpenAI’s nonprofit board to govern the activities of the for-profit entity, including the power to fire its chief executive.
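For readers unfamiliar with that arrangement, here is a minimal sketch of the capped-return arithmetic in Python. It assumes a hypothetical 100x cap, a figure OpenAI has publicly described for its earliest investors; actual terms vary by round and are not detailed in this article.

```python
# Minimal sketch of a "capped-profit" split, assuming a hypothetical 100x cap.
def split_proceeds(invested: float, gross_return: float, cap_multiple: float = 100.0):
    """Split an investor's gross return between the investor and the nonprofit.

    invested: amount the investor put in.
    gross_return: total value attributable to that investment.
    cap_multiple: maximum multiple of the original investment the investor may keep.
    """
    cap = invested * cap_multiple
    to_investor = min(gross_return, cap)      # investor keeps returns only up to the cap
    to_nonprofit = max(gross_return - cap, 0.0)  # anything above the cap flows to the nonprofit
    return to_investor, to_nonprofit

# Example: a $10M investment whose attributable value grows to $2B.
investor_share, nonprofit_share = split_proceeds(10e6, 2e9)
print(investor_share)   # 1,000,000,000.0 (capped at 100x the original $10M)
print(nonprofit_share)  # 1,000,000,000.0 (the excess goes to the nonprofit)
```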

Microsoft, which has invested billions of dollars in OpenAI in exchange for special access to its technology, doesn’t have a board seat. Altman’s ouster was an unexpected and unpleasant surprise, according to a person familiar with internal discussions at the company who spoke on the condition of anonymity to discuss sensitive matters. A Microsoft spokesperson declined to comment on the prospect of Altman returning to the company. On Friday, Microsoft said it was still committed to its partnership with OpenAI.

As news of the circumstances around Altman’s ouster began to emerge, anger in Silicon Valley circles turned toward OpenAI’s board.

“What happened at OpenAI today is a board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs,” Ron Conway, a longtime venture capitalist who was one of the attendees at OpenAI’s developer conference, said on X. “It is shocking, it is irresponsible, and it does not do right by Sam and Greg or all the builders in OpenAI.”


