A TED AI Show interview published Tuesday afternoon with former OpenAI board member Helen Toner reveals new details about the board’s attempt to fire the company’s billionaire CEO and cofounder Sam Altman, and why she “just couldn’t believe” what he was telling the board.
In November 2023, the board of ChatGPT-maker OpenAI fired Altman after determining that he “was not consistently candid in his communications.” Days after the move, 95% of the company’s employees signed a letter threatening to quit unless Altman was reinstated.
Altman ended up getting his job back less than a week after he was fired, and he remains OpenAI’s CEO today, but one question lingered after the attempted ouster: Why exactly did the board move to fire him in the first place?
On the podcast, Toner, a research strategy director at Georgetown University, said board members reached a point where they couldn’t trust Altman.
Toner claimed that Altman was “withholding information, misrepresenting things that were happening at the company,” and “outright lying to the board” for years.
Helen Toner. Photo by Jerod Harris/Getty Images for Vox Media
Toner gave specific examples, first saying that when ChatGPT came out in November 2022, the board learned about the release through Twitter — they had not known it was coming out ahead of time.
Toner said that Altman also didn’t tell the board that he “owned” the OpenAI Startup Fund, a $175 million fund for early-stage AI companies. His ownership contradicted his claim that he was “an independent board member with no financial interest in the company,” according to Toner.
At the time of writing, the fund’s website states that “OpenAI itself is not an investor.”
Toner also accused Altman of giving the board “inaccurate information about the small number of formal safety processes that the company did have in place,” leaving the board unable to gauge how well those safety processes were working or what changes might be needed.
Several top OpenAI safety researchers recently left the company; OpenAI dissolved the group they led and this week created a new safety team led by Altman.
Two OpenAI executives also told the board that they didn’t think Altman was the right person to lead the company, providing screenshots and documentation of what they described as a “toxic” atmosphere.
“All four of us who fired him came to the conclusion that we just couldn’t believe things that Sam was telling us,” Toner said. “That’s a completely unworkable place to be in as a board, especially a board that is supposed to be providing independent oversight over the company, not just helping the CEO raise more money.”
When asked why 95% of OpenAI’s staff wanted Altman back at the helm, Toner said the situation may have been framed to employees as a binary choice: Altman’s return or the company’s destruction.
OpenAI responded to Toner’s statements by saying that it conducted an “extensive review” of the board’s attempt to fire Altman and found that the decision “was not based on concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”
The Giving Pledge announced on Tuesday that Altman has pledged to give away most of his wealth.