
In short
- Ilya Sutskever prepared a 52-page case against Sam Altman based almost entirely on unverified claims from one source: CTO Mira Murati
- OpenAI came within days of merging with rival Anthropic amid the crisis, with board member Helen Toner arguing that destroying the company could be “consistent with the mission”
- The board was “rushed” and “inexperienced,” according to Sutskever himself, who had been planning Altman’s removal for at least a year while waiting for favorable board dynamics.
Ilya Sutskever provided nearly ten hours of videotaped testimony in the Musk v. Altman lawsuit on October 1 of this year.
The co-founder who helped build ChatGPT and became infamous for voting to fire Sam Altman in November 2023 was eventually put under oath and forced to answer. The 365-page transcript was released this week.
What it reveals is a portrait of brilliant scientists making catastrophic board decisions, unverified accusations treated as facts, and ideological divisions so deep that some board members preferred to destroy OpenAI rather than allow it to survive under Altman’s leadership.
The Musk v. Altman lawsuit centers on Elon Musk’s claim that OpenAI and its CEO, Altman, betrayed the company’s original nonprofit mission by turning its research into a for-profit venture aligned with Microsoft. It raises questions about who controls advanced AI models and whether they can be developed safely in the public interest.
For those following the OpenAI drama, the document is an eye-opening and damning read. It’s a case study of how things go wrong when technical genius meets organizational incompetence.
Here are the five most important revelations.
1. The 52-page dossier that the public hasn’t seen yet
Sutskever wrote an extensive case for Altman’s removal, complete with screenshots, and organized into a 52-page letter.
Sutskever testified that he explicitly said in the memo, “Sam exhibits a consistent pattern of lying, undermining his executives, and turning his executives against each other.”
He sent the memo to the independent directors using disappearing-email technology “because I was afraid those memos would leak somehow.” The full document has not been produced through discovery.
“The context for this document is that the independent board members asked me to prepare it. And I did. And I was quite careful,” Sutskever testified, saying parts of the memo appear in screenshots taken by OpenAI CTO Mira Murati.
2. A board chess game a year in the making
When asked how long he had been considering firing Altman, Sutskever replied: “At least a year.”
When asked what dynamic he was waiting for, he said: “That the majority of the board is not clearly friendly to Sam.”
A CEO who controls the composition of the board of directors is functionally untouchable. Sutskever’s testimony shows that he understood this perfectly and adjusted his strategy accordingly.
When the departure of several board members created that opening, he moved. He had played a long game of governance politics, despite how close Altman and Sutskever seemed publicly.
3. The weekend OpenAI almost disappeared
On Saturday, November 18, 2023 – within 48 hours of Altman’s firing – there were active discussions about merging OpenAI with Anthropic.
Helen Toner, a former OpenAI board member, was “the most supportive” in this direction, according to Sutskever.
If the merger had taken place, OpenAI would have ceased to exist as an independent entity.
“I don’t know if it was Helen who contacted Anthropic or whether Anthropic contacted Helen,” Sutskever testified. “But they came up with a proposal to merge with OpenAI and take over its leadership.”
Sutskever said he was “very unhappy about it,” later adding that he “really didn’t want OpenAI to merge with Anthropic.”
4. “Destroying OpenAI could be in line with the mission”
When OpenAI executives warned that the company would collapse without Altman, Toner responded that destroying OpenAI could be consistent with its safety mission.
This is the ideological heart of the crisis. Toner represented a strand of AI safety thinking that sees rapid AI development as existentially dangerous – potentially more dangerous than no AI development at all.
“The executives – it was a meeting with the board members and the executive team – told the board that, if Sam does not return, OpenAI will be destroyed, and that is not consistent with OpenAI’s mission,” Sutskever testified. “And Helen Toner said something to that effect, but I think she said it even more directly.”
If you truly believed that OpenAI posed risks that outweighed its benefits, then an impending employee uprising was irrelevant. The statement helps explain why the board remained steadfast even when more than 700 employees threatened to leave.
5. The miscalculations: one source for everything, an inexperienced board, and misjudged staff loyalty
Nearly everything in Sutskever’s 52-page memo came from one person: Mira Murati.
He did not verify the claims with Brad Lightcap, Greg Brockman, or the other executives named in the complaints. He trusted Murati completely; verification, he said, “did not occur to” him.
“I completely believed in the information Mira gave me,” Sutskever said. “In retrospect, I realize I didn’t know. But then I thought I knew. But I knew through secondhand knowledge.”
When asked about the board’s process, Sutskever was blunt about what went wrong.
“One thing I can say is that the process was rushed,” he testified. “I think it was rushed because the board was inexperienced.”
Sutskever also expected OpenAI employees to be indifferent to Altman’s removal.
When more than 700 of the company’s 770 employees signed a letter demanding Altman’s reinstatement and threatening to follow him to Microsoft, Sutskever was genuinely surprised. He had fundamentally misjudged both the staff’s loyalty and the board’s isolation from the reality of the organization.
“I didn’t expect them to cheer, but I also didn’t expect them to feel strong either way,” Sutskever said.

