
TL;DR: The New Yorker's April 2026 investigation of OpenAI is the most detailed insider account yet of how AI safety commitments can erode under commercial pressure — and it lands a governance challenge directly in the lap of every enterprise buying frontier AI.
This is no ordinary tech-leadership scandal. Drawing on more than 100 interviews, secret memos compiled by co-founder Ilya Sutskever, and extensive contemporaneous notes from former OpenAI VP and current Anthropic CEO Dario Amodei, "Sam Altman May Control Our Future—Can He Be Trusted?" by Ronan Farrow and Andrew Marantz documents a pattern of alleged deception by Sam Altman and a systematic dismantling of safety-first governance at the world's most influential AI company.
The memos don't just tell a story about one company — they expose the structural tension between AI safety and commercial velocity that every organization deploying AI now has to confront.
OpenAI's flagship safety team reportedly received far less compute than it was publicly promised before being dissolved entirely in 2024, according to insider documents.
The most damning thread in the New Yorker investigation centers on OpenAI's superalignment team — the research group tasked with ensuring future AI systems remain aligned with human values. When OpenAI announced the team in mid-2023, it came with a public commitment: 20% of the compute the company had secured to date would be dedicated to alignment research over a four-year horizon.
According to the documents obtained by the New Yorker, and consistent with contemporaneous coverage of the team's collapse, the superalignment team's compute requests were repeatedly deferred or denied — with actual allocation falling well short of the public pledge — while product-facing teams received expanding allocations. The team was effectively dissolved in 2024. Jan Leike, who co-led it alongside Sutskever, said on his way out that safety had "taken a backseat to shiny products"; the New Yorker's memos provide the internal paper trail behind that statement.
The dissolution signals a broader industry pattern: AI safety commitments made during fundraising cycles may not survive contact with competitive pressure. The question for enterprises becomes: how do you evaluate whether your AI vendor's safety claims are backed by actual resource allocation?
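One way to make that question concrete is to compare what a vendor has publicly pledged with what it actually discloses. The sketch below is purely illustrative: `pledge_shortfall` is a made-up helper, and the 5% "reported" figure is a placeholder, not a number from the investigation.

```python
# Illustrative only: compare a vendor's publicly pledged safety-compute share
# with the share it actually reports. Neither figure below comes from the article.
def pledge_shortfall(pledged_share: float, reported_share: float) -> float:
    """Fraction of a safety-compute pledge left unmet; 0.0 means fully honored."""
    if pledged_share <= 0:
        raise ValueError("pledged_share must be positive")
    return max(0.0, 1 - reported_share / pledged_share)

# A 20% pledge (the figure OpenAI made publicly) against a hypothetical 5% observed share.
print(f"{pledge_shortfall(0.20, 0.05):.0%} of the pledge unmet")  # -> 75% of the pledge unmet
```

The arithmetic is trivial; the point is that the comparison is only possible if the vendor discloses its actual allocation, which is exactly the transparency the memos suggest was missing.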
Dario Amodei's contemporaneous notes from his OpenAI tenure describe a pattern of alleged misrepresentations by Sam Altman — but Amodei now runs a direct competitor, and that bias matters.
Dario Amodei left OpenAI in 2021 to found Anthropic, taking several key safety researchers with him. At the time, the departure was framed as a philosophical difference about organizational structure; the New Yorker's access to his notes reveals the split was far more adversarial than publicly known.
It is worth stating plainly: Amodei is not a disinterested observer. He is the CEO of OpenAI's most direct competitor for enterprise contracts, talent, and regulatory favor — that bias has to be weighed. The New Yorker acknowledges this and argues the contemporaneous nature of the writing lends it evidentiary weight regardless.
The notes reportedly detail specific instances of these alleged misrepresentations; the sharpest contrasts between OpenAI's public narrative and what the memos describe are summarized in the table below.
The memos also recontextualize the board changes that bracketed Altman's November 2023 firing and reinstatement. The investigation argues he systematically advocated for board members less likely to challenge commercial decisions — culminating in the exit of those who had prioritized safety oversight. Today's OpenAI board, closer to a traditional corporate governance model, is presented as the end result of that multi-year effort.
Sutskever's role in the November 2023 board crisis — where he initially voted to fire Altman, then appeared to reverse course — has been one of the most analyzed episodes in recent tech history. The New Yorker's access to roughly 70 pages of Slack messages, HR documents, and personal analysis he compiled in fall 2023 adds crucial context. One memo reportedly states bluntly that "Sam exhibits a consistent pattern of lying." Sutskever has since founded Safe Superintelligence Inc., a move the memos frame as a direct response to his OpenAI experience.
| Aspect | Public Narrative | Memo Revelations |
|---|---|---|
| Superalignment Resources | 20% of compute committed | A fraction of the pledge actually allocated |
| Board Changes | Governance modernization | Allegedly reduced safety oversight |
| Sutskever's Departure | Amicable transition | Documented pattern of frustration |
| Safety Review Process | Rigorous and independent | Allegedly shortened for product launches |
The New Yorker piece arrived just as AI-industry tensions spilled into outright violence. On April 10, 2026, a 20-year-old suspect — who reportedly traveled from Texas intending to confront Altman over AI's "purported risk to humanity," per the federal complaint — was arrested after allegedly throwing an incendiary device at Altman's San Francisco residence; he now faces attempted murder and federal arson-related charges. Local coverage and international reporting confirm the timeline.
The attack is independent of the exposé itself, but the juxtaposition is striking. The normalization of physical threats against tech executives is a dangerous trajectory regardless of one's view on OpenAI's governance.
For executives making AI investment decisions, the investigation surfaces three evaluation criteria:
- Resource allocation: what share of compute and headcount a vendor actually dedicates to safety research, versus what it has publicly pledged.
- Structural enforcement: whether governance mechanisms can enforce safety commitments independently of commercial leadership.
- Safety-team retention: whether the researchers responsible for safety work stay, and what departing leaders say on their way out.
The next chapter is already in motion. The week of April 27, 2026, Musk v. Altman opened in Oakland federal court. Musk's fraud claim was dropped before trial — his case proceeds on remaining claims tied to OpenAI's transition from nonprofit to capped-profit structure and alleged breaches of founding agreements, themes that overlap directly with the New Yorker piece. Whether the documentary record surfaced in discovery aligns with or contradicts the Sutskever and Amodei notes will be a defining test of tech CEO accountability in the AI era.
For organizations navigating AI strategy amid this uncertainty, the lesson is clear: governance due diligence on AI partners is no longer optional — it's a core business risk.
According to the New Yorker's documents, and consistent with CNBC's May 2024 reporting, the superalignment team was chronically under-resourced despite OpenAI's public 20% compute commitment. Key leaders, including Jan Leike, departed while publicly citing safety deprioritization, and the team was effectively dissolved in 2024.
The governance crisis highlights vendor risk in enterprise AI strategy. Organizations should evaluate AI partners not only on technical capability but on governance transparency, leadership stability, safety-team retention, and whether safety commitments are structurally enforced rather than merely aspirational.
Ask three specific questions: What share of compute and headcount is dedicated to safety research? What governance mechanisms exist to enforce safety commitments independent of commercial leadership? And what is the retention rate among safety-focused researchers? Public commitments without structural enforcement can erode under commercial pressure.
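To make those questions auditable rather than rhetorical, a buyer could encode them in a simple scorecard. The sketch below is a hypothetical structure with arbitrary thresholds, not an established due-diligence framework.

```python
from dataclasses import dataclass

@dataclass
class GovernanceScorecard:
    """Hypothetical AI-vendor scorecard built around the three questions above."""
    vendor: str
    safety_compute_share: float         # share of compute dedicated to safety research
    safety_headcount_share: float       # share of research headcount on safety teams
    independent_oversight: bool         # enforcement independent of commercial leadership
    safety_researcher_retention: float  # 12-month retention rate among safety researchers

    def flags(self) -> list[str]:
        """Return due-diligence flags; thresholds are illustrative, not standards."""
        issues = []
        if self.safety_compute_share < 0.10:
            issues.append("safety compute share looks thin relative to public pledges")
        if self.safety_headcount_share < 0.05:
            issues.append("few researchers assigned to safety work")
        if not self.independent_oversight:
            issues.append("no structural enforcement independent of commercial leadership")
        if self.safety_researcher_retention < 0.80:
            issues.append("elevated attrition among safety researchers")
        return issues
```

The specific thresholds matter less than the exercise: asking a vendor to supply these numbers in writing is itself a test of whether its safety commitments are structurally enforced or merely aspirational.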