The OpenAI Board Coup: What It Revealed About AI Safety vs Profit


On the evening of Friday, 17th November 2023, OpenAI's board of directors did something nearly unprecedented in Silicon Valley: they fired one of tech's most celebrated founders with minimal explanation and no apparent plan for what came next.

Sam Altman, the CEO who had transformed OpenAI from a research lab into the company behind ChatGPT, was out. The board's statement said only that he "was not consistently candid in his communications with the board" and that it "no longer has confidence in his ability to continue leading OpenAI."

What followed was the most dramatic corporate governance crisis in AI history. Over five days, the world watched as employees threatened mass resignation, Microsoft deployed its leverage as OpenAI's primary investor, and the board that had fired Altman capitulated, reinstating him under a fundamentally changed governance structure.

The crisis wasn't just corporate drama. It exposed tensions that sit at the heart of AI development: safety versus speed, mission versus profit, governance versus capital. Understanding what happened—and why—matters for everyone thinking about how AI will be developed, deployed, and controlled.

The Timeline: Five Days That Shook AI

Friday, 17th November

3:28pm ET: The OpenAI board posts a brief statement announcing Altman's departure, effective immediately. No prior warning had reached employees, investors, or the public.

The board's official reason: Altman "was not consistently candid in his communications with the board." No specifics provided. No additional explanation offered.

The same announcement names chief technology officer Mira Murati as interim CEO. Within hours, President Greg Brockman resigns in solidarity with Altman. Three senior researchers announce departures. The company's Slack channels reportedly explode with confusion and concern.

Notably absent from the decision: Microsoft, which had invested $13 billion in OpenAI and built its AI strategy around the partnership, was reportedly notified minutes before the public announcement.

Saturday-Sunday, 18th-19th November

Behind closed doors, frantic negotiations begin. Multiple parties attempt to understand what happened and whether reconciliation is possible.

The board reportedly offers Altman the opportunity to return if he accepts various conditions. Altman reportedly considers but ultimately declines.

Microsoft CEO Satya Nadella offers Altman and any departing OpenAI employees positions at Microsoft to lead a new AI research group. The offer is genuine—Microsoft would recreate the team if OpenAI collapsed.

An employee letter begins circulating demanding the board's resignation and Altman's reinstatement. It will eventually gather more than 700 of OpenAI's approximately 770 employees as signatories—over 90% of the company threatening to leave.

Monday, 20th November

The employee letter goes public. The mass resignation threat becomes real and visible. Employees make clear they will leave en masse and that Microsoft has offered to hire them all.

The board's position becomes untenable. With virtually the entire workforce threatening departure, the entity they govern would cease to exist as a functioning organisation.

Emmett Shear, the former Twitch CEO, is named interim CEO—a position he will hold for only a few days.

Tuesday-Wednesday, 21st-22nd November

Negotiations intensify. The board ultimately agrees to Altman's return with conditions including a new board composition.

Helen Toner and Tasha McCauley, two of the directors who voted to remove Altman, agree to step down; Ilya Sutskever also leaves the board, whilst Adam D'Angelo remains. A new board forms with a different composition and, crucially, a different relationship to the safety-focused mission.

Wednesday, 22nd November

OpenAI announces an agreement in principle for Altman's return as CEO. The new initial board comprises Bret Taylor (former Salesforce co-CEO), Larry Summers (former US Treasury Secretary), and Adam D'Angelo (continuing from the original board).

The crisis ends where it began: Altman at the helm of OpenAI. But the governance structure that enabled the firing has been fundamentally altered.

The Governance Structure That Made This Possible

Understanding the crisis requires understanding OpenAI's unusual structure—a nonprofit board controlling a for-profit subsidiary.

OpenAI was founded in 2015 as a nonprofit research organisation with a mission to ensure artificial general intelligence benefits all of humanity. The nonprofit structure was intentional: AI safety was the mission, not profit maximisation.

In 2019, recognising the need for capital to compete in increasingly expensive AI research, OpenAI created a "capped-profit" subsidiary. Outside investors could invest and receive returns, but those returns were capped at 100 times the original investment for the earliest backers. Beyond that cap, excess value would flow to the nonprofit.
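
To make the mechanics concrete, here is a minimal sketch of how a capped-return split works in principle. The function name and figures are illustrative assumptions; OpenAI's actual profit-participation terms are more complex and not fully public.

```python
def split_returns(investment: float, total_return: float, cap_multiple: float = 100.0):
    """Illustrative split of gross returns under a capped-profit model.

    Assumes a single investor whose lifetime payout is capped at
    cap_multiple times the original investment; anything above the
    cap flows to the controlling nonprofit.
    """
    investor_cap = investment * cap_multiple                  # most the investor can ever receive
    investor_share = min(total_return, investor_cap)          # paid out up to the cap
    nonprofit_share = max(total_return - investor_cap, 0.0)   # excess accrues to the nonprofit
    return investor_share, nonprofit_share


# Hypothetical example: a $10m stake capped at $1bn, against $2.5bn of attributable returns.
investor, nonprofit = split_returns(10_000_000, 2_500_000_000)
print(f"Investor receives ${investor:,.0f}, nonprofit receives ${nonprofit:,.0f}")
# Investor receives $1,000,000,000, nonprofit receives $1,500,000,000
```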

The critical design element: the nonprofit board retained control. The for-profit subsidiary was legally subordinate to the nonprofit's mission. The board's fiduciary duty was to the mission—beneficial AI for humanity—not to shareholder value.

This structure enabled OpenAI to raise billions whilst theoretically preserving safety-focused governance. Microsoft's $13 billion investment came with the understanding that the nonprofit board could, in theory, slow or stop development for safety reasons.

The November crisis tested whether this theoretical power was practical.

What the Board Actually Claimed

The board never provided detailed public explanation for firing Altman. "Not consistently candid" was the only official reason. But subsequent reporting and analysis suggest deeper concerns.

Board members reportedly believed Altman was moving too fast on commercialisation, prioritising product releases over safety considerations. The rapid progression from ChatGPT's consumer explosion to GPT-4 and an accelerating stream of product launches had outpaced the safety-focused research the organisation was supposedly prioritising.

Specific concerns reportedly included:

Premature product launches: Releasing ChatGPT and subsequent features before safety evaluation was complete.

Commercial prioritisation: Focusing resources on product development and revenue growth rather than safety research.

Communication issues: Withholding information from the board about company activities, partnerships, or AI capability developments.

Power consolidation: Acquiring influence over the organisation through hiring decisions, investor relationships, and public prominence in ways that compromised board authority.

None of these concerns have been officially confirmed. What's clear is that board members believed their safety-focused mission was being subordinated to commercial imperatives—and they used their unusual governance power to intervene.

What Actually Happened: Power Analysis

The board's intervention failed. Understanding why illuminates the actual power dynamics in AI development.

The board had legal authority to fire the CEO. But legal authority without practical power is merely words on paper.

Microsoft's leverage: With $13 billion invested and its entire AI strategy dependent on OpenAI's models, Microsoft had existential interest in continuity. When Altman's departure threatened that continuity, Microsoft deployed its leverage—not through board votes it didn't control, but through the credible offer to recreate OpenAI with the same people.

Employee alignment: The workforce viewed Altman as their leader, not the board. When forced to choose between a governance structure they barely knew and a CEO who had built the culture, created the products, and (for many) recruited them personally, employees overwhelmingly chose Altman.

Market pressure: OpenAI's commercial value came from its people and momentum. A governance intervention that destroyed both destroyed the value Microsoft was trying to protect, that employees were trying to preserve, and that the broader AI ecosystem had come to depend on.

The safety-focused board had authority. It lacked the network of aligned interests that makes authority operational.

The Stakeholder Map

Different stakeholders wanted different things, and understanding their interests explains the resolution.

The Board Majority That Voted to Remove Altman (Toner, McCauley, D'Angelo, Sutskever)

Interest: Preserve the safety mission that was OpenAI's founding purpose. Slow commercialisation to allow safety research to catch up. Maintain governance structure that prioritised mission over profit.

Power: Formal authority to hire/fire CEO. Control over organisational charter.

Weakness: No operational control. No employee loyalty. No capital of their own.

Sam Altman

Interest: Build OpenAI into dominant AI company. Ship products. Grow commercial relationships. Maintain personal leadership position.

Power: Employee loyalty. Investor relationships. Public prominence. Operational knowledge.

Weakness: Formal subordination to the board. No ownership stake in the nonprofit-controlled structure.

Microsoft

Interest: Protect $13 billion investment. Maintain access to OpenAI models for Azure and Microsoft products. Ensure continuity of partnership.

Power: Capital. Alternative employment offers. Commercial relationship leverage.

Weakness: No board representation (by design). Dependency on OpenAI for AI strategy.

Employees

Interest: Job security. Equity value (in capped-profit subsidiary). Continuing to work on cutting-edge AI. Working under leadership they respected.

Power: Collective indispensability. Knowledge and skills that made OpenAI valuable.

Weakness: Individual vulnerability. Scattered without coordination.

OpenAI Users and the Broader Ecosystem

Interest: Continued access to ChatGPT and API services. Stability of critical infrastructure. Progress in AI capabilities.

Power: Market demand that drove OpenAI's value.

Weakness: No direct voice in governance.

The resolution favoured the coalition of Altman, Microsoft, and employees against the board. Capital and labour aligned against governance.

What This Reveals About AI Safety Governance

The crisis offers several lessons about AI safety governance that extend beyond OpenAI.

Lesson 1: Voluntary Governance Structures Are Unstable

OpenAI's unusual governance was voluntary—designed by founders, not required by law. When voluntary governance conflicts with powerful interests, voluntary governance loses.

The board could fire the CEO but couldn't prevent employees from following that CEO to competitors. The board could prioritise safety but couldn't survive the capital withdrawal that prioritising safety provoked.

This suggests that voluntary safety governance by individual organisations is insufficient. If safety-focused governance can be dissolved whenever it becomes inconvenient, it provides limited actual constraint.

Lesson 2: Capital and Labour Often Align Against Safety

The standard narrative positions safety governance as protecting users against corporate interests. The OpenAI crisis revealed a different dynamic: capital (Microsoft) and labour (employees) aligned to defeat safety governance.

This alignment makes sense when you examine incentives. Employees' equity is worth more if OpenAI ships products and grows revenue. Microsoft's investment returns require commercial success. Safety governance that slows commercialisation threatens both.

The users who might benefit from safety governance—and the future humans affected by AI safety decisions—have no seat at the table and no leverage in the negotiation.

Lesson 3: Concentrated AI Development Creates Concentrated Risk

The crisis illustrated how much depends on a small number of organisations and people. When OpenAI's governance failed, there was no backup. The AI ecosystem had concentrated capability in one organisation, and that concentration created systemic risk.

Alternatives exist (Anthropic, Google DeepMind, open-source projects), but the concentration at the frontier creates dynamics where individual organisational decisions—and crises—have outsized impact.

Lesson 4: The Current Governance Framework Is Inadequate

Neither the OpenAI model (nonprofit control of for-profit) nor the standard corporate model (shareholder value maximisation) provides adequate AI safety governance.

The nonprofit model failed when tested. The standard model doesn't even attempt safety priority.

Something different is needed—likely regulatory rather than voluntary, likely governmental rather than organisational.

Implications for AI Development

The crisis resolution has ongoing consequences for how AI develops.

Short-Term: Accelerated Commercialisation

With the safety-focused board members departed and governance restructured, the constraints on OpenAI's commercialisation weakened. Product releases continue apace. GPT-4 variants, enhanced ChatGPT features, API expansion—the pace hasn't slowed.

Microsoft's leverage increased. The resolution demonstrated that when Microsoft's interests conflict with nonprofit governance, Microsoft wins. Future safety-focused interventions are less credible after this failure.

Medium-Term: Regulatory Momentum

The crisis accelerated regulatory attention on AI governance. Legislators watched a nonprofit board attempt to slow AI development for safety reasons and get steamrolled by commercial interests. This undermined the argument that self-governance is sufficient.

The EU AI Act, UK AI Safety Institute, and US executive orders on AI all proceeded partly in recognition that corporate governance—even unusually safety-focused corporate governance—is insufficient.

Long-Term: Governance Architecture Questions

The fundamental question the crisis raised remains unanswered: how should AI development be governed?

The options on the table include:

Corporate self-governance: The default, with safety considerations subordinate to commercial interests. The pre-crisis status quo.

Enhanced corporate structures: Mission-oriented governance like OpenAI attempted, but with stronger protections against the dynamics that overwhelmed them.

Regulatory frameworks: Government-mandated safety requirements, testing protocols, deployment restrictions. The EU AI Act represents this approach.

International coordination: Global agreements on AI development, analogous to nuclear non-proliferation frameworks.

Each approach has advocates and critics. What's clear is that the OpenAI crisis demonstrated the limitations of the first approach and the fragility of the second.

The Individuals: Who Did What

A full account requires acknowledging the individuals and their choices.

Sam Altman emerged strengthened. His leadership of OpenAI continues. His network of relationships proved more powerful than formal governance. Whether this represents victory for AI progress or defeat for AI safety depends on your priors.

Ilya Sutskever, OpenAI's chief scientist, initially voted to fire Altman and then signed the employee letter demanding his return. A leading AI safety researcher who helped build the very capabilities that create safety concerns, he embodies the tension; his reversal illustrated how difficult it is to act on safety concerns when institutional pressures point the other way. He subsequently departed OpenAI to pursue safety-focused research elsewhere.

Helen Toner and Tasha McCauley, the board members who voted to fire Altman and subsequently departed, have largely remained silent about their reasoning. Their decision to prioritise mission over continuity was genuine governance. Its failure doesn't mean they were wrong about the concerns.

Microsoft's Satya Nadella demonstrated corporate power deployed effectively. His immediate offer to hire departing OpenAI staff was credible, timely, and decisive. It changed the game theory facing employees considering resignation.

The 700-plus employees who signed the letter showed collective power, but also how that power was directed: they chose continuity and familiar leadership over a governance intervention they didn't understand or support.

What We Still Don't Know

Important questions remain unanswered and may never be answered:

What specifically did Altman do that "was not consistently candid"? The board's stated reason was never elaborated. Without specifics, evaluation is impossible.

What AI capabilities or plans prompted the timing? Speculation has ranged from artificial general intelligence progress to commercial partnerships to internal conflicts. None has been confirmed.

What did Ilya Sutskever see that changed his mind? His initial vote to fire, then reversal to sign the employee letter, suggests something shifted his analysis. What?

What agreements were reached in reinstatement negotiations? The new governance structure is public. Any private commitments remain private.

How do current OpenAI employees view safety governance? The employee letter opposed the board's intervention but didn't necessarily oppose safety governance per se. Current attitudes are unclear.

Key Takeaways

The November 2023 OpenAI crisis exposed fundamental tensions between AI safety governance and commercial interests.

The unusual governance structure—nonprofit board controlling for-profit subsidiary—failed when tested. Legal authority without aligned practical power couldn't withstand capital and labour coalitions.

Stakeholder analysis reveals why: Microsoft's $13 billion investment, employees' equity value, and Altman's relationships created aligned interests that overwhelmed the board's formal authority.

The resolution favoured continuity over safety intervention. The safety-focused board members departed. The governance structure was reformed to prevent similar interventions.

Voluntary corporate governance of AI safety is demonstrably insufficient. The OpenAI model was more safety-focused than typical corporate structures, and it still failed.

Regulatory approaches become more credible when self-governance fails visibly. The crisis accelerated governmental attention to AI governance.

The question of how to govern AI development remains open. Corporate self-governance, enhanced corporate structures, regulatory frameworks, and international coordination all remain debated options.

For individuals navigating AI tool choices, the crisis illustrates that the organisations building AI have complex internal dynamics. Safety considerations compete with commercial pressures, and commercial pressures usually win.

Chaos operates in this evolving AI landscape, aiming to bring AI capabilities into everyday productivity whilst keeping the focus on user value rather than hype. The industry's governance questions will shape which AI tools it becomes possible, and responsible, to build over the coming decade.
