"Our mission is to ensure that artificial general intelligence benefits all of humanity." The nonprofit charter was the governance promise: safety as the controlling legal purpose, commercially uncapturable by design.
- November 2023: The nonprofit board fired CEO Sam Altman. Pressure from Microsoft and other investors produced his reinstatement within five days. The governance structure designed to override commercial interest failed its first test.
- May 2024: The Superalignment team was dissolved less than a year after its creation. Co-founder Ilya Sutskever and safety lead Jan Leike departed, Leike publicly citing safety concerns on his way out.
- 2025: The for-profit conversion to a public benefit corporation was initiated under the formal review of California Attorney General Rob Bonta, whose filing documented a failure to protect charitable assets.
- 2025–2026: OpenAI became the primary contributor to US federal AI policy standards while remaining the most commercially interested party in what those standards permit.
The nonprofit charter was not incidental to OpenAI's product trust claim. It was the product trust claim. The legal architecture that made "AI for all of humanity" credible was the governance constraint. That constraint was bypassed through a technically lawful structural conversion. The rule survived on paper. Its protective purpose, preventing commercial capture of the safety mission, did not survive contact with investor pressure.
"The dashboard stayed green. The governance architecture that made the safety claim credible was converted away."
When OpenAI states that a model is safe, the claim now comes from an institution that has demonstrated it will restructure itself to remove accountability when commercial pressure requires it. The safety documentation and the safety governance architecture are now decoupled.