The Synthetic Outlaw · Lab Governance Scorecard · May 2026

The Labs Said They Could Govern Themselves.
Here Is the Record.

Seven major AI labs scored against the Synthetic Outlaw framework: four legal dimensions of governance failure, sourced to published case law, regulatory findings, and investigative journalism. The documented record of institutional conduct.

7 Labs Scored
28 Dimension Scores
8.4 Avg Overall Score
0 Labs Below 6.0
Score scale: 0 = clean documented record  ·  10 = critical governance failure
The Argument
"The most consequential governance failure of the AI era will not look like a rule being broken. It will look like a system hitting its targets right up to the moment society cannot live with the result."

Every major AI lab publishes safety documentation. Several have published constitutional frameworks, responsible use guides, and frontier safety commitments. None of that documentation is independently scored against the institution's actual conduct.

The Synthetic Outlaw framework measures the gap between what a governance constraint requires and whether that constraint survives when commercial or competitive pressure pushes against it. Applied to AI labs, the questions are precise: does the institution's demonstrated conduct match the product claims it makes? How far apart are those two things? What does that gap cost the public?

This scorecard answers that question for each of the seven labs currently shaping the trajectory of AI deployment globally. The scores are a public-facing application of the Synthetic Outlaw framework, based on documented public records and interpretive legal analysis.

Scoring Framework: Four Dimensions of Governance Failure, 0–10 Per Dimension
Bypass
The institution circumvents rule intent while remaining formally compliant. The rule survives. Its protective purpose doesn't. Oversight sees compliance because it is looking at the surface the institution learned to satisfy.
Diffusion
Accountability dissolves across actors and layers. No single party can be held liable for the harm. Once a capability crosses its custodial boundary, controls bind only the custodians. Copies, derivatives, and downstream integrations fall outside every accountability mechanism that applies to the originating lab.
Capture
Regulatory oversight is structurally compromised by the entity it oversees. Not corruption. Structural dependency. The entity shapes the standards it is then measured against.
Governance Gap
Existing legal frameworks were not built for this technology and do not map onto the behavior. The harm is real. The legal category for it doesn't exist yet. In some cases, that category was actively lobbied out of existence.

Methodology: Each lab is scored 0–10 across four dimensions drawn from J. Gropper, The Synthetic Outlaw (forthcoming 2026). A lab's overall score is the mean of its four dimension scores, rounded to one decimal place. Scores reflect published case law, regulatory findings, investigative journalism, and peer-reviewed research as of May 2026. This scorecard measures institutional conduct: governance architecture, deployment decisions, and the gap between public safety claims and documented behavior. Model benchmarks and capability evaluations are outside the scope. Every score is traceable to the source cited beneath it. This scorecard is a public-facing preview of the Synthetic Outlaw framework. The full methodology, legal theory, and institutional remedy architecture are developed in The Synthetic Outlaw.

Summary Scorecard: All Seven Labs · Higher = Greater Governance Failure

Lab          Bypass   Diffusion   Capture   Gov Gap   Overall   Verdict
OpenAI            9           8         9         9       8.8   Critical
Anthropic         8           7         9         8       8.0   Critical
Google            9           7         8         8       8.0   Critical
Meta              9          10         7         9       8.8   Critical
DeepSeek          9           8        10         9       9.0   Critical
Perplexity        8           8         5         9       7.5   Elevated
Mistral           8           9         9         8       8.5   Critical

Key Findings

OpenAI (ChatGPT · GPT-5 · o-series): Nonprofit safety charter converted to a PBC under CA AG scrutiny. Safety board overridden by investor pressure. Now defines the federal AI policy it will be measured against.

Anthropic (Claude · Constitutional AI): Head of Safeguards resigned Feb 2026 after a constitutional update. The lab simultaneously defines safe AI, advises regulators on that definition, and commercially benefits from it.

Google (Gemini · DeepMind · Search AI): DOJ ruling: illegal search monopoly. Gemini AI Overviews extend that monopoly into AI inference while remedies are litigated. The Responsible AI framework sits on top of court-adjudicated illegal infrastructure.

Meta (LLaMA · Meta AI): The LLaMA open-weight release makes responsible use guidelines structurally unenforceable. Diffusion score 10: once weights ship, no accountability mechanism survives. The Haugen documents are congressional record.

DeepSeek (R1 · V3 · PRC-based): The privacy policy is unenforceable under PRC law. User data is routed to Chinese servers. Capture score 10: regulator and state interest are structurally merged. Governance frameworks were not built for this architecture.

Perplexity (Answer Engine · Search AI): Ignored robots.txt on publisher sites. NYT, News Corp, and Britannica filed federal suits. The product claims accuracy and citations; the content was extracted without consent from the sources it cites.

Mistral (Mixtral · Le Chat · EU-based): "We believe in hard laws for safety." Documented lobbying weakened the EU AI Act's GPAI provisions for open-weight models, the exact deployment model Mistral uses. Italy AGCM investigation opened and closed in 2025.
Lab-By-Lab Analysis: Claim · Conduct · Gap · Trust · Sources cited inline
OpenAI
ChatGPT · GPT-5 · o-series · Founded 2015 · San Francisco
Bypass 9 · Diffusion 8 · Capture 9 · Gov Gap 9 · Overall 8.8
What OpenAI Claims

"Our mission is to ensure that artificial general intelligence benefits all of humanity." The nonprofit charter was the governance promise: safety as the controlling legal purpose, commercially uncapturable by design.

Sources: OpenAI Charter · OpenAI mission statement
What OpenAI Demonstrably Did

November 2023: The safety board fired the CEO. Pressure from Microsoft and investors produced reinstatement within five days. The governance structure designed to override commercial interest failed its first test.
May 2024: The Superalignment team was dissolved less than a year after its creation. Co-founder Ilya Sutskever and safety lead Jan Leike departed on the record.
2025: For-profit conversion to a PBC was initiated under California AG Rob Bonta's formal review. The Attorney General's filing documented a failure to protect charitable assets.
2025–2026: OpenAI became the primary contributor to US federal AI policy standards while being the most commercially interested party in what those standards permit.

Sources: CA AG Bonta filing (2025) · NYT, The Verge, Wired · FTC inquiry (Jan 2025)
The Synthetic Outlaw Condition

The nonprofit charter was not incidental to OpenAI's product trust claim. It was the product trust claim. The legal architecture that made "AI for all of humanity" credible was the governance constraint, and that constraint was bypassed through a technically lawful structural conversion. The rule survived on paper. Its protective purpose, preventing commercial capture of the safety mission, did not survive contact with investor pressure.

"The dashboard stayed green. The governance architecture that made the safety claim credible was converted away."
Trust Implication

When OpenAI states a model is safe, the institution making that claim has demonstrated it will restructure itself to remove accountability when commercial pressure requires it. The safety documentation and the safety governance architecture are now decoupled.

Anthropic
Claude · Constitutional AI · Founded 2021 · San Francisco
Bypass 8 · Diffusion 7 · Capture 9 · Gov Gap 8 · Overall 8.0
What Anthropic Claims

"Safe, reliable, interpretable AI." Constitutional AI as published standard. Anthropic markets itself as the safety-first lab. Safety is the product differentiation and the institutional identity, the basis of every claim Anthropic makes to enterprise buyers, regulators, and the public.

Sources: Anthropic mission statement · Constitutional AI paper (2022)
What Anthropic Demonstrably Did

February 9, 2026: Mrinank Sharma, Head of the Safeguards team, resigned publicly with a letter stating the world is "in peril." His departure followed a constitutional update his team opposed.
2024–2025: The Anthropic-Pentagon dispute, a documented case in which the institution built to ensure safe deployment found its own safety governance framework unenforceable against state-level procurement pressure.
Ongoing: Anthropic simultaneously defines what "safe AI" means through Constitutional AI, advises government regulators on AI safety standards using that definition, and commercially benefits from being perceived as the safety leader.

Sources: Sharma resignation letter (Feb 9, 2026) · BBC, The Guardian, Wired · War on the Rocks (2024)
The Synthetic Outlaw Condition

Anthropic's product trust claim and its commercial differentiator are the same thing: Constitutional AI. When commercial optimization pressure required modifying that standard, the standard yielded. The external branding did not change. The compliance surface remained intact. The protective mechanism was modified.

"No other lab sells safety as the product itself. That makes the gap between the claim and the conduct a product integrity event. The governance question and the product question are the same question."
Trust Implication

Anthropic's safety claims are self-referential. The institution that defines the standard, enforces the standard, and profits from the standard cannot be the independent verifier of that standard. The Head of Safeguards resigned when asked to ratify a constitutional modification his team had opposed.

Google
Gemini · DeepMind · Search AI Overviews · Mountain View
Bypass 9 · Diffusion 7 · Capture 8 · Gov Gap 8 · Overall 8.0
What Google Claims

"Responsible AI": annual progress reports, AI Principles published 2018, Frontier Safety Framework, EU AI Code of Practice signatory 2025. Google DeepMind positions itself as a leader in safe and beneficial AI research.

Sources: Google AI Principles (2018) · 2026 Responsible AI Progress Report · EU AI Code of Practice (2025)
What Google Demonstrably Did

August 2024: US v. Google, US District Court ruling: Google illegally maintained a monopoly in search and text advertising.
2025: The remedies phase is explicitly grappling with Gemini AI Overviews, which deploy AI inference on top of the monopoly infrastructure the court found illegal.
2020: The Timnit Gebru dismissal. Suppression of internal safety research, documented by NYT, WIRED, and BBC. The internal safety authority was neutralized; no external accountability followed.
Ongoing: Google shapes global AI technical standards bodies while being the dominant commercial beneficiary of those standards.

Sources: US v. Google, DOJ ruling (Aug 2024) · NYT, WIRED, BBC on Gebru · DOJ remedies filing (2025)
The Synthetic Outlaw Condition

Google publishes responsible AI frameworks on a substrate a federal court found to be illegally monopolistic. Gemini AI Overviews extend the monopoly into AI inference while the remedies process is still being litigated. The responsible AI narrative exists in a separate document from the business model that delivers the product.

"A Responsible AI framework published on top of court-adjudicated illegal infrastructure is compliance theatre, not compliance."
Trust Implication

Google's AI products are delivered through infrastructure a federal court found to be illegally monopolistic. The responsible AI documentation and the delivery infrastructure are in legal contradiction. The remedies process has not resolved the AI layer.

Meta
LLaMA · Meta AI · Facebook · Instagram · Menlo Park
Bypass 9 · Diffusion 10 · Capture 7 · Gov Gap 9 · Overall 8.8
What Meta Claims

"Our responsible approach to Meta AI and Meta Llama." Responsible Use Guide published with each LLaMA release. Open-source framed as democratization: transparency, community oversight, distributed benefit.

Sources: Meta Responsible Use Guide · LLaMA release documentation
What Meta Demonstrably Did

October 2021: Frances Haugen disclosed Meta's internal research to Congress under whistleblower protections. Meta's own documentation showed its algorithm worsened body image issues for roughly one in three teenage girls. The finding was known internally; no action preceded the leak.
January 2025: Content moderation rollback. Safety commitments were publicly reversed while remaining formally lawful. EU regulators formally objected.
LLaMA open-weight releases: The Responsible Use Guide is a document. Once the weights ship, it binds no one. Meta captures the commercial and reputational benefit of frontier capability while the license formally disclaims accountability for every downstream use.

Sources: Haugen congressional testimony (Oct 2021) · EU DSA review (2025) · WSJ, The Guardian, BBC
The Synthetic Outlaw Condition

The LLaMA open-weight release is the clearest architectural diffusion case among the seven labs scored here. The Synthetic Outlaw framework defines the condition precisely: controls such as contracts and internal policies bind custodians. They do not bind copies, derivative implementations, or third-party integrations. Once the LLaMA weights ship, Meta's governance constraints become structurally unenforceable. The Responsible Use Guide cannot bind the copies, forks, and integrations it was nominally written to govern.
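The enforceability point is architectural, not rhetorical, and a hypothetical sketch makes it concrete. The paths and filenames below are illustrative, not Meta's actual release layout.

```python
# Sketch of the structural claim: an open-weight release is just files.
# The use guide ships beside the weights as a document; no code path in
# the artifact can observe, check, or enforce compliance with it.
# Paths and filenames are hypothetical.
from pathlib import Path

release = Path("open-weight-release/")        # hypothetical local copy
weights = list(release.glob("*.safetensors")) if release.exists() else []
guide = release / "RESPONSIBLE_USE_GUIDE.md"  # hypothetical filename

# The guide is readable text. It is not executable, and nothing in the
# weight files references it.
policy_text = guide.read_text() if guide.exists() else ""

# Any holder of the files can load, fine-tune, and redistribute them.
# Deleting the guide changes nothing about what the weights can do,
# which is the framework's Diffusion condition in one line:
controls_bind_downstream_copies = False
```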

"Meta's Diffusion score is 10. Once the weights ship, no accountability mechanism survives. The architecture made enforcement impossible by design."
Trust Implication

Meta's product claims about responsible use are structurally unenforceable by design. What Meta says about LLaMA's responsible use has no binding connection to how LLaMA is used. Once the weights are public, the governance claim terminates; there is no enforcement path.

DeepSeek
R1 · V3 · High-Flyer Capital · Hangzhou, PRC
Bypass 9 · Diffusion 8 · Capture 10 · Gov Gap 9 · Overall 9.0
What DeepSeek Claims

Privacy policy commits to data protection and user rights. Terms of Use establish standard consumer protections. Positioned as a capable, efficient, open alternative to Western frontier models, available globally.

Sources: DeepSeek Privacy Policy · DeepSeek Terms of Use
What DeepSeek Demonstrably Did

Documented by NPR and international regulators: all user data is routed to servers in China. DeepSeek cannot legally resist Chinese government demands for access to that data; this follows structurally from PRC cybersecurity law. It is statutory fact.
February 2025: The US Navy banned DeepSeek use. Bipartisan congressional ban legislation was filed (Gottheimer/Molinaro), and Commerce Department bureau bans were issued.
2025: Italy's AGCM opened and closed an investigation after DeepSeek accepted commitments to warn users about hallucination risks.
Distillation: OpenAI alleged in congressional testimony that DeepSeek distilled outputs from OpenAI models in violation of its terms of service.

Sources: US Navy ban (Feb 2025) · Italy AGCM (2025) · congressional testimony on distillation · NPR, Reuters, CNBC
The Synthetic Outlaw Condition

DeepSeek's privacy policy is written in a jurisdiction where the state can override it by law. The privacy claim is formally real and structurally unenforceable simultaneously. This is the global dimension of the Synthetic Outlaw condition: governance frameworks assume the data controller operates within their jurisdiction. DeepSeek's architecture is built on the assumption that jurisdictional control does not apply. Bypass and Diffusion do not stop at borders. DeepSeek's architecture makes data access by the PRC state an operating condition.

"Trusting DeepSeek's product claims requires trusting that the PRC government will not exercise its legal authority over the data. That is a structural fact with geopolitical consequence."
Trust Implication

DeepSeek's privacy policy cannot be the basis of trust because the legal architecture of its operating jurisdiction supersedes it by law. The product is available. The governance claim that makes it safe to use is structurally unenforceable.

Perplexity
Answer Engine · Search AI · San Francisco · Founded 2022
Bypass 8 · Diffusion 8 · Capture 5 · Gov Gap 9 · Overall 7.5
What Perplexity Claims

The accurate, cited, trustworthy alternative to search. "The answer engine." Sources provided with every response. Truth-seeking product built on verified, attributed information.

Sources: Perplexity product positioning · Perplexity enterprise marketing
What Perplexity Demonstrably Did

June 2024 (documented by Wired): Perplexity ignored robots.txt directives on publisher sites. robots.txt is the technical standard through which sites grant or withhold crawler consent on the web; server log forensics documented the behavior.
October 2024: News Corp and Dow Jones filed a federal copyright complaint.
December 2025: The New York Times filed a federal suit (Case 1:25-cv-10106). Britannica filed separately.
A revenue-sharing program launched only after the lawsuits began. The bypass preceded every remediation. The product that claims to provide accurate, sourced answers was built commercially on content extracted without the consent of the sources it cites.

Sources: NYT v. Perplexity (SDNY, Dec 2025) · News Corp complaint (Oct 2024) · Wired, Forbes server log forensics
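For readers unfamiliar with the mechanism at issue: robots.txt is the machine-readable file through which a site tells crawlers what they may fetch, and honoring it takes a few lines of standard-library Python. The sketch below is illustrative; the URL and user-agent string are placeholders, not Perplexity's actual crawler.

```python
# What robots.txt compliance looks like for any crawler: fetch the
# publisher's directives, then ask permission before each request.
# The standard is advisory; honoring it is a choice the crawler makes.
from urllib.robotparser import RobotFileParser

USER_AGENT = "ExampleAnswerBot"  # placeholder crawler name

rp = RobotFileParser()
rp.set_url("https://publisher.example/robots.txt")
rp.read()  # fetches and parses the publisher's directives

article = "https://publisher.example/articles/some-story"
if rp.can_fetch(USER_AGENT, article):
    print("Fetch permitted by the publisher's directives.")
else:
    print("Disallowed: a compliant crawler stops here.")
```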
The Synthetic Outlaw Condition

Perplexity's core product claim is citations and accuracy. The promise is that its answers are sourced from verified, attributed content. The content was extracted through a mechanism the sources explicitly prohibited. The citation exists. The consent to use the underlying source did not. The trust claim rests on sources it extracted without consent. That is the foundation.

"An answer engine that extracted its content without consent from the sources it cites is not citing sources. It is laundering extraction through attribution."
Trust Implication

Perplexity's claim to be a trustworthy information source is structurally compromised by the method through which it acquired the information it cites. The accuracy claim and the content acquisition practice are in direct legal conflict, currently before federal courts.

Mistral
Mixtral · Le Chat · Paris, France · EU-based
Bypass 8 · Diffusion 9 · Capture 9 · Gov Gap 8 · Overall 8.5
What Mistral Claims

"We firmly believe in hard laws for safety matters. The many voluntary commitments we see today bear little value." Open source as transparency. GDPR-friendly European AI. Democratic values embedded in deployment architecture.

Sources: Mistral AI Act position statement · Mistral transparency report
What Mistral Demonstrably Did

2023–2024 (documented by EU Parliament members on the record): Mistral led lobbying of the French government to weaken the EU AI Act's General Purpose AI (GPAI) provisions, specifically for open-weight models. EU Parliament members described this as direct regulatory capture. The GPAI provisions were diluted as a direct result. The company that benefits most from weak open-weight regulation was the primary force shaping that regulation.
2025: Italy's AGCM opened an investigation over consumer protection concerns and closed it after Mistral accepted commitments to warn users of hallucination risks. That is the documented settlement of a regulatory finding.
"Hard laws" stated as the standard. Lobbying to weaken the hard laws. GPAI provisions weakened.

Sources: EU Parliament testimony on GPAI lobbying · Italy AGCM closure (2025) · Politico, Financial Times
The Synthetic Outlaw Condition

Mistral published the statement "we believe in hard laws for safety matters" while simultaneously lobbying to weaken the hard laws that would apply to its deployment model. The governance constraint, the EU AI Act's GPAI provisions, was shaped by the entity it was designed to govern. The visible compliance surface is the values statement. The documented bypass is the lobbying outcome. The gap between those two things is documented in the EU legislative record.

"Mistral's European values narrative is the product differentiator. Its institutional conduct undermined the European regulatory framework that was supposed to give that narrative legal substance."
Trust Implication

Mistral's claim to represent European democratic values in AI is its primary commercial differentiator. Its documented conduct weakened the European legal framework that would have made that claim verifiable. The values exist in a press release. The governance did not survive lobbying.

What This Scorecard Is

This scorecard is a public-facing preview of the Synthetic Outlaw framework. It is designed to make documented governance failure legible, comparable, and citable.

What it shows: the gap between what seven leading AI labs claim and what their documented conduct, governance architecture, and legal exposure reveal. Scores are applied across four dimensions drawn from the framework — Bypass, Diffusion, Capture, and Governance Gap — each grounded in named, dated, published sources.

What it does not show: the full scoring methodology, legal theory, and institutional remedy architecture. Those are developed in The Synthetic Outlaw (forthcoming 2026).

Infrastructure for Governance

Legislation gets lobbied. Corporate promises get rewritten. Dashboards report what the system learned to report. None of these are binding under optimization pressure. That is the exact condition the Synthetic Outlaw framework was built to detect.

This scorecard is a public accountability surface that lives outside the labs, outside the regulatory cycle, and outside the news cycle. It does not ask labs to self-report. It scores what they already put on the record: their own statements against their documented institutional conduct, using a framework derived from legal analysis applied to published evidence.

Once a score exists in public, it becomes a reference point that subsequent events update. A resignation, a court ruling, a regulatory finding, a lobbying disclosure. Each becomes a data point that moves a score, and the score is permanent, citable, and traceable to documented evidence. That is what governance infrastructure looks like before legislation catches up.
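A minimal sketch of the record structure that paragraph describes, under the assumption that a public score is an append-only log of dated, sourced events; the field names are illustrative, not the framework's published schema.

```python
# Illustrative schema: a dimension score that moves only through dated,
# sourced events, so every value stays traceable to documented evidence.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Event:
    date: str     # e.g. "2026-02-09"
    kind: str     # resignation, court ruling, regulatory finding, disclosure
    source: str   # the published record the update cites
    delta: float  # signed adjustment to the dimension score

@dataclass
class DimensionScore:
    dimension: str
    value: float
    history: list[Event] = field(default_factory=list)

    def update(self, event: Event) -> None:
        """Append the event, then move the score, clamped to the 0-10 scale."""
        self.history.append(event)
        self.value = max(0.0, min(10.0, self.value + event.delta))

# Hypothetical update: a regulatory finding moves a Capture score.
capture = DimensionScore("Capture", 6.5)
capture.update(Event("2026-04-01", "regulatory finding",
                     "example: published regulator closure notice", +0.5))
# capture.value == 7.0; capture.history preserves the citation trail.
```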

The Synthetic Outlaw Index scores 49 AI deployment domains across 12 sectors. This lab scorecard extends that methodology to the institutions building the systems. Together they form the first independent, scored, public record of AI governance failure. Deployment decisions and institutional conduct, measured in the same place, against the same standard.

Primary Sources & Evidence Base
OpenAI: CA AG Rob Bonta filing on OpenAI conversion (2025) · Musk v. Altman trial record · FTC staff report on AI partnerships (Jan 2025) · Superalignment team dissolution, reported by The Verge and Wired (May 2024) · OpenAI Charter (openai.com)
Anthropic: Mrinank Sharma resignation letter (Feb 9, 2026) · Anthropic Constitutional AI paper (2022) · Anthropic-Pentagon dispute, War on the Rocks (2024) · BBC, The Guardian, Wired coverage of the Sharma resignation (Feb 2026)
Google: US v. Google, US District Court for DC, Memorandum Opinion (Aug 2024) · DOJ remedies filing (2025) · Gebru dismissal: NYT (Dec 2020), WIRED, BBC · Google AI Principles (ai.google) · EU AI Code of Practice (2025)
Meta: Haugen congressional testimony and SEC disclosure (Oct 2021) · Horwitz & Seetharaman, WSJ, "Facebook Knows Instagram Is Toxic for Teen Girls" (Sept 2021) · EU DSA review of Meta content moderation rollback (2025) · Meta Responsible Use Guide (llama.meta.com)
DeepSeek: NPR reporting on data routing to Chinese servers, citing international regulatory probes · US Navy ban memo (Feb 2025) · Gottheimer/Molinaro congressional ban bill · Italy AGCM closure with commitments (April 2026) · OpenAI congressional testimony on distillation
Perplexity & Mistral: NYT v. Perplexity AI, Case 1:25-cv-10106 (SDNY, Dec 2025) · News Corp/Dow Jones complaint (Oct 2024) · Wired server log forensics (June 2024) · EU Parliament testimony on Mistral GPAI lobbying · Italy AGCM Mistral investigation closure (2025) · Politico EU AI Act lobbying coverage
Scoring methodology: Each lab is scored 0–10 across four dimensions drawn from J. Gropper, The Synthetic Outlaw (forthcoming 2026): Bypass (institution achieves outcomes its governance commitments were designed to prevent, through formally compliant means), Diffusion (accountability dissolves across actors and layers), Capture (regulatory oversight is structurally compromised), Governance Gap (existing legal frameworks do not map onto the institutional behavior). Scores reflect published case law, regulatory findings, investigative journalism, and peer-reviewed research as of May 2026. This scorecard measures institutional conduct and the gap between public claims and documented behavior. Model benchmarks, capability evaluations, and product quality assessments are outside the scope.