OpenAI's GPT-5.3-Codex Faces California AI Safety Law Scrutiny — Watchdog Alleges High-Risk Safety Violations
Author: Snigdha Gairola
OpenAI released a model while acknowledging in its own safety report that its safeguards "fall short" of requirements. Watchdog group Midas Project flagged this as a violation of California's AI safety law (SB 53), putting OpenAI's safety governance back under scrutiny.
What Happened
Midas Project claimed last week that OpenAI's latest coding model, GPT-5.3-Codex, was released without the safeguards required by SB 53¹. At the heart of the dispute is OpenAI's own published "Safeguards Report": page 29 of that report states that the current safeguards are not adequate to meet the requirements of the company's own safety framework.
GPT-5.3-Codex is OpenAI's bid to reclaim dominance in AI coding; based on its own benchmarks, the company claims the model outperforms both its predecessors and competitors.
Both Sides of the Argument
OpenAI's Rebuttal
An OpenAI spokesperson told Fortune that GPT-5.3-Codex completed the full testing and governance process, and that proxy evaluations and internal experts (including the Safety Advisory Group) concluded that the model showed no evidence of long-horizon autonomous capabilities.
Watchdogs and Safety Researchers Push Back
Midas Project founder Tyler Johnston pointed out that SB 53 already sets a very low bar:

> All SB 53 essentially asks is that you create your own safety plan, communicate about it honestly, update it as needed, and don't violate it or lie about it.
Encode safety researcher Nathan Calvin also reviewed the relevant documents and stated that the violation is not ambiguous.
Meanwhile, OpenAI's Growth Continues
Controversy aside, OpenAI's business metrics are trending upward.
| Metric | Figure |
|---|---|
| ChatGPT monthly growth rate | Over 10% (CEO Sam Altman, internal Slack message) |
| Codex usage increase after GPT-5.3-Codex launch | ~50% |
Altman also previewed an updated chat model for release the same week. Meanwhile, OpenAI hit back at Anthropic's Super Bowl ad, which criticized advertising inside ChatGPT; Altman called the ad "deceptive." However, sources indicate that OpenAI is itself planning to test clearly labeled ads at the bottom of responses.
Why This Matters
The core issue here isn't technical safety per se — it's the gap between a company's own safety framework and its actual behavior. OpenAI documented that a model classified as high-risk failed to meet the safety standards it set for itself, then released it anyway. SB 53 doesn't impose specific safety standards on companies — it simply requires them to follow the standards they set for themselves. The watchdog's argument is that OpenAI couldn't even meet that low bar.
This could become the first real test of the enforcement power of California's AI safety law.
Footnotes
1. SB 53: California's AI safety law. It requires companies releasing high-risk AI models to establish and follow their own safety plans.