California AI Safety Bill SB 1047: What It Means for Startups
Category: News · Stage: Awareness
By Max Beech, Head of Content
Updated 22 May 2025
California's SB 1047, signed into law in September 2024, requires developers of large AI models to implement safety protocols and report "critical failures" to the state. Penalties for non-compliance start at $10 million per violation.[1] For startups building AI products, the question isn't whether the law matters, but whether your model triggers the thresholds.
TL;DR
- SB 1047 applies to models trained with ≥$100M in compute and ≥10^26 FLOPs
- Requires safety testing, incident reporting, and "kill switch" capability
- Most AI application developers are exempt—this targets foundation model creators
- Chaos and similar tools don't meet the thresholds, but understanding the landscape informs procurement decisions
Jump to: 1. What SB 1047 requires | 2. Who's affected | 3. Exemptions | 4. Procurement implications
What SB 1047 requires
Covered models ("large-scale AI systems")
The law applies only if your AI model meets both thresholds:
- Training cost: ≥$100 million
- Training compute: ≥10^26 floating-point operations
This captures frontier models like GPT-4, Claude, and Gemini. It does not apply to most startups or to fine-tuned models.
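If you want a rough sense of where a training run sits relative to the FLOP threshold, the scaling-law literature offers a common heuristic: dense-transformer training compute is roughly 6 × parameters × tokens. The sketch below applies that heuristic; the model size, token count, and cost figures are hypothetical, and this is an order-of-magnitude sanity check, not a legal determination.

```python
# Order-of-magnitude check against SB 1047's compute threshold, using
# the common scaling-law heuristic that dense-transformer training costs
# roughly 6 * parameters * tokens FLOPs. Not a legal determination.

FLOP_THRESHOLD = 1e26          # computational-power threshold
COST_THRESHOLD = 100_000_000   # training-cost threshold, USD


def is_potentially_covered(params: float, tokens: float,
                           training_cost_usd: float) -> bool:
    """True if a training run may meet SB 1047's covered-model test."""
    estimated_flops = 6 * params * tokens
    return (estimated_flops >= FLOP_THRESHOLD
            and training_cost_usd >= COST_THRESHOLD)


# Hypothetical run: a 70B-parameter model on 2T tokens is ~8.4e23 FLOPs,
# more than two orders of magnitude below the threshold.
print(is_potentially_covered(params=70e9, tokens=2e12,
                             training_cost_usd=20_000_000))  # -> False
```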
Safety obligations
Developers of covered models must:
- Pre-deployment testing: Conduct safety evaluations for "critical harms" (catastrophic cyberattacks, creation of biological weapons, autonomous harm)
- Incident reporting: Report any "critical failure" to California's AI Safety Board within 72 hours
- Kill switch: Implement the ability to shut down the model fully if misuse is detected (see the sketch after this list)
- Third-party audits: Annual review by independent auditor
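The law doesn't prescribe how a kill switch must work. To make the obligation concrete, here is a minimal sketch of one possible approach, assuming a serving process that polls a central control flag and refuses requests once it flips. The file path, polling interval, and names are all illustrative.

```python
import threading
import time


class KillSwitch:
    """Polls a control file; all inference is refused once it flips."""

    def __init__(self, flag_path: str = "/etc/model/shutdown.flag",
                 interval_s: float = 5.0):
        self.flag_path = flag_path
        self.interval_s = interval_s
        self._halted = threading.Event()
        threading.Thread(target=self._poll, daemon=True).start()

    def _poll(self) -> None:
        while not self._halted.is_set():
            try:
                with open(self.flag_path) as f:
                    if f.read().strip() == "HALT":
                        self._halted.set()
            except FileNotFoundError:
                pass  # no flag file means keep serving
            time.sleep(self.interval_s)

    def check(self) -> None:
        """Call before every inference request."""
        if self._halted.is_set():
            raise RuntimeError("Serving halted by kill switch")


def run_model(prompt: str) -> str:
    return f"(model output for {prompt!r})"  # stand-in for real inference


kill_switch = KillSwitch()


def generate(prompt: str) -> str:
    kill_switch.check()  # refuse the request once the switch has flipped
    return run_model(prompt)
```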
Penalties
- Civil penalties: $10-30 million per violation
- Criminal liability: For developers who cause catastrophic harm knowingly or through negligence
Who's affected
Foundation model developers
- OpenAI (GPT-4, GPT-5)
- Anthropic (Claude 3, future models)
- Google DeepMind (Gemini)
- Meta (Llama 3 if training scales up)
These companies must comply if they offer models to California residents (which they do).
Cloud providers
AWS, Google Cloud, and Azure may face obligations if they're deemed to be "developing" models by providing compute to model creators. The law's language here is ambiguous and will likely require regulatory clarification.
NOT affected: Application developers
If you're building an AI productivity tool (like Chaos) that uses third-party APIs (OpenAI, Anthropic), SB 1047 does NOT apply to you. Your vendor bears compliance responsibility.
Exemptions
- Fine-tuned models: Customising existing models with <$10M additional compute is exempt
- Research models: Academic and non-profit research with safety controls
- Open-source models: Models released under open licences with safety documentation
The open-source exemption is controversial. Critics argue it creates a loophole; supporters say it protects innovation.
Procurement implications
Vendor due diligence
When evaluating AI vendors, ask:
- Does your model trigger SB 1047 thresholds?
- Have you completed required safety testing?
- Can you share your incident response protocol?
- Do you have third-party audit reports?
A vendor that can't answer these questions either hasn't assessed its exposure or falls below the thresholds; ask them to confirm which in writing.
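One lightweight way to operationalise the questions above is a per-vendor compliance record that makes unanswered items explicit. A minimal sketch, with field names of our own invention and a hypothetical vendor:

```python
from dataclasses import dataclass


@dataclass
class VendorComplianceRecord:
    vendor: str
    meets_sb1047_thresholds: bool | None  # None = vendor hasn't answered yet
    safety_testing_completed: bool | None
    incident_protocol_shared: bool | None
    audit_report_available: bool | None
    notes: str = ""

    def open_questions(self) -> list[str]:
        """Fields the vendor still hasn't answered."""
        tracked = ("meets_sb1047_thresholds", "safety_testing_completed",
                   "incident_protocol_shared", "audit_report_available")
        return [name for name in tracked if getattr(self, name) is None]


# Hypothetical vendor entry; chase the open questions before signing.
record = VendorComplianceRecord(
    vendor="ExampleAI",
    meets_sb1047_thresholds=False,
    safety_testing_completed=None,
    incident_protocol_shared=True,
    audit_report_available=None,
)
print(record.open_questions())
# -> ['safety_testing_completed', 'audit_report_available']
```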
Contractual protections
Include these clauses in AI vendor contracts:
- Compliance warranty: Vendor represents they comply with applicable AI safety laws
- Indemnification: Vendor covers losses from their non-compliance
- Termination rights: You can exit if vendor faces regulatory action
Diversify model providers
Relying on a single foundation model creates risk if that provider faces regulatory shutdown. Architect your application to support multiple backends (OpenAI, Anthropic, open-source models).
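A minimal sketch of that pattern: a provider-agnostic completion function that tries backends in order and falls through on failure. The providers here are stand-in callables, not real vendor SDK calls; in a real system each would wrap a vendor client behind the same interface.

```python
from typing import Callable

Provider = Callable[[str], str]


def make_stub_provider(name: str) -> Provider:
    """Stand-in for a real vendor client (OpenAI, Anthropic, self-hosted)."""
    def complete(prompt: str) -> str:
        return f"[{name}] response to {prompt!r}"
    return complete


def complete_with_fallback(prompt: str,
                           providers: list[tuple[str, Provider]]) -> str:
    """Try each backend in order; fall through on failure or outage."""
    errors: list[str] = []
    for name, provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # vendor outage, regulatory suspension, etc.
            errors.append(f"{name}: {exc}")
    raise RuntimeError("All providers failed: " + "; ".join(errors))


providers = [
    ("primary", make_stub_provider("primary")),
    ("secondary", make_stub_provider("secondary")),
    ("self-hosted", make_stub_provider("self-hosted")),
]
print(complete_with_fallback("Summarise SB 1047 obligations", providers))
```

Keeping the interface this narrow is the design point: swapping or demoting a provider becomes a one-line change to the list, not a refactor.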
How does SB 1047 affect Chaos?
Chaos doesn't train large-scale models, so SB 1047 doesn't directly apply. However, Chaos uses third-party AI services (like Claude or GPT) that are covered. We monitor our vendors' compliance and maintain multi-provider architecture to mitigate regulatory risk.
For broader AI governance strategies, see our AI Compliance Readiness Roadmap and EU AI Act Operations for comparison with European regulations.
Key takeaways
- SB 1047 applies to models trained with ≥$100M in compute and ≥10^26 FLOPs, which excludes most startups
- Foundation model developers must test for critical harms, report incidents, and enable kill switches
- Application developers using third-party APIs aren't directly covered
- Smart procurement includes vendor compliance checks and multi-provider fallbacks
Summary
California's SB 1047 targets frontier AI developers, not the typical startup building on third-party APIs. If you're building applications rather than training frontier-scale models, your main concern is vendor compliance. Ask hard questions, include contractual protections, and maintain the ability to switch providers if regulatory action disrupts your current vendor. SB 1047 is the start, not the end, of AI safety regulation.
Next steps
- Confirm whether your AI development meets the SB 1047 thresholds (it likely doesn't)
- Review your vendor contracts and add compliance warranties if missing
- Document which AI services you use and their compliance status for audits
- Build multi-provider capability so regulatory action against one vendor doesn't halt your product
About the author
Max Beech tracks AI regulation and helps teams navigate compliance requirements without over-interpreting nascent laws. Every analysis distinguishes legal requirements from best practices.
Compliance disclaimer: This guide provides general information, not legal advice. Consult an attorney specialising in technology law for specific compliance questions.
Review note: SB 1047 text reviewed May 2025. Monitor California's AI Safety Board for implementation guidance.