In 2026, California is leading the United States in codifying how AI can be used in the workplace. The state has moved beyond policy debates into a “compliance and enforcement” era, with several landmark regulations and laws taking effect between late 2025 and 2026.
Here is how California’s legal landscape has shifted to address AI:
1. The FEHA “Algorithmic Discrimination” Rules (Oct 2025)
Effective October 1, 2025, the California Civil Rights Council finalized major amendments to the regulations under the Fair Employment and Housing Act (FEHA). These are the state's most direct workplace AI rules to date:
- Liability is Non-Transferable: Employers cannot blame a third-party vendor if an AI tool (like a resume screener or interview analyzer) produces discriminatory results. The employer remains fully liable.
- Broad Scope of “ADS”: The law regulates Automated Decision Systems (ADS), which includes everything from simple algorithms to advanced generative AI used for hiring, firing, promoting, or monitoring productivity.
- Anti-Bias Testing as Evidence: While testing isn’t strictly mandatory, the presence (or absence) of anti-bias audits is now admissible evidence. If an employer hasn’t tested their AI for “proxy discrimination” (where the AI uses zip codes or hobbies as a stand-in for race or age), they have a much weaker defense in court.
- Four-Year Record Retention: Employers must now keep all ADS data, including inputs and outputs, for four years—up from the previous two-year requirement.
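The regulations do not prescribe a specific audit methodology, but a common starting point in employment-bias testing is the "four-fifths rule": a group whose selection rate falls below 80% of the highest group's rate is flagged for potential adverse impact. A minimal sketch of such a check, using hypothetical group labels and outcomes (none of this structure comes from the regulation itself):

```python
from collections import defaultdict

def adverse_impact_ratios(decisions):
    """Compute each group's selection rate relative to the
    highest-selected group.

    decisions: iterable of (group, selected) pairs, where `selected`
    is True if the ADS advanced the candidate.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1

    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    # Ratios below 0.8 flag potential adverse impact
    # under the four-fifths heuristic.
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical outcomes from an ADS resume screener:
# group_a advanced 40 of 100, group_b advanced 20 of 100.
outcomes = (
    [("group_a", True)] * 40 + [("group_a", False)] * 60 +
    [("group_b", True)] * 20 + [("group_b", False)] * 80
)
ratios = adverse_impact_ratios(outcomes)
print(ratios)  # group_b's ratio is 0.5, well below the 0.8 threshold
```

A real audit would also test proxy features (zip code, hobbies) for correlation with protected classes, but even a simple selection-rate log like this is the kind of record the four-year retention rule would cover.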
2. Transparency in Frontier AI Act (SB 53)
Effective January 1, 2026, this law targets the “frontier” developers (the companies building the models like GPT or Gemini).
- Whistleblower Protections: It creates specific legal shields for tech workers who report safety or bias risks in large-scale AI models.
- Risk Frameworks: Developers of models trained above a certain compute threshold ($10^{26}$ floating-point operations) must publish their risk mitigation strategies.
3. The “No Robo Bosses” Act (SB 947)
Introduced in February 2026, this landmark legislation aims to prevent “automated termination.”
- Human-in-the-Loop: It prohibits employers from relying solely on AI to discipline or fire workers.
- Predictive Profiling Ban: The bill seeks to ban “future-predicting” AI that attempts to guess which employees might quit, organize, or break rules based on their personal data.
4. Public Sector Protections (SB 1220)
Specifically targeting call centers and public benefit programs, this law prohibits state and local agencies from using AI to eliminate core job functions. It mandates that AI can assist but not replace the human workers who handle sensitive public services like Medi-Cal or unemployment benefits.
Summary of 2026 Compliance for Employers
| Legal Requirement | Status in 2026 |
| --- | --- |
| Notice to Workers | Mandatory before an AI tool is used to make a “consequential decision.” |
| Opt-Out Rights | Employees may request human review of any AI-driven decision. |
| Bias Auditing | High priority for legal defense; required for some state contracts. |
| Generative AI Transparency | Developers must disclose the “high-level” makeup of training data (AB 2013). |
