
Client Alert

California Enacts New AI Safety and Transparency Laws While Vetoing ‘No Robo Bosses Act’

November 10, 2025

By Sara B. Tomezsko, Kenneth W. Gage, Brian A. Featherstun, Dan Richards and Julia Beauchemin

In the absence of comprehensive federal legislation addressing artificial intelligence (AI) safety and permissible uses in the workplace, state legislatures and agencies continue to advance bills and regulations that would place significant transparency, reporting and testing obligations on employers’ use of AI. The past few weeks have seen significant activity out of California, with the passage of regulations and bills that change the legal landscape for employers operating in the state. Taken together, these developments reflect the state’s effort to balance public and industry concerns about the safety of increasingly sophisticated AI tools with its desire to support technological innovation in this space.

New Regulations Will Require Pre-Use Notice and Risk Assessments When Technology ‘Substantially Replaces’ Human Decision-Making

On Sept. 29, the California Privacy Protection Agency (CPPA) announced that the California Office of Administrative Law approved the CPPA’s long-awaited regulations on cybersecurity audits, risk assessments, automated decision-making and other updates to existing regulations. Among other things, the regulations focus on California businesses’ use of automated decision-making technologies (ADMT), defined as “technology that processes personal information and replaces or substantially replaces human decision-making.”

Beginning Jan. 1, 2027, the regulations require businesses to provide a pre-use notice to Californians upon or before collecting personal information that will be used or processed by ADMT. Such notice must describe the specific purpose for which the business will use ADMT to make “significant decisions” (which include decisions relating to employment), give consumers the right to opt out, explain how the decision will be made if the consumer opts out and describe where to access information about how the ADMT works. In addition, businesses must conduct risk assessments before initiating any processing activity that involves a “significant risk” and report the results of those assessments to the CPPA as early as April 1, 2028. Notably for employers, the regulations identify the use of ADMT for employment-related decisions, or the training of ADMT for this purpose, as examples of what constitutes a “significant risk” requiring an assessment.

For more information on the new regulations and details as to the obligations placed on employers and other businesses operating in California, please see our recent client alert covering this important update.

Large Frontier Developers Will Be Subject to New Requirements Under SB 53 — Transparency in Frontier Artificial Intelligence Act

On Sept. 29, California Gov. Gavin Newsom signed into law SB 53, the Transparency in Frontier Artificial Intelligence Act (Act). The Act creates a comprehensive regulatory framework to standardize safety disclosures for developers of “frontier models,” i.e., those foundation models trained on computing power greater than 10^26 floating-point operations, or FLOPs,[1] the same compute threshold referenced in the rescinded Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence Executive Order that the Biden administration published in 2023. The Act also creates additional obligations for “large frontier developers” with annual revenue exceeding $500 million in the preceding calendar year.
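For a rough sense of how the Act’s compute threshold maps onto model scale, the sketch below applies a commonly cited back-of-the-envelope heuristic that training a dense transformer model consumes roughly 6 × (parameter count) × (training tokens) FLOPs. The heuristic and the illustrative model sizes are assumptions offered for context only; they are not drawn from the Act, which simply sets the 10^26 FLOP threshold.

```python
# Minimal sketch: estimate whether a hypothetical training run would cross
# the 10^26 FLOP threshold referenced in SB 53. The 6 * N * D rule of thumb
# (N = parameters, D = training tokens) is a common approximation for dense
# transformer training compute, not a formula drawn from the statute.

SB53_THRESHOLD_FLOPS = 1e26  # compute threshold for a "frontier model"


def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate total training compute using the 6 * N * D heuristic."""
    return 6.0 * num_parameters * num_training_tokens


def exceeds_sb53_threshold(num_parameters: float, num_training_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds 10^26 FLOPs."""
    return estimated_training_flops(num_parameters, num_training_tokens) >= SB53_THRESHOLD_FLOPS


if __name__ == "__main__":
    # Hypothetical example: a 500-billion-parameter model trained on
    # 40 trillion tokens comes out to about 1.2e26 FLOPs, above the threshold.
    flops = estimated_training_flops(5e11, 4e13)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print(f"Exceeds SB 53 threshold: {exceeds_sb53_threshold(5e11, 4e13)}")
```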

In addition to the safety disclosure requirements, the Act includes whistleblower protections that safeguard employees who report or assist in investigating potential violations of the Act. Most employers who simply deploy AI solutions rather than developing them in-house will likely fall outside the scope of the Act, as will developers who do not presently meet the FLOP threshold. However, more developers are expected to come within the ambit of this statute as AI technology continues to develop, and employers should consider the obligations imposed on their business partners and on the vendors from whom they purchase AI tools for use in the workplace. A summary of the requirements under SB 53 follows.

Establishing AI Safety Frameworks

Beginning Jan. 1, 2026, large frontier developers must create, implement and publish on their websites a safety framework that addresses the developer’s approach to, among other things:

  • incorporating national and international industry standards and best practices;
  • the thresholds used to assess whether the model is capable of posing a “catastrophic risk”;
  • mitigation measures and how the efficacy of those measures informed the decision to deploy the model;
  • the use of third parties to assess the potential for “catastrophic risk”;[2]
  • cybersecurity practices to prevent unauthorized modification or transfer of the model by internal or external parties;
  • identifying and responding to critical safety incidents; and
  • assessing and managing “catastrophic risk” resulting from the internal use of frontier models.

Although the law does not specify which industry standards the framework should address, we assume that the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology (NIST) in July 2024 is likely the industry standard being referenced.

In addition, frontier model developers must establish and disclose in their framework an internal governance process for ensuring the implementation of the cybersecurity mandates imposed by the law, and the process for updating and revisiting the framework. These frameworks must be kept current, updated either annually or within 30 days after a material modification is made to a frontier model.

Transparency Reporting

The law further requires covered developers of frontier models to publish public transparency reports before or concurrently with making the frontier model available to a third party. The transparency report must include the model’s release date, intended use and any restrictions on deployment, among other information. Frontier developers must also notify the California Office of Emergency Services (OES) of all “critical safety incidents” (i.e., model behavior that results in or materially risks death, serious injury or loss of control over the model) within 15 days of discovering the incident or report the incident within 24 hours to the appropriate public safety authority if the incident presents an imminent risk to life or public safety. These incidents may be reported via an online portal established by the OES, which will issue anonymized summaries of reported incidents beginning Jan. 1, 2027.

Large frontier model developers have additional obligations. Their transparency reports must also disclose the results of any risk assessments of catastrophic risks posed by the model and the extent to which third-party evaluators were involved in the assessments. Large frontier model developers are also required to transmit to OES a summary of any assessment of catastrophic risk resulting from the internal use of its frontier models every three months or other “reasonable” alternate schedule. These summaries may be submitted confidentially.

Whistleblower Protections

The Act includes whistleblower protections to encourage employees to report catastrophic risks associated with a frontier model without fear of retaliation. Notably, the law’s whistleblower protections extend only to “covered employees,” defined as those responsible for assessing, managing or addressing the risk of critical safety incidents. The Act prohibits employers from retaliating against covered employees who report, based on a good-faith belief, (1) a specific and substantial danger to the public health or safety resulting from a catastrophic risk or (2) the frontier developer’s alleged violation of the Act. Large frontier model developers must also establish a “reasonable” internal process through which a covered employee may anonymously disclose this information and must provide monthly updates to the covered employee regarding the status of any ensuing investigation or actions taken in response to the disclosure.

The Act establishes a private right of action for covered employees who can show by a preponderance of the evidence that their protected activity was a contributing factor in the alleged adverse action. The Act specifies that if the covered employee can make such a showing, the burden shifts to the frontier developer to prove by clear and convincing evidence that the alleged adverse action would have occurred for legitimate, independent reasons even if the covered employee had not engaged in protected activity. Injunctive relief is also available in civil actions and administrative proceedings.

The Act provides for an award of attorneys’ fees to a successful plaintiff, but the scope of other available remedies and penalties for violations of these whistleblower protections is presently uncertain. As California courts begin to interpret the law, they may look to California’s existing whistleblower protections under Labor Code § 1102.5, which imposes a civil penalty of up to $10,000 per violation. The Act expressly authorizes civil penalties of up to $1 million per violation for failures tied to its disclosure and incident reporting requirements; however, those penalties are enforceable only by the state attorney general and are not clearly applicable to private retaliation claims.

Governor Newsom Vetoed SB 7, the ‘No Robo Bosses Act’

Not all legislative efforts in California have been as successful as SB 53. On Oct. 13, Newsom vetoed SB 7 (the ‘No Robo Bosses Act’), a bill that sat on his desk until the deadline to act while industry groups voiced concerns over the potential costs of implementation.

SB 7 aimed to regulate employers’ use of automated decision systems (ADSs) across all stages of employment by requiring human oversight, mandating written notice to affected employees and establishing procedural safeguards that would have limited the scope of appropriate ADS use. The bill defined “employment-related decision[s]” broadly as “any decision … that materially impacts a worker’s wages, benefits, compensation, work hours, work schedule, performance evaluation, hiring, discipline, promotion, termination, job tasks, skill requirements, work responsibilities, assignment of work, access to work and training opportunities, productivity requirements, or workplace health and safety.” ADS was also broadly defined as “any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decision making and materially impacts natural persons.” Opposition to the bill consistently pointed to the breadth of this definition and the potential that it would encompass routine tools such as employee scheduling software, spreadsheets with certain formula inputs and other administrative tools that arguably fell within the definition of an ADS.

A key feature of SB 7 was its explicit prohibition on the exclusive use of ADSs when making a “discipline, termination, or deactivation decision.” Specifically, the bill prohibited employers from relying “solely” on an ADS when making any of the three foregoing decisions. Employers were, however, permitted to rely “primarily” on ADS output when making these decisions, provided that a human reviewer conducted an independent investigation and compiled corroborating evidence supporting the decision. The legislature left it to the courts to interpret the difference between “solely” and “primarily.”

SB 7 also included detailed requirements that employers utilizing ADSs provide workers with pre- and/or post-use written notice that such systems were being used. Pre-use notice would have been required whenever an ADS was used for hiring or other employment decisions that would “foreseeably and directly affect” employees, leaving potentially every material employment decision within the bill’s regulatory ambit. Post-use notice would have been required whenever an employer relied primarily on an ADS to make a termination, discipline or deactivation decision; it would have identified a human contact from whom the employee could request the data relied upon, explained the employee’s right to correct that data and reiterated the bill’s anti-retaliation protections. Rather than creating a private right of action, the bill charged the state labor commissioner with enforcing the law.

Newsom cited the ambiguity in the bill and its failure to distinguish between high-risk AI tools and low-risk administrative technologies as one of the bases for his decision. “Rather than addressing the specific ways employers misuse this technology, the bill imposes unfocused notification requirements on any business using even the most innocuous tools,” Newsom said in his veto statement. He also cited the new CPPA regulations and their expanded protections of consumer privacy as the appropriate vehicle to address many of the concerns targeted by SB 7 and suggested the state should “assess the efficacy of [those] regulations” before passing new legislation in this space.

Takeaways for Employers

  • Review All ADS Currently in Use. SB 7 may have been vetoed, but employers can expect to see heightened scrutiny around the use of AI tools in the workplace, particularly those relied upon to make hiring, termination, or disciplinary decisions, and should take steps to prepare for the CPPA regulations. As a first step, employers should identify and catalogue all AI-related tools in use to assess the current risk portfolio and carefully evaluate potential legal and operational consequences before implementing any new system.
  • Be Aware of Whistleblower Protections. SB 53’s whistleblower protections could increase covered employers’ exposure to retaliation claims from covered employees who report AI-related safety concerns. To avoid violations of SB 53, covered employers should establish clear reporting channels, and managers should be trained to respond appropriately to reported concerns without resorting to retaliatory action.
  • Continue to Monitor the Shifting Regulatory Landscape. California has signaled a clear intent to position itself at the forefront of AI regulation over the past months and years. Other states, including New York, are not far behind, with bills like NY S6953/A6453, more commonly known as the Responsible AI Safety and Education (RAISE) Act, which similarly aims to regulate the training and use of frontier models. Employers of all sizes should closely monitor this evolving situation and should not hesitate to contact counsel with any questions about existing or potential future laws in this space.
 

[1] FLOPs (Floating Point Operations) measure the total number of arithmetic calculations performed during the training of an AI model. FLOPs are used to describe the overall scale and computational cost of training. For example, a model trained at ≥10^26 FLOPs has executed roughly 10^26 individual math operations, which indicates a frontier-level AI system.

[2] “Catastrophic risk” is defined as a foreseeable and material risk that the frontier model will “materially contribute to the death of, or serious injury to, more than 50 people or more than a billion dollars in damage” arising from a single incident involving defined conduct, such as the frontier model evading the control of its developer or user, or “engaging in conduct with no meaningful oversight, intervention, or supervision that is either a cyberattack” or would constitute murder, assault, extortion or theft (including theft by false pretenses) if committed by a human.
