By: Bre Timko (DCI Consulting), Niloy Ray (Littler Mendelson), & Dave Schmidt (DCI Consulting)
The State of California has seen a flurry of activity over the past two years aimed at regulating the use of artificial intelligence (AI) in making consequential decisions, including in employment. The purpose of these laws is to mandate transparency and explainability regarding the use of automated decision tools (ADTs). This movement is not surprising given that California is an international tech hotspot. However, two key bills – AB 331 and AB 2930 – failed in 2024. The proposed revisions to California’s Fair Employment and Housing Act (FEHA) are now the only proposed regulations remaining in the state related to the nondiscriminatory use of ADTs in employment, and even that proposal is being scaled back. All of this raises the question: Is California running out of steam?
Assembly Bill 331
Assembly Bill (AB) 331, which was specific to the employment context, was introduced in January 2023 and would have required notice to individuals that an ADT was being used. It also would have required both AI developers and organizations that use AI tools to conduct annual impact assessments, which were to include details about the ADT, an analysis of adverse impact, a description of risk management efforts, and information about how the ADT was evaluated for validity or relevance to the job(s) for which it was being used. AB 331 was pivotal in the AI regulation space because no prior state or local law included any reference to validity. Other laws, such as New York City’s Local Law 144, require annual “bias audits” (akin to California’s “analysis of adverse impact”); however, Local Law 144 is notably silent on the relatedness of the assessment content to the job content and on the ability of the assessment to predict important job outcomes. Nevertheless, AB 331 died in January 2024.
Assembly Bill 2930
Only one month passed after AB 331 became inactive before a second bill – AB 2930 – was introduced by the California State Assembly Privacy and Consumer Protection Committee. Like AB 331, AB 2930 sought to impose requirements on entities doing business in the state and using ADTs to make employment decisions. However, AB 2930 was not limited to the employment context: it applied to entities using ADTs to make, or play a substantial role in making, any consequential decision. The bill defined a “consequential decision” as one that has a significant effect on an individual’s life relating to 1) employment, 2) education and vocational training, 3) housing, 4) essential utilities, 5) family planning, 6) adoption or reproductive services, 7) healthcare or health insurance, 8) financial services, 9) certain aspects of the criminal justice system, 10) legal services, 11) private arbitration, 12) mediation, or 13) voting. AB 2930 aligned with AB 331 in that it included notice and impact assessment requirements. It also went a step further by adding reliability to the requirements, stating that impact assessments must include a “description of how the automated decision system has been or will be evaluated for validity, reliability, and relevance.” In other words, employers using ADTs would have needed to disclose information not only about the tool’s job-relatedness and predictive value, but also about its ability to produce consistent results.
What Remains: Proposed Revisions to California’s Fair Employment and Housing Act
Although AB 331 and AB 2930 are no longer moving forward, proposed revisions to FEHA that were released in March 2022 remain pending. The FEHA applies to any business regularly employing five or more individuals and prohibits, among other things, employment discrimination unless the discriminatory policy or practice is job-related and consistent with business necessity. The revisions would formally add “automated decision systems” (ADS) to the Act. The amendments initially established liability for third parties (such as vendors) offering services connected to hiring or employment decisions, as well as aiding-and-abetting liability for anyone involved in advertising, selling, or enabling the use of an ADS if its use led to discriminatory outcomes. The proposal also implied that personality-based questions would be deemed a prohibited pre-offer medical test, and it prohibited background checks conducted entirely by an ADS. In October 2024, however, almost all of these draft amendments were either eliminated or scaled back significantly. In addition to formally incorporating ADS into the Act, the remaining provisions suggest that employers may need to offer accommodations for ADS that assess an applicant’s skills, dexterity, reaction time, or other abilities, paralleling the accommodations required in other selection processes. The California Civil Rights Council is currently considering submitted comments on the proposed regulations after voting in October to double the length of the public comment period.
While the proposed FEHA amendments may no longer be as onerous as once contemplated, they still reflect the state’s desire to further regulate the use of AI in pivotal circumstances. In fact, California Governor Gavin Newsom, who has signed 17 bills covering the deployment and regulation of GenAI technology, announced in late September 2024 the continuation of a series of initiatives to further protect Californians from the fast-moving technology.1 While not all of these laws will relate directly to employment, it is likely that several will have implications for the workplace. Additionally, the California Privacy Protection Agency is working on its own rules for automated decisionmaking technology (ADMT), so there still appears to be some momentum remaining in California’s drive to regulate AI.
Organizations – especially those that operate across the United States – must remain cognizant of enacted and proposed laws and be proactive about making any adjustments necessary for compliance. Without an overarching federal law to provide specific, uniform compliance steps, this task will likely only grow more complex. It is imperative that organizations stay informed about recent or pending laws set to take effect as soon as 2026, because the time to prepare for compliance is now. Stay tuned to DCI Blogs and Littler’s AI News & Analysis publications page to keep up to date on future developments in this space.
References
1 Governor Newsom announces new initiatives to advance safe and responsible AI, protect Californians. (2024, September 29). Office of Governor Gavin Newsom. Retrieved from https://www.gov.ca.gov/2024/09/29/governor-newsom-announces-new-initiatives-to-advance-safe-and-responsible-ai-protect-californians/.