Colorado Passes First Statewide AI Bias Audit Law

By: Barbara Nett

On May 17, 2024, Colorado became the first US state to enact comprehensive legislation governing the development and use of artificial intelligence (AI) systems. Senate Bill 24-205, which takes effect on February 1, 2026, covers “high-risk” systems, defined as “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision.” A “substantial factor” includes any use of the AI system to generate any content, decision, prediction, or recommendation used to make a consequential decision. This includes AI systems that employers may implement to support employment activities such as recruitment, hiring, promotion, or termination.

The law imposes a duty of reasonable care on developers and deployers of high-risk AI systems to avoid algorithmic discrimination and requires that any discovery of algorithmic discrimination be disclosed to Colorado’s attorney general within 90 days. The law presumes reasonable care if the deployer takes certain compliance steps, among them:

  1. Risk management policy and program. Deployers must implement a risk management policy and program to identify and mitigate the risks of algorithmic discrimination. It must meet certain defined criteria and must be an “iterative process” that is regularly and systematically reviewed and updated.
  2. Impact assessment. Similar in concept to the annual bias audits required under New York City Local Law 144, annual impact assessments must be completed by deployers for each high-risk AI system. At a minimum, these must include: 

    (a) a statement of the purpose, intended use, and benefits of the AI system;  

    (b) an analysis of whether the system poses any known or reasonably foreseeable risks of algorithmic discrimination and a description of how the deployer mitigates those risks;  

    (c) a summary of the data processed as inputs and outputs of the AI system; 

    (d) an overview of the categories of data, if any, the deployer used to customize the AI system; 

    (e) any metrics used to evaluate the performance and known limitations of the AI system;  

    (f) a description of any “transparency measures” taken, including any measures taken to notify consumers of the use of the AI system; and 

    (g) a description of the post-deployment monitoring and any user safeguards provided. 

  3. Notices to consumers. Before an AI system is used to make an employment-related decision, consumers must be notified that an AI system has been deployed to make, or be a substantial factor in making, a consequential decision. Employers must provide a plain-language statement disclosing the purpose of the AI system, the nature of the consequential decision, and a description of the AI system. Any consumer adversely affected by the AI system must receive further notification, including a statement disclosing the principal reason(s) for the consequential decision, an opportunity to correct any incorrect personal data used by the AI system, and an opportunity to appeal the adverse decision. If feasible, the appeal must allow for human review.

Developers and deployers will likely have significant work to do to comply with the new law. DCI Consulting Group is currently conducting a full analysis of the law and its applicability to employers and will publish updates as necessary.
