By Bre Timko & Dave Schmidt
BLOG OVERVIEW: State and local AI laws governing hiring and selection procedures are already in effect, with more on the way. Employers can take seven practical steps right now to build a scalable AI compliance strategy, from inventorying selection procedures and understanding bias audit requirements to conducting proactive audits and gathering validation evidence. While each law carries its own requirements, these foundational practices apply broadly across jurisdictions and can help organizations reduce risk, strengthen legal defensibility, and stay ahead of future regulatory changes.
With over a half dozen state and local artificial intelligence (AI) laws already in effect and several more in process, it can be overwhelming to disentangle the nuances of each law and plan an associated compliance strategy. While each law imposes its own legal obligations (see DCI’s State Legislation Tracker within DCI’s AI Toolkit for quick-hit information on each law’s status and requirements), there are steps employers can take now to prepare for AI compliance across jurisdictions.
It is important to note that when planning for compliance, there is always relevant context to consider, including which specific laws or regulations apply, the nature of the selection procedure and the manner in which it is used, and the broader organizational goals. While each plan should be tailored to the situation at hand, the recommendations below reflect foundational practices that will likely apply across multiple enacted and proposed state or local AI laws. Following these recommendations can help organizations build scalable, future-ready compliance frameworks and establish or strengthen governance for their AI-enabled selection procedures.
While the steps above provide a practical starting point, organizations often benefit from a more comprehensive understanding of how AI regulation is evolving and how it intersects with selection science. We discuss these issues in greater depth in our book chapter, “Regulating Artificial Intelligence: The Evolving Landscape and Its Impact on Industrial-Organizational Psychology,” within Artificial Intelligence for I-O Psychologists. The chapter expands on the themes introduced here, including broader regulatory trends, emerging legal considerations, and the implications for practice.
As state and local AI regulation continues to evolve, organizations that take early, deliberate action will be best positioned to deploy AI responsibly, mitigate and manage potential risks, maintain operational continuity, and adapt to future legal and regulatory developments. While individual laws may differ in scope and application, building a strong compliance foundation now can reduce uncertainty and prevent reactive, last-minute efforts later.
DCI will continue to monitor legislative developments and provide practical, actionable guidance to help organizations navigate this rapidly evolving landscape. As always, stay tuned to DCI Blogs for the latest developments, and feel free to reach out with any questions about state and local AI laws or your compliance strategy.
[1] This determination of documentation availability is useful not only for ensuring legal compliance for AI-enabled selection procedures but also for traditional or long-standing selection procedures.
[2] Note that even though state AI laws often refer to adverse impact analyses as a “bias audit,” adverse impact is not the same as bias. Adverse impact is a legal concept that describes differences in selection or outcome rates across demographic groups, regardless of the cause. By contrast, bias refers to a systematic and unjustified distortion in prediction or measurement that unfairly disadvantages individuals or groups. A finding of adverse impact signals that further review is warranted, but it does not by itself establish that a practice is biased, unlawful, or unfair.
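As a purely illustrative sketch of the adverse impact concept described above, the calculation is often framed as comparing each group’s selection rate to the highest group’s rate (the impact ratio behind the well-known “four-fifths” rule of thumb from the federal Uniform Guidelines). The group labels and counts below are hypothetical, and this simplified example omits considerations a real bias audit would address, such as statistical significance testing and sample size:

```python
# Hypothetical adverse impact (impact ratio) calculation.
# Group labels and applicant counts are invented for demonstration only;
# real analyses involve additional steps (significance tests, sample size
# checks, job-relatedness review, and jurisdiction-specific requirements).

def selection_rate(selected, applicants):
    """Proportion of applicants in a group who were selected."""
    return selected / applicants

# Hypothetical applicant-flow data: (selected, total applicants) per group.
groups = {
    "Group A": (48, 100),
    "Group B": (30, 100),
}

rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
highest = max(rates.values())

# Impact ratio: each group's selection rate relative to the highest rate.
# Under the common four-fifths rule of thumb, a ratio below 0.80 is often
# treated as a signal that further review is warranted -- not, by itself,
# a finding of bias or unlawfulness.
for group, rate in rates.items():
    ratio = rate / highest
    flag = "review warranted" if ratio < 0.80 else "within 4/5ths guideline"
    print(f"{group}: rate={rate:.2f}, impact ratio={ratio:.2f} ({flag})")
```

In this hypothetical data, Group B’s impact ratio falls below 0.80, which would flag the procedure for further review, consistent with the point above that adverse impact is a trigger for scrutiny rather than proof of bias.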