Federal Agencies: AI in Employment Subject to EEO Regulations

By: Kristen Pryor and Dave Schmidt

On April 4, 2024, nine federal agencies charged with enforcing various civil rights and consumer protection laws signed a joint statement asserting that: 

  1. Their existing enforcement authority encompasses and applies to the use of automated systems, including artificial intelligence (AI) and other innovative technologies, and 
  2. These agencies can and will “vigorously use…authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”

The agencies signing the joint statement include: 

  • Department of Labor (DOL) 
  • Equal Employment Opportunity Commission (EEOC) 
  • Consumer Financial Protection Bureau (CFPB) 
  • Department of Justice (DOJ) 
  • Department of Housing and Urban Development (HUD) 
  • Department of Education (ED) 
  • Department of Health and Human Services (HHS) 
  • Department of Homeland Security (DHS) 
  • Federal Trade Commission (FTC) 

The joint statement provides bulleted examples of the existing laws each agency is tasked with enforcing, along with the actions taken or guidance promulgated by that agency to clarify how it is enforcing those laws with respect to automated systems and AI. Links to the relevant guidance and laws are also included in the statement. 

EEOC and OFCCP  

In the statement, the EEOC describes its enforcement authorities and the protected classes it focuses on, and provides links to the two technical assistance documents on AI the EEOC has published: one from 2022 on the use of automated systems under the Americans with Disabilities Act (ADA), and one from 2023 on evaluating adverse impact under Title VII of the Civil Rights Act for AI used in employment decisions. Both technical assistance documents align with long-standing legal frameworks and authorities that apply to all tools used in employment decision-making, including those based on AI methods.  

Elsewhere in the statement, DOL describes the Office of Federal Contract Compliance Programs’ (OFCCP) enforcement authorities and links to OFCCP’s 2019 FAQ on evaluating selection procedures. That FAQ reinforces the standard that any selection procedure demonstrating adverse impact must be validated, and that new technology-based screening devices will be evaluated like any other selection procedure under disparate impact theory. This section also notes the recent update to OFCCP’s itemized listing requirement to document where AI and automated systems are in use. DOL also describes the Civil Rights Center’s enforcement authorities for public entities.  

Actions from both the EEOC and OFCCP make it clear that the long-standing rules used to evaluate selection procedures will remain highly relevant core principles, even as the landscape shifts and additional rules are considered. The Uniform Guidelines on Employee Selection Procedures (Uniform Guidelines), in particular, appear to be the framework that the EEOC and OFCCP will use to evaluate AI-based systems. 

Looking Forward 

While the joint statement does not provide any new guidance or approaches, it is an emphatic signal from the agencies responsible for most regulatory enforcement that they are, and will continue to be, focused on potential violations arising from the use of AI and automated systems. It is reminiscent of another moment, almost 50 years ago, when several civil rights enforcement agencies came together to establish the previously mentioned Uniform Guidelines for the responsible development, validation, implementation, monitoring, and documentation of selection procedures. The current effort, however, is even broader in scope. 

This is another development in the AI regulatory space driven by Executive Order 14110, and it will not be the last we see in 2024. The use of AI in employment decision-making, and the regulation of that use, is becoming increasingly complex, but the foundational framework for that regulation is not new. Assessing selection procedures for bias, evaluating adverse impact, and demonstrating job relatedness are not novel concepts created in response to contemporary tools that leverage technology. Rather, they are longstanding challenges that Industrial/Organizational Psychologists have been thinking about and researching for many years. 

Stay tuned here for future DCI blogs on AI as we continue to monitor the ripple effect from Executive Order 14110. 
