President Biden Signs an Executive Order Related to the Use of AI

By: Dave Schmidt and Bre Timko

On October 30, 2023, President Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It covers a very wide range of areas related to the use of artificial intelligence (AI). The goal of the Executive Order is to facilitate the benefits of AI while mitigating the associated risks. A prevalent theme throughout the Executive Order is the need to balance risks against rewards with AI—promoting innovation and the promise of AI while demanding oversight and responsibility in its use. 

The purpose of this blog is not to summarize the entirety of the Executive Order, but rather, to highlight some areas of particular relevance to the use of AI in employee selection practices. 

1. Of critical relevance to employee selection practices is Section 7 (Advancing Equity and Civil Rights), sub-section 3 (Strengthening AI and Civil Rights in the Broader Economy), in which Section 7.3(a) states: 

“Within 365 days of the date of this order, to prevent unlawful discrimination from AI used for hiring, the Secretary of Labor shall publish guidance for Federal contractors regarding nondiscrimination in hiring involving AI and other technology-based hiring systems.” 

Given the specific mention of federal contractors, it seems likely that any such guidance will require involvement from the Office of Federal Contract Compliance Programs (OFCCP) and that it will be available by late October 2024. While the guidance is focused on federal contractors, it would behoove all employers to take note of it, as the broad legal framework and authorities involved are very similar regardless of federal contractor status. 

DCI expects that guidance issued will be grounded in long-standing frameworks and authorities (Title VII, Executive Order 11246, Uniform Guidelines on Employee Selection Procedures, and so on). However, it is reasonable to expect that the guidance issued may serve as an addendum addressing emerging areas and closing gaps that have surfaced in recent years as the use of AI in employee selection practices has dramatically increased and significantly evolved. As such, topics including (but not limited to) algorithmic explainability and transparency, job applicant notifications, and considerations of patterns in training data are some that would be useful to see addressed in any guidance coming out of this Executive Order. 

2. Section 7.3(d) also merits attention, as it addresses concerns about the use of AI with respect to individuals with disabilities:

“to help ensure that people with disabilities benefit from AI’s promise while being protected from its risks including unequal treatment from the use of biometric data like gaze direction, eye tracking, gait analysis, and hand motions” 

And encourages agencies to consider issuing: 

“technical assistance and recommendations on the risks and benefits of AI in using biometric data as an input” 

While the Executive Order does not explicitly focus this section on employee selection practices, it is reasonable to expect that selection will be addressed as well. Although many vendors and employers have recently stopped using biometric features (e.g., facial and auditory features) in employee selection algorithms, this is still an area where guidance might be useful. Further, the EEOC issued guidance last year on the use of AI in employee selection as it relates to individuals with disabilities, so further guidance on this front might build on that foundation. 

3. Section 3.d (Definitions) both defines and encourages the use of AI red-teaming: 

“The term ‘AI red-teaming’ means a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI … [in which red teams] adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.” 

This approach can be quite useful when evaluating AI-enabled employee selection procedures, particularly where the algorithm involves deep learning, natural language processing, or similar machine learning methods. A red-teaming effort should consider all aspects of the AI model build and the operational use of the selection procedure (data inputs, processing, outputs, and so on). If this has not previously been on the radar, it is certainly worth adding a red-teaming step to the evaluation plan. The concept maps onto established research methods from the Industrial/Organizational (I/O) Psychology literature and practice, and we expect these approaches to become a valuable research step in AI tool development and validation. I/O psychologists are particularly well suited to align red-teaming practices with professional standards. DCI has developed a set of related red-teaming methods and services. 
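One concrete check a red team might run on a selection tool's outputs is an adverse-impact analysis under the four-fifths rule from the Uniform Guidelines. The sketch below is purely illustrative (it is not DCI's method, and the group labels and numbers are hypothetical): it computes each group's selection rate, divides by the highest-rate group, and flags groups falling below the 4/5ths (0.80) threshold.

```python
# Illustrative only: a minimal four-fifths-rule check that a red team
# might apply to outputs of an AI-enabled selection procedure.
# Data and group names are hypothetical.

def impact_ratios(selections):
    """selections: dict of group -> (n_selected, n_applicants).
    Returns each group's selection rate divided by the highest group's rate."""
    rates = {g: sel / n for g, (sel, n) in selections.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

def flagged_groups(selections, threshold=0.8):
    """Flag groups whose impact ratio falls below the 4/5ths threshold."""
    return [g for g, r in impact_ratios(selections).items() if r < threshold]

# Hypothetical screening outcomes from an AI model:
results = {"group_a": (48, 100), "group_b": (30, 100)}
print(flagged_groups(results))  # group_b: 0.30 / 0.48 ≈ 0.63 < 0.80
```

A ratio below 0.80 is generally treated as evidence of adverse impact warranting further scrutiny; a full red-teaming effort would go well beyond this single statistic to probe inputs, features, and failure modes.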

4. Section 2.d (Policy and Principles) emphasizes that there is a need to ensure that AI does not: 

“disadvantage those who are already too often denied equal opportunity and justice.  From hiring to housing to healthcare, we have seen what happens when AI use deepens discrimination and bias, rather than improving quality of life.” 

This Executive Order is intended to build upon prior efforts such as the Blueprint for an AI Bill of Rights, released by the White House Office of Science and Technology Policy (OSTP) in October 2022, and the NIST AI Risk Management Framework, released in January 2023. 

While there is a lot to this Executive Order, these were themes that seemed particularly relevant to employee selection. Stay tuned to the DCI blogs for regular updates on the use and governance of AI in employee selection. 
