By: Bre Timko, Dave Schmidt, Grace Boudjalis
Specifically, the EEOC alleged that the software was programmed to automatically reject female applicants over the age of 55 and male applicants over the age of 60. In total, the EEOC alleged that iTutorGroup failed to hire more than 200 qualified, older applicants from the United States because of their age in early 2020.
Applicants generally have no idea why they are rejected during the selection process. In the iTutorGroup case, one applicant resubmitted an application identical to one iTutorGroup had rejected, but with a more recent birthdate. iTutorGroup offered this presumably younger applicant an interview, raising suspicions of age discrimination. The consent decree stipulates that iTutorGroup adopt anti-discrimination policies and conduct anti-discrimination trainings, along with reporting requirements to confirm attendance at those training sessions. It also requires iTutorGroup to pay $365,000 to affected job seekers and mandates that the organization invite all affected applicants to reapply. The consent decree specifically notes that iTutorGroup must ensure no future violations with regard to age or sex discrimination.
Nearly one in four organizations and two in five large organizations use AI to support HR decisions, with seventy-nine percent of those using AI doing so in recruitment and hiring.1 In some circumstances, the use of AI tools can result in intentional or unintentional discrimination if applicants are disproportionately rejected or otherwise disadvantaged based on a federally protected characteristic such as age, race, or sex without sufficient evidence of job-relatedness. It is worth noting that, while this challenge is being classified as an AI settlement, the algorithm at issue was simple in nature: unlike more advanced artificial intelligence-based tools that involve many features and machine learning, it simply disqualified applicants who met certain age and sex criteria. Furthermore, the allegation falls under the disparate treatment theory of discrimination (i.e., intentional discrimination) as opposed to disparate impact theory (i.e., where a facially neutral tool results in discrimination). While the EEOC and other federal agencies have demonstrated a heightened focus on the use of AI in the workplace and recent guidance has focused on facially neutral tools under disparate impact theory, this settlement indicates that the EEOC is interested in both disparate treatment and disparate impact, and the agency is using a broad definition of what constitutes AI.2
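To put the "simple in nature" point in concrete terms, logic of the kind alleged can be expressed in a few lines of code. The sketch below is a hypothetical reconstruction based only on the criteria described in the EEOC's allegations; the function name, data fields, and thresholds are assumptions for illustration, not iTutorGroup's actual implementation.

```python
# Hypothetical sketch of the kind of hard-coded rule alleged in the complaint;
# NOT iTutorGroup's actual code, and the field names are placeholders.
# The point: no machine learning is involved, yet the logic still operates
# as an automated screening tool.
def auto_reject(applicant: dict) -> bool:
    """Return True if the applicant would be screened out under the alleged rule."""
    age, sex = applicant["age"], applicant["sex"]
    return (sex == "female" and age > 55) or (sex == "male" and age > 60)

print(auto_reject({"age": 56, "sex": "female"}))  # True under the alleged rule
print(auto_reject({"age": 56, "sex": "male"}))    # False under the alleged rule
```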
This will likely be the first of many challenges ahead in this space. Any assessment – AI-based or otherwise – that is used in an organization’s selection process should be regularly evaluated for compliance with state and federal anti-discrimination laws.
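One common way to operationalize that kind of regular evaluation is an adverse impact analysis, for example comparing selection rates across groups and applying the four-fifths (80%) rule from the Uniform Guidelines on Employee Selection Procedures. The sketch below is a simplified illustration using placeholder data and group labels; a real audit would also involve statistical significance testing, practical significance considerations, and legal review.

```python
from collections import Counter

def selection_rates(records):
    """records: iterable of (group, selected) pairs; returns selection rate per group."""
    applied, selected = Counter(), Counter()
    for group, was_selected in records:
        applied[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / applied[g] for g in applied}

def four_fifths_flags(rates):
    """Flag groups whose selection rate is below 80% of the highest group's rate."""
    top = max(rates.values())
    return {g: r / top < 0.8 for g, r in rates.items()}

# Illustrative placeholder data: (age_group, hired)
records = [("under_40", True)] * 30 + [("under_40", False)] * 70 \
        + [("40_plus", True)] * 15 + [("40_plus", False)] * 85
print(four_fifths_flags(selection_rates(records)))
# -> {'under_40': False, '40_plus': True}  (0.15 / 0.30 = 0.5, below the 0.8 threshold)
```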
DCI will continue to monitor cases involving AI bias in hiring as they occur.
References
1 Society for Human Resource Management (2022). Retrieved from https://advocacy.shrm.org/SHRM-2022-Automation-AI-Research.pdf.
2 As compared to a definition requiring an assessment to specifically use artificial intelligence or machine learning methods, or to a definition requiring that an assessment include certain relevant characteristics of these methods (e.g., New York City's Local Law 144).