By: Bre Timko and Dave Schmidt
Federal leaders, including Equal Employment Opportunity Commission Chair Charlotte Burrows, Assistant Attorney General Kristen Clarke of the U.S. Department of Justice’s Civil Rights Division, Consumer Financial Protection Bureau Director Rohit Chopra, and Federal Trade Commission Chair Lina Khan, released a joint statement earlier this week on enforcement efforts targeting discrimination and bias in automated systems, including those using artificial intelligence (AI). The agencies define automated systems “to mean software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions.” This follows other recent federal actions focused on artificial intelligence in employee selection and underscores the trend of increasing federal attention to the use of AI in hiring.
Agency leaders emphasized three potential problem areas that may contribute to unlawful discrimination:
- Training data sets may be imbalanced, unrepresentative, or contain inputs correlated with protected class characteristics, any of which can skew the outputs of automated systems (see the illustrative sketch after this list).
- Lack of transparency in the development and use of automated systems (“model opacity”) can make it difficult to determine whether an automated system is fair and unbiased.
- Developers may build automated tools without appropriate context about how the technology will be used or who will be using it.
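The joint statement does not prescribe any particular audit method, but the first problem area above is often screened for in practice with a simple selection-rate comparison such as the four-fifths (80%) rule from the EEOC’s Uniform Guidelines on Employee Selection Procedures. The sketch below is purely illustrative: the data, group labels, and pass/flag logic are hypothetical and are not drawn from the agencies’ statement.

```python
# Illustrative sketch only: screening an automated selection tool's output
# for group-level skew using the four-fifths (80%) rule. All data below are
# hypothetical and for demonstration purposes.
from collections import Counter

# Hypothetical outcomes from an automated screening tool:
# (applicant_group, selected) pairs.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applicants = Counter(group for group, _ in outcomes)
selected = Counter(group for group, was_selected in outcomes if was_selected)

# Selection rate per group, and the rate of the most-favored group.
rates = {group: selected[group] / applicants[group] for group in applicants}
highest_rate = max(rates.values())

for group, rate in rates.items():
    # Impact ratio: this group's selection rate relative to the highest rate.
    # A ratio below 0.8 is a common flag for further adverse-impact review.
    ratio = rate / highest_rate
    flag = "review for adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

A check like this is only a first-pass indicator, not a legal conclusion; it would typically be paired with the kinds of transparency and context reviews the agencies describe in the other two problem areas.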
The federal agencies reiterated their commitment to continuously monitoring the development of automated systems and their potential contribution to bias and discrimination. They jointly pledged to vigorously protect individual rights, whether those rights are put at risk by traditional tools or by more advanced AI technologies.
Stay tuned to DCI’s Consulting Blog for future updates on AI-centered regulations at the local, state, and federal levels.