By: Dave Schmidt and Bre Timko
- (North Star) Centering Worker Empowerment
- Ethically Developing AI
- Establishing AI Governance and Human Oversight
- Ensuring Transparency in AI Use
- Protecting Labor and Employment Rights
- Using AI to Enable Workers
- Supporting Workers Impacted by AI
- Ensuring Responsible Use of Worker Data
Consistent with themes across a wide range of AI guidance and enacted or proposed regulations, transparency, governance and human oversight, and ethical development are critical to developing and deploying AI. There is also an emphasis on responsible use of worker data, limiting collection to only the data necessary; in other words, employers should have a clear rationale for any data they use. The DOL is explicit that AI should assist workers and should not undermine worker rights. Additionally, where AI may significantly impact workers' jobs or responsibilities, employers should support and upskill workers to make the transition.
Finally, the “North Star” guiding principle calls for not only notifying workers and their representatives of the use of AI, but also ensuring “genuine input in the design, development, testing, training, use, and oversight of AI systems for use in the workplace,” particularly from historically underserved groups. This principle echoes one of the practices the OMB highlighted in its March memorandum on AI (i.e., to “Consult and incorporate feedback from affected communities and the public”); this may be an emerging theme in federal responses that AI developers should take note of.
Expect continued activity from federal agencies in response to Executive Order 14110 in the coming months. Stay tuned to DCI blogs for highlights and insights as these responses are published.