By: Bre Timko
On October 16, 2024, the Department of Labor (DOL) released *Artificial Intelligence and Worker Well-being: Principles and Best Practices for Developers and Employers*, an extension of the Principles for AI and Worker Well-Being guidance released in May of this year. The DOL initially released eight guiding principles for the use of artificial intelligence (AI) in the workplace in response to President Biden's Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The updated guidance elaborates on each principle with bulleted action items that guide developers and employers on AI implementation and use.
The guidance begins by defining essential terms such as “automated system,” “algorithmic discrimination,” and “significant employment decisions,” adding to the growing body of definitions in this space. It then provides a detailed overview of best practices for the use of AI in the workplace. A summary of the best practices associated with each guiding principle appears in the table below.
| AI Principle | Best Practices |
| --- | --- |
| [North Star] Centering Worker Empowerment | Developers and employers should prioritize workers’ experiences and empowerment, particularly for workers from underserved communities, throughout all stages of AI design, deployment, and oversight. Employers should also engage in good-faith bargaining with unions regarding the use of AI and electronic monitoring in the workplace. |
| Ethically Developing AI | Developers should establish standards to ensure AI products protect workers’ civil rights, prioritize safety, and meet performance requirements. They should conduct impact assessments and/or independent audits and publish the results, and should ensure that roles focused on data review are high-quality jobs. AI systems should be designed to support employers’ ongoing monitoring and to allow for proper human oversight. |
| Establishing AI Governance and Human Oversight | Employers should establish governance structures for consistent AI adoption, incorporating worker input and providing widespread training. They should avoid relying solely on AI or electronic monitoring for significant employment decisions and should document which decisions those are, along with procedures for appeal and remedy. Regular independent audits of AI systems should be conducted, with public reporting on rights and safety evaluations. |
| Ensuring Transparency in AI Use | Employers should provide workers with advance notice before implementing AI systems that may impact them and should be transparent about what data are collected and stored and for what purpose. Workers should also be able to request, view, and correct the personal data used to make significant employment decisions about them. |
| Protecting Labor and Employment Rights | Employers should avoid using AI systems that could interfere with labor organizing, reduce legally owed wages or benefits, or negatively impact health and safety. Along with developers, they should audit AI systems for adverse impact against protected classes, including race, color, national origin, religion, sex, disability, age, and genetic information (a simple example of such a check is sketched after the table); consideration should also be given to the design and use of AI for individuals with disabilities. Audit results should be shared publicly before deployment, and workers should be encouraged to voice concerns about AI use without fear of retaliation. |
| Using AI to Enable Workers | Employers should evaluate how AI systems affect job tasks, required skills, opportunities, and risks, ideally piloting these systems before broader deployment. They should minimize electronic monitoring, using only the least invasive methods necessary, and ensure that AI-driven scheduling supports fair, predictable work schedules. Additionally, employers should consider ways for workers to share in the productivity gains or increased profits from AI use. |
| Supporting Workers Impacted by AI | Employers should offer training to help workers use AI systems and, when possible, prioritize retraining and reallocating workers displaced by AI to other roles within the organization. Additionally, they should collaborate with state and local workforce systems to support upskilling through education and training partnerships. |
| Ensuring Responsible Use of Worker Data | Developers should design AI systems with safeguards that secure and protect worker data. Employers should limit data collection, retention, and handling to what is necessary for legitimate business purposes, secure those data rigorously, and avoid sharing data outside the organization and its agents. |
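To make the adverse impact auditing referenced above more concrete: one widely used screen, drawn from the EEOC’s Uniform Guidelines on Employee Selection Procedures rather than from the DOL guidance itself, is the four-fifths (80%) rule, which compares each group’s selection rate to the highest group’s rate. Below is a minimal illustrative sketch in Python using entirely hypothetical counts; real audits typically pair this screen with statistical significance testing.

```python
# Illustrative four-fifths (80%) rule check for adverse impact.
# All counts below are hypothetical; replace with real selection data.

def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of applicants in a group who were selected."""
    return selected / applicants

# Hypothetical counts: (selected, applicants) for each demographic group.
groups = {
    "Group A": (48, 100),
    "Group B": (30, 100),
}

rates = {name: selection_rate(s, n) for name, (s, n) in groups.items()}
highest_rate = max(rates.values())

for name, rate in rates.items():
    # Impact ratio: this group's rate relative to the highest group's rate.
    impact_ratio = rate / highest_rate
    flag = "below 4/5ths threshold" if impact_ratio < 0.80 else "meets 4/5ths threshold"
    print(f"{name}: selection rate={rate:.2f}, impact ratio={impact_ratio:.2f} ({flag})")
```

The four-fifths rule is a screening heuristic, not a legal determination; an impact ratio below 0.80 generally signals that closer review by qualified analysts is warranted.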
The best practices outlined above give both AI developers and employers comprehensive, actionable strategies for developing and implementing AI in the workplace. We will likely continue to see guidance from the federal government regarding the use of AI in employment.
Stay tuned to the DCI Consulting Blog for up-to-date news on all things AI. Looking for independent auditing services? Check out DCI’s service offerings and read about DCI’s 20+ years of independent audit experience on our website.