OMB Policy Regarding Federal Agencies’ Use of Artificial Intelligence

By: Kristen Pryor and Dave Schmidt

To meet obligations set forth in the President’s Artificial Intelligence (AI) Executive Order 14110, the Office of Management and Budget (OMB) published a memorandum for Federal Agencies on March 28, 2024. This document directs Federal Agencies to 1) strengthen AI governance, 2) advance responsible AI innovation, and 3) manage risks from the use of AI. This memorandum covers AI used by Federal Agencies; most required actions below focus on AI with the potential to impact the rights or the safety of the public.1

To strengthen AI governance, Federal Agencies are directed to designate Chief AI Officers, who will coordinate the use of AI across their agencies, by May 27, 2024. Additionally, Chief Financial Officers Act (CFO Act) agencies are directed to convene AI Governance Boards, composed of relevant senior officials, by May 27, 2024. Within 180 days, and every two years thereafter, each agency must submit and publish a compliance plan that either outlines its approach to achieving consistency with the memorandum or declares that no covered AI is in use. Annually, each agency, except the Department of Defense and the Intelligence Community, must submit and publicly post an inventory of the AI it uses, identifying potential safety- or rights-impacting uses and how those risks are being managed.

To advance responsible AI innovation, CFO Act agencies are directed to develop and release, within one year, a strategy for identifying and removing barriers to AI use and improvement. Agencies are directed, at a minimum, to consider IT infrastructure, data usage and quality, cybersecurity, and the potential and pitfalls of generative AI. Agencies are also encouraged to prioritize hiring and retaining AI and AI-enabling talent. Further, the memorandum directs agencies to share their AI code, models, and data across government to the extent possible and with appropriate attention to data security. The memorandum also advocates collaboration among agencies to share best practices, lessons learned, technical resources, and examples of effective governance and uses of AI to address key challenges.

To manage risks from the use of AI, Federal Agencies are required, by December 1, 2024, to implement concrete safeguards when using AI that could impact Americans’ rights or safety. These safeguards include mandatory actions to reliably assess, test, and monitor AI. The minimum actions include: 

  • Conducting impact assessments that document the purpose of the AI, the expected benefits, the potential risks, and the quality and relevance of data used to develop, train, and test the AI. 
  • Conducting real-world testing to ensure the AI will achieve the intended benefits and mitigate the potential risks. 
  • Conducting independent evaluations of the relevant AI documentation by a group or individuals not directly involved in developing the AI. 

Further, the memorandum requires ongoing monitoring, regular evaluation and mitigation of risks, adequate human training and oversight, and actions to manage risks in the procurement of AI, among other safeguards. 

One of the most interesting requirements associated with AI that has the potential to impact citizens’ rights or safety is determining whether an AI model results in significant disparities across Federally protected demographic classes (e.g., race, gender, age). The memorandum calls for identifying and documenting these disparities in a real-world context, as well as considering whether the AI model might create proxies for demographic classes that influence the model’s performance. The memorandum further requires that agencies mitigate disparities that lead to or perpetuate unlawful discrimination or harmful bias, or that decrease equity. While it requires ensuring consistency with applicable law, it also indicates that an agency should cease use of the AI if it cannot adequately mitigate the risk of unlawful discrimination.2 This kind of nuanced analysis and evaluation is work that DCI is frequently involved in and experienced with. 
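The memorandum does not prescribe a specific statistical method for measuring such disparities. As one illustration of the kind of first-pass screen often used in disparate impact analysis, the short Python sketch below compares favorable-outcome rates across groups; the group labels, counts, and the conventional four-fifths (0.80) threshold are assumptions for the example rather than anything drawn from the OMB guidance, and in practice such a screen would be followed by more rigorous statistical and practical significance testing.

    # Illustrative sketch only: the memorandum does not specify a method; the
    # group labels, counts, and 0.80 threshold below are conventional assumptions.

    def selection_rate(selected: int, total: int) -> float:
        """Share of people in a group receiving the favorable AI-assisted outcome."""
        return selected / total if total else 0.0

    def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
        """Compare each group's selection rate to the highest-rate group.

        `outcomes` maps a group label to (number selected, number considered).
        Ratios below ~0.80 (the traditional four-fifths rule of thumb) are often
        treated as a flag for closer review, not as proof of unlawful discrimination.
        """
        rates = {g: selection_rate(sel, tot) for g, (sel, tot) in outcomes.items()}
        reference = max(rates.values())
        return {g: (r / reference if reference else 0.0) for g, r in rates.items()}

    if __name__ == "__main__":
        # Hypothetical counts of favorable outcomes from an AI-assisted decision process.
        example = {"Group A": (48, 100), "Group B": (30, 100)}
        for group, ratio in impact_ratios(example).items():
            flag = " <- review further" if ratio < 0.80 else ""
            print(f"{group}: impact ratio {ratio:.2f}{flag}")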

The memorandum also highlights “red-teaming”3 as relevant in some situations to appropriately stress-test AI models and evaluate whether a model’s behavior is clear, logical, and functioning as intended across a wide range of situations. Further, the memorandum highlights several frameworks related to AI risk4 for agencies to consider, in addition to long-standing legal frameworks for evaluating the efficacy and disparate impact of decision-making mechanisms, which remain relevant as agencies seek to meet the requirements set forth in the memorandum. Fortunately, agencies are encouraged to engage with external experts in seeking to comply with these requirements. 

This is the first of many developments expected in the AI regulatory space, as actions driven by Executive Order 14110 will continue to surface over the course of the calendar year. Regulations related to AI are becoming increasingly complex, and it will be critical that agencies seeking to comply with the requirements consider enlisting the help of experts such as Industrial-Organizational psychologists.


References/Notes

1 AI is considered rights-impacting when its outputs assist in decision-making related to areas including, but not limited to, civil rights and civil liberties, privacy, equal opportunity in key areas such as education, housing, or employment, and access to critical government services and resources in areas such as healthcare, financial services, social services, housing, or transportation. AI is considered safety-impacting when its outputs could affect the safety of human life or well-being, the climate or environment, critical infrastructure, or strategic assets and resources.

2 Agencies tasked with enforcing relevant laws have recently released a joint statement affirming their focus on applying these laws to automated systems: https://www.dol.gov/sites/dolgov/files/OFCCP/pdf/Joint-Statement-on-AI.pdf

3 “a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI. Artificial Intelligence red-teaming is most often performed by dedicated ‘red teams’ that adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.” (Executive Order 14110, Section 3(d))

4 National Institute of Standards and Technology’s AI Risk Management Framework; Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights; international standards such as the ISO/IEC AI Guidance on risk management.
