16 April 2024

White House Drops Clues to Future AI Guidance for Employers


On March 28, 2024, the Biden administration published “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” which contains the administration’s guidance to federal agencies regarding risk management steps they must take for their own use of AI.

This guidance for federal agencies’ use of AI is likely a prelude to future federal guidance for private employers, as any such guidance will very likely align with the principles expressed in the new document.

We already know that the U.S. Department of Labor (DOL) and other federal agencies are actively developing guidance regarding employers’ use of artificial intelligence.  The DOL is working on a “broader value-based document” that contains “principles and best practices” for employers using AI, as well as a separate document to share “promising practices” for employers using AI selection tools for hiring or promotion decisions.  The first of these documents should be released by the end of April.

“Safety-Impacting” and “Rights-Impacting” AI Risk

The new guidance document establishes minimum AI risk management practices federal agencies must follow for “safety-impacting” and “rights-impacting” uses of AI.  The document defines “safety-impacting AI” as AI whose output produces an action or serves as a principal basis for a decision that could significantly impact the safety of individuals, the environment, critical infrastructure, or strategic government resources.  Similarly, “rights-impacting AI” is defined as AI whose output serves as a principal basis for a decision or action concerning a specific individual or entity that has a significant effect on that individual’s or entity’s civil rights, equal opportunity, or access to critical government resources.

The document also includes a broad list of employment-related uses that are presumed to be “rights-impacting.”  This list includes AI applications that “control or significantly influence the outcome[] of” —

Determining the terms or conditions of employment, including pre-employment screening, reasonable accommodation, pay or promotion, performance management, hiring or termination, or recommending disciplinary action; performing time-on-task tracking; or conducting workplace surveillance or automated personnel management . . .

The list of presumptively “rights-impacting” uses of AI also includes any application that seeks to “[d]etect[] or measur[e] emotions, thought, impairment, or deception in humans.”

The document will require federal agencies to “conduct adequate testing to ensure the AI, as well as components that rely on it, will work in its intended real-world context,” including an “independent evaluation” “to ensure that the system works appropriately and as intended, and that its expected benefits outweigh its potential risks.”  Agencies must also monitor their use of AI to detect potential “degradation of the AI’s functionality” and “changes in the AI’s impact on rights and safety.”

As an example of some of the Biden administration’s concerns, the chairwoman of the Equal Employment Opportunity Commission stated last month that her agency’s analysis of AI systems has discovered examples where people with protected characteristics were disproportionately over-represented in “bad” data sets, and disproportionately under-represented in “good” data sets, that were used to train AI.  Related to this concern, the guidance document directs federal agencies to assess the quality and representativeness of the data used in the AI’s “design, development, training, testing, and operation and its fitness to the AI’s intended purpose,” even if the AI tool has been provided by a vendor.

Coming Attractions?

Although the DOL has an end-of-April deadline (set by an executive order issued by President Biden) to publish “principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits,” the guidance document is very likely a preview of coming attractions for private employers, as the administration presents a consistent message on the risks and opportunities of AI.

Employers already using AI for employment purposes should consider evaluating the extent to which their use of AI aligns with the administration’s recent guidance, and whether changes should be made as a result.  Although the degree of litigation and enforcement risk is still unclear, there is no doubt that AI is becoming a significant priority for federal enforcement agencies like the DOL.

Questions?

Contact the author Brett Swearingen.