13 June 2023

When the Robot Is Racist: What Employers Should Consider When Adopting AI


Artificial intelligence is exploding.  Everywhere we look, people are either concerned or excited about the potential AI brings.  On one hand, Hollywood writers, novelists, and artists worry that AI will replace their creative works with lower-quality recombinations of others’ intellectual property.  On the other, businesses see opportunities to streamline workflows, decreasing expenditures and increasing profit.  We also encounter AI in less controversial settings: navigation, social media, music and streaming services, and more.  Everyone can agree on one point: AI is here to stay, and its footprint in our lives will only grow.

Of course, AI is not flawless.  One attorney found that out the hard way in a recent high-profile case.  The attorney filed a brief with a court, supporting his arguments with legal citations.  The problem?  He had used ChatGPT to do his legal research.  The tool supplied him with plenty of citations, but, unfortunately for the attorney, the cases were fabricated.  Now he is in hot water with the judge and faces sanctions.

That’s a sobering reminder of the risks that cutting-edge technology poses when it is adopted without significant care.  Like the attorney above, employers may want to use AI tools to offload work.  After all, if AI can recruit, screen, hire, promote, transfer, monitor performance, demote, dismiss, and refer, why not allocate resources in the most beneficial way possible?  Still, unwary employers who let an AI loose to make employment decisions unsupervised create legal exposure that can far outweigh those savings in the long run.

By way of example, consider how AI tools intersect with Title VII, which prohibits discrimination in employment on the basis of race, color, religion, sex, or national origin.  Title VII makes it unlawful for an employer to intentionally discriminate against employees based on those characteristics.  But intentional discrimination, or “disparate treatment,” is not the only way to violate Title VII.  It is also generally unlawful for an employer to use seemingly neutral practices and tests that disproportionately harm members of a protected group, a theory known as “disparate impact.”

Take an example outside the employment context.  Imagine an insurer uses AI to set insurance rates based on zip code.  The AI weighs various neutral factors, such as accident frequency and local crime rates, to differentiate the geographic areas.  But because of demographic distribution, the AI’s results based on those neutral factors mean that individuals of one race are charged much higher insurance rates than others.  Even though no one programmed the tool to consider race, that outcome could be a problem in some circumstances.
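
To make that mechanism concrete, here is a minimal, purely hypothetical sketch in Python.  The zip codes, risk scores, and demographic shares below are invented for illustration; the point is only that a pricing rule that never looks at race can still produce racially skewed average premiums when the factor it does use correlates with race.

    # Hypothetical illustration of proxy discrimination: the pricing rule
    # never considers race, only a zip-level risk score, but zip code
    # correlates with demographics.  All numbers are invented.

    zips = {
        "11111": {"risk_score": 1.0, "share_group_a": 0.9, "share_group_b": 0.1},
        "22222": {"risk_score": 2.0, "share_group_a": 0.1, "share_group_b": 0.9},
    }

    BASE_RATE = 100  # hypothetical base monthly premium

    # "Neutral" rule: premium scales with the zip's risk score, nothing else.
    premiums = {z: BASE_RATE * info["risk_score"] for z, info in zips.items()}

    # Average premium paid by each group, weighted by where its members live.
    for group in ("share_group_a", "share_group_b"):
        total_share = sum(zips[z][group] for z in zips)
        avg = sum(premiums[z] * zips[z][group] for z in zips) / total_share
        print(f"{group}: average premium ${avg:.2f}")

On these invented numbers, Group A averages $110 per month while Group B averages $190, even though race never appears anywhere in the rule.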

Back in the realm of employment, the concerns are much the same for employers.  In fact, the Equal Employment Opportunity Commission (EEOC) recently released technical guidance on AI and disparate impact under Title VII.  Here’s what the EEOC said in a nutshell:

  • In many cases, employers are responsible for disparate impact caused by AI tools. That’s true even if the tools are provided by a third party, such as a software vendor.  As a result, employers who use AI tools are wise to consult with those vendors to determine what steps they have taken to ensure their AI tools will not inadvertently discriminate against applicants or employees.
  • If an employer’s AI tools have a disparate impact on applicants or employees, the employer must show that each tool’s use is job-related and consistent with business necessity. That showing is tough to make in most cases, and employers would do well not to find themselves in that position.  Again, due diligence on the front end will often save headaches on the back end.
  • If an employer develops an AI tool and discovers that its use would disparately impact applicants or employees, the employer should take steps to reduce the impact or select a different tool. Because the process of developing an algorithmic tool has many moving parts, slight alterations can often produce similar tools that, as alternatives, do not disparately impact applicants or employees.  One rough way to screen a tool’s outcomes for that kind of impact is sketched below.
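
The EEOC’s guidance discusses the longstanding “four-fifths rule” as a rough screen for disparate impact: if one group’s selection rate is less than 80% of the most-selected group’s rate, the tool warrants a closer look.  The rule of thumb is not a legal safe harbor, and the applicant counts below are hypothetical, but as a minimal sketch in Python, the check might look like this:

    # Illustrative four-fifths-rule screen for an AI screening tool's
    # outcomes.  The applicant counts are hypothetical, and the four-fifths
    # rule is a rough rule of thumb, not a substitute for a proper
    # statistical or legal analysis.

    outcomes = {
        "Group A": {"applied": 100, "selected": 60},
        "Group B": {"applied": 50, "selected": 15},
    }

    # Selection rate = selected / applied for each group.
    rates = {g: c["selected"] / c["applied"] for g, c in outcomes.items()}

    # Compare each group's rate to the highest group's rate.
    highest = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / highest
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.0%}, "
              f"{ratio:.0%} of highest rate -> {flag}")

Here, Group B’s 30% selection rate is only half of Group A’s 60% rate, well under the four-fifths threshold, and the sort of result that should prompt the validation and alternatives analysis the EEOC describes.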

The EEOC’s guidance won’t be the last word on this frontier.  We can expect significant pushes for agencies to regulate AI as algorithms, bots, and the like become further integrated into our daily decision-making.  However, employers shouldn’t wait for government intervention.  AI tools create real opportunities, and employers can and should take advantage of them to the fullest extent feasible.  Just make sure there are safeguards in place to prevent those tools from creating liability.

And, please, when you consult with your legal counsel on these issues, make sure they check the AI’s research.

Questions?

Contact the author, Adam Walker.