
Artificial Intelligence is exploding. Everywhere we look, people are either concerned or excited about what AI can do. On one hand, Hollywood writers, novelists, and artists worry that AI will replace their creative works with lower-quality remixes of others’ intellectual property. On the other, businesses see opportunities to streamline workflows, cutting costs and increasing profits. We also encounter AI in less controversial ways: navigation, social media, music and streaming services, and more. But everyone can agree on one point: AI is here to stay, and its footprint on our lives will only grow.
Of course, AI is not flawless. One attorney learned that the hard way in a recent high-profile case. There, the attorney filed a brief with a court, supporting his arguments with legal citations. The problem? He had used ChatGPT to do his legal research. The tool supplied him with plenty of citations, but, unfortunately for the attorney, the cases were fake. Now he is in hot water with the judge and faces sanctions.
That’s a sobering reminder of the risks that cutting-edge technology poses when adopted without significant care. Like the attorney above, employers may want to take advantage of AI tools to offload work. After all, if AI can recruit, screen, hire, promote, transfer, monitor performance, demote, dismiss, and refer, why not allocate resources in the most beneficial way possible? Still, unwary employers who let an AI make employment decisions unsupervised create exposure that can prove far more costly in the long run than whatever resources they saved.
By way of example, consider how AI tools intersect with Title VII, which prohibits employment discrimination on the basis of race, color, religion, sex, or national origin. Title VII makes it unlawful for an employer to intentionally discriminate against employees based on those characteristics. But that type of violation, known as “disparate treatment,” is not the only way to violate Title VII. It is also generally unlawful for an employer to use seemingly neutral practices and tests that disproportionately impact certain protected groups, a theory known as “disparate impact.”
Take an example outside of the employment context. Imagine an insurer uses AI to set insurance rates based on zip code. The AI weighs various neutral factors to differentiate geographic areas: accident frequency, local crime rates, and so on. But because of demographic distribution, the AI’s reliance on these neutral factors means that individuals of one race are charged much higher insurance rates than others. Depending on the circumstances, that disparate result could expose the insurer to liability.
Back in the realm of employment, the concerns are much the same for employers. In fact, the Equal Employment Opportunity Commission recently released technical guidance on the subject of AI and disparate impact under Title VII. In a nutshell, the EEOC explained that the traditional disparate-impact framework applies to algorithmic decision-making tools used in selection procedures like hiring, promotion, and firing; that employers may be on the hook even when a tool is designed or administered by an outside vendor; and that employers should assess their tools for substantially different selection rates, using the longstanding “four-fifths rule” as a rough rule of thumb rather than a definitive test.
The EEOC’s guidance won’t be the last word on this frontier. We can expect significant pushes for agencies to regulate AI as algorithms, bots, and the like become further integrated into our daily decision-making. But employers shouldn’t wait for government intervention. AI tools create significant opportunities, and employers can and should begin using them to the fullest extent feasible. Just make sure there are safeguards in place to keep those tools from creating liability.
And, please, when you consult with your legal counsel on these issues, make sure they check the AI’s research.
Contact the author, Adam Walker.