Can Using AI in Hiring Practices Open Your Business Up to a Discrimination Lawsuit?


As the economy becomes increasingly technology-driven, human resources departments are busier than ever. Online recruiting services make it easy for applicants to apply to hundreds of jobs, leaving companies to sift through potentially thousands of resumes for each opening. Many employers have turned to software to screen resumes and streamline the entire human resources process.

The vast majority of United States employers—83 percent of them according to the U.S. Equal Employment Opportunity Commission—now use some sort of automated technology in their hiring processes. Almost every Fortune 500 company—99 percent of them according to the EEOC—uses artificial intelligence in the employment process. Companies rely on AI because they need help streamlining their hiring operations.

The Use of AI Software in Hiring Can Introduce Risks

These efficiencies also introduce legal risks for employers. Companies could inadvertently violate numerous federal and state laws that prohibit discrimination in hiring and employment. Although employers should not completely avoid the use of artificial intelligence, they must not let it operate without oversight. Businesses can be sued for discrimination because they are the ones who adopt the software and use it as part of their hiring process.

One of the major problems with employing artificial intelligence in hiring is that it relies on existing information. Artificial intelligence scans past hiring history to help make decisions and give guidance. The problem is that past decisions are not always free from discrimination. Hiring software may perpetuate existing biases and formalize them in the hiring process.

AI Could Increase the Risk of Discriminatory Practices

There are many ways the use of AI in hiring could result in employment discrimination, including:

  • Screening job applicants based on where they live, which could lead to racial discrimination
  • Disfavoring applicants with gaps on their resume, which may discriminate against people with certain health conditions (in violation of the Americans with Disabilities Act)
  • Screening applicants based on the words they use in their resume (men are more likely to use words that companies may consider to be “stronger,” leading to gender-based discrimination)

Not only is there a risk of discrimination in the data itself, but humans often misapply and misuse that data. The output of the AI model may already be flawed, and hiring managers can then layer their own biases on top of it or rely on it uncritically, without questioning what the data is telling them.

Federal Regulators Have Raised Alarms

In May 2022, the EEOC issued guidance on the use of artificial intelligence in hiring. The EEOC’s guidance was not as much about providing policies and procedures as it was about warning of the potential ways employers could violate federal law when using AI. The guidance did mention “promising practices” that employers could use to avoid acting in a discriminatory manner when using AI to make hiring decisions. Many of these “promising practices” put the onus on employers to recognize the potential for discrimination and use alternate means for hiring decisions.

Attorneys are predicting an increase in litigation tied to AI-based hiring practices. Already, Workday Inc., a maker of AI software, has been named in a class action lawsuit alleging that its products promote discrimination in hiring. While that lawsuit targets a technology provider, employers, as the users of this technology, can also expect to bear the brunt of future litigation: plaintiffs are more likely to sue the employer than the company that made the software.

Using Technology Is Not a Defense to a Lawsuit

When a software program operates as a “black box” algorithm, it is even harder for a company to defend itself and justify its decisions. If a judge or jury finds the employer’s actions discriminatory under the law, the employer can be held liable. Reliance on artificial intelligence is not a sufficient excuse, nor is a “the machine made me do it” defense. The reasons why the company implemented AI hiring software do not matter.

How Companies Could Protect Themselves from Liability

At a minimum, there must be sufficient human review of hiring decisions to flush out any potential discrimination. Businesses that blindly trust artificial intelligence could be opening themselves up to legal problems. This is not to say that a company cannot use AI in hiring, but there must be human oversight of the process.

For example, companies could consider doing the following:

  • Hiring a third party to audit the results from hiring software
  • Closely scrutinizing software programs before the company purchases them and adopts them for usage
  • Drafting policies and procedures for ensuring that the use of AI does not result in discrimination, and then following them
  • Including corporate legal officers in decisions regarding hiring innovations
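To illustrate what an audit of hiring software might examine, here is a minimal sketch of the “four-fifths rule” adverse-impact test that regulators commonly reference when comparing selection rates across groups. The group names and counts are hypothetical assumptions for illustration only; this is not legal guidance, and a real audit would use actual applicant and hire data reviewed with counsel.

```python
# Illustrative sketch of an adverse-impact ("four-fifths rule") check.
# All figures are hypothetical; a real audit would use actual
# applicant and hire counts for each legally protected group.

def selection_rate(hired, applied):
    """Fraction of applicants from a group who were selected."""
    return hired / applied

def impact_ratios(groups):
    """Compare each group's selection rate to the highest-rate group.

    A ratio below 0.8 (the four-fifths rule of thumb) is often treated
    as a sign of possible adverse impact warranting closer review.
    """
    rates = {g: selection_rate(h, a) for g, (h, a) in groups.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes: group -> (hired, applied)
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
ratios = impact_ratios(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # group_b selects at 30/48 = 0.625 of group_a's rate
print(flagged)  # group_b falls below the 0.8 threshold
```

A flagged ratio does not by itself prove discrimination, but it is the kind of statistical signal an independent auditor would investigate before an employer continues using the tool.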

Already, New York City has passed a law that prohibits employers from using AI software in hiring unless they have already conducted an audit to detect any possible biases or discrimination.

A company will need to run its own customized analysis of how best to comply with the law when using AI software. There may be no one-size-fits-all answer to the question of how to minimize litigation risk. Before a company initiates a process, it should conduct a detailed review of the possible results and implement its own safeguards. It is always easier to minimize litigation risk on the front end, and it is best to do so with the advice of experienced legal counsel.

Companies Should Blend Human Oversight with Technology

On the flip side, completely eliminating AI from hiring decisions and returning to purely human judgment may not be helpful either. Even before the use of AI, companies were routinely sued for discriminatory hiring practices. Thus, some blend of technology and human decision-making is appropriate and may protect businesses from liability.

For companies, the appropriate mix involves meaningful human input and supervision over the use of artificial intelligence in hiring. Unrestrained and uncritical use of AI could create litigation risks that outweigh any efficiencies gained from the technology. When used properly, AI can provide data-driven support for human decisions and introduce efficiency when businesses receive countless resumes and expressions of interest and need to make the right hiring decision. Consulting with a labor and employment attorney should be part of any risk mitigation process regarding the use of AI in hiring decisions. Find out how MehaffyWeber can help your business. Contact us today.