
AI for HR: Managing the risks of disparate impact discrimination

Kevin White and Daniel Butler
Kevin White is a partner and co-chair of the labor and employment practice at Hunton Andrews Kurth LLP. Daniel Butler is an associate at Hunton Andrews Kurth LLP.

It is no secret that artificial intelligence is increasingly marketed to human resource professionals as a panacea for a host of workplace tasks. Many AI tools are designed to streamline and enhance hiring, while others promise to eliminate bias and improve business processes for performance evaluations and assessments. Most AI tools can safely improve speed and efficiency, but there is no guarantee that a given tool actually eliminates human bias. Implemented hastily, AI can exacerbate existing discriminatory outcomes or create ones that did not previously exist. As a result, it is imperative that employers focus sharply on mitigating the risks inherent in the use of AI. At a minimum, employers should critically vet vendors and routinely audit the use of their AI tools.

Benefits of AI tools in the workplace

When managed properly, AI can improve workplace efficiency, enhance diversity, reduce human bias in performance evaluations and promotion assessments, and even contribute to employee morale. For example, businesses are increasingly relying on AI tools to review candidate resumes. At their most sophisticated, such tools recommend which candidates to interview and predict job success, all based on a resume. At the lower end, such tools simply filter out candidates whose credentials disqualify them from consideration (e.g., eliminate those who lack a required degree), as sketched in the example below.
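For illustration only, a lower-end screening filter of the kind described above might look like the following minimal sketch. The candidate records, field names and required degree are all hypothetical.

```python
# Minimal sketch of a "lower-end" screening filter that drops candidates
# lacking a required credential. All names and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    degrees: list  # e.g., ["BS Computer Science"]

def meets_degree_requirement(candidate, required):
    """Return True if the candidate holds the required degree."""
    return required in candidate.degrees

candidates = [
    Candidate("A. Rivera", ["BS Computer Science"]),
    Candidate("B. Chen", ["BA History"]),
]

qualified = [c for c in candidates
             if meets_degree_requirement(c, "BS Computer Science")]
print([c.name for c in qualified])  # -> ['A. Rivera']
```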


In the performance evaluation and assessment space, AI can curb employee favoritism and produce more predictable, uniform scoring systems that better withstand scrutiny when collective employment decisions are challenged. At their core, discrimination claims draw comparisons between employees within a protected group and similarly situated employees outside it. As a result, human resource departments need to focus on applying policies and procedures consistently across the board. Using AI to help with hiring, promotion, discipline and performance management decisions can help HR departments achieve that consistency.

Understanding the risks of AI technology

AI tools likely have no intent to discriminate. But under the law of disparate impact discrimination, that is not enough to avoid liability. In a disparate impact case, intent is irrelevant. The question is whether a facially neutral policy or practice disproportionately affects a particular protected group, such as a race, color, national origin, gender, religion or disability. If there is a statistically and legally significant impact, the employer must then justify the tool by showing it is job-related and consistent with business necessity. Even if a court finds that showing has been made, a plaintiff can still prevail by demonstrating that a less discriminatory alternative would feasibly meet the employer’s objectives.
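To make the threshold question concrete, a common first screen is the EEOC’s four-fifths rule: a group whose selection rate falls below 80% of the highest group’s rate is generally regarded as showing evidence of adverse impact. Here is a minimal sketch of that calculation, using entirely hypothetical applicant and selection counts.

```python
# Sketch of the EEOC four-fifths rule screen. A group's selection rate
# below 80% of the highest group's rate is generally treated as evidence
# of adverse impact. The applicant and selection counts are hypothetical.

def selection_rate(selected, applicants):
    return selected / applicants

rates = {
    "group_a": selection_rate(48, 80),  # 60% selected
    "group_b": selection_rate(12, 40),  # 30% selected
}

highest = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / highest
    status = "potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.0%}, impact ratio={impact_ratio:.2f} ({status})")
```

In practice, courts and the EEOC also look to measures of statistical significance, so the four-fifths ratio is a first screen rather than the end of the inquiry.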

A common way in which AI tools can cause a disparate impact is through homogeneous “training data.” Most AI algorithms rely on a set of example inputs to learn how to perform their tasks. If those inputs lack diversity, an unsophisticated AI tool may reproduce that lack of diversity in its outputs. For example, if an AI selection tool is programmed to select individuals similar to an employer’s highest-performing employees, and those employees are all of the same gender or race, there is a risk that the tool will replicate that homogeneity in its selections, as the toy example below illustrates.
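The following toy example shows the mechanism. A naive selector that scores candidates by similarity to past top performers will favor whatever profile those exemplars happen to share. The features and records here are entirely hypothetical.

```python
# Toy illustration of homogeneous "training data." A naive selector that
# scores candidates by similarity to past top performers favors whatever
# profile those exemplars share. All records are hypothetical.

top_performers = [
    {"school": "State U", "employment_gap_years": 0},
    {"school": "State U", "employment_gap_years": 0},
]

def similarity(candidate, exemplars):
    """Fraction of exemplar field values the candidate matches."""
    matches = sum(candidate[key] == ex[key] for ex in exemplars for key in ex)
    return matches / (len(exemplars) * len(exemplars[0]))

candidates = [
    {"school": "State U", "employment_gap_years": 0},
    {"school": "City College", "employment_gap_years": 2},  # e.g., a career break
]

# Because every exemplar shares one profile, the second candidate scores
# zero, even though school and resume gaps may proxy for protected traits
# rather than predict job performance.
for c in candidates:
    print(c["school"], similarity(c, top_performers))
```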



Steps to manage disparate impact risks

The Equal Employment Opportunity Commission has shown substantial interest in AI. In May 2022, it issued technical assistance guidance on the use of AI under the Americans with Disabilities Act. And on Jan. 31, 2023, it held a public hearing on employment discrimination and AI, calling the area a “new civil rights frontier.” Given the increasing use of AI and the EEOC’s interest in the topic, employers using such technologies must take steps to minimize the risks.

First, employers should critically vet AI vendors. Organizations typically cannot rely on a “the AI tool did it” defense and instead will be held liable for any unlawful discriminatory impact the tool has in the workplace. As a result, businesses should demand that AI vendors disclose whether the tool has been designed to mitigate bias and should insist on full insight into how the tool functions. If a vendor refuses to disclose sufficient information, employers should look elsewhere. Employers should also carefully review contracts with AI vendors to understand the limits of any indemnity provisions, representations and warranties.

Second, employers should audit the AI tool before implementation and continue to do so throughout its use. In fact, such auditing is a legal requirement in certain jurisdictions. For example, New York City prohibits employers from using AI tools unless the tool has undergone a bias audit within the preceding year and a summary of the most recent audit’s results is publicly available on the employer’s website. Moreover, auditing should not be treated as a perfunctory task. If an audit reveals an unlawful disparate impact, the company should identify and modify the criterion or criteria driving that impact, as in the sketch below.
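As one sketch of what a criterion-level audit might look like, the following applies each screening criterion in isolation and compares group pass rates to locate which criterion drives a disparity. The criteria, group labels and applicant records are hypothetical.

```python
# Sketch of a criterion-level bias audit: apply each screening criterion
# in isolation and compare pass rates across groups to locate which
# criterion drives a disparity. Criteria and records are hypothetical.

applicants = [
    {"group": "a", "has_degree": True,  "gap_years": 0},
    {"group": "a", "has_degree": True,  "gap_years": 1},
    {"group": "b", "has_degree": True,  "gap_years": 3},
    {"group": "b", "has_degree": False, "gap_years": 0},
]

criteria = {
    "requires_degree": lambda r: r["has_degree"],
    "no_long_gap": lambda r: r["gap_years"] <= 2,  # may proxy for caregiving
}

groups = sorted({r["group"] for r in applicants})
for name, passes in criteria.items():
    rates = {}
    for g in groups:
        pool = [r for r in applicants if r["group"] == g]
        rates[g] = sum(passes(r) for r in pool) / len(pool)
    impact_ratio = min(rates.values()) / max(rates.values())
    print(f"{name}: pass rates={rates}, impact ratio={impact_ratio:.2f}")
```

A criterion flagged this way is a candidate for modification or removal, subject to the job-relatedness analysis discussed above.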

Finally, companies need to stay apprised of developments in the law. Because this is an emerging field, an ounce of prevention can go a long way toward ensuring your company does not become a “test case” in AI disparate impact discrimination litigation or enforcement actions. When in doubt, human resource professionals should consult qualified labor economists and outside counsel to ensure their use of AI does not violate the law.