What HR needs to know today about the EEOC AI guidance

As artificial intelligence becomes a table-stakes tool for employers looking to get ahead in the war for talent, HR leaders are revamping hiring strategies to incorporate the tech—and that can raise some ethical and legal questions. Just last month, the World Health Organization issued a statement urging AI users to consider how the technology could impact inclusion and wellbeing, while the creator of ChatGPT testified in a recent Senate committee hearing that such tools could cause “significant harm” if not well-regulated.


How can employers do their part to mitigate potential harm from emerging technology? That question drove a recent technical assistance document issued by the U.S. Equal Employment Opportunity Commission, which one employment lawyer says organizations must pay attention to.

Marissa Mastroianni, an employment member at national law firm Cole Schotz, explains that, most importantly, the document cautions employers that the EEOC will use its longstanding disparate impact analysis to consider whether employers that are using AI for hiring and employment decisions are compliant with Title VII of the Civil Rights Act of 1964.

EEOC Chair Charlotte A. Burrows recently commented that the technical assistance document, which is just one component of the EEOC’s Artificial Intelligence and Algorithmic Fairness Initiative, represents “another step in helping employers and vendors understand how civil rights laws apply to automated systems used in employment.”

In the document, Mastroianni says, the EEOC unequivocally confirms that improper use of AI could violate Title VII, outlining several examples of AI tools that could give rise to such violations:

  • Virtual assistants or chatbots that ask candidates about their qualifications and reject those who do not meet pre-defined requirements;
  • Resume scanners that give deference to applications using certain keywords;
  • Video interviewing software that evaluates candidates based on speech patterns and general facial expressions;
  • Testing software that scores applicants’ or employees’ personalities, cognitive skills, aptitudes or general “cultural fit” based on their performance on a test or game;
  • Employee monitoring software that rates employees on number of keystrokes or other factors.

While that is “not an exhaustive list of AI tools that could be used in the workplace,” Mastroianni says, “employers using any of the above tools should carefully review the assistance document to ensure compliance.” In that vein, the EEOC encourages application of the pre-existing Uniform Guidelines on Employee Selection Procedures to determine if any AI-related processes violate Title VII.

Marissa Mastroianni, Cole Schotz


Chief among those guidelines is the “four-fifths rule,” which the EEOC affirmed employers can use as a general benchmark in determining whether an AI selection tool has a disparate impact on the basis of race, color, religion, sex (including pregnancy, sexual orientation or gender identity) or national origin, as protected by Title VII.

This rule states that there may be evidence of discrimination if the selection rate for one group is substantially lower than that of another—specifically, a ratio of less than four-fifths, or 80%.

For example, Mastroianni says, if 80 white applicants and 40 Black applicants take a personality test used to screen applicants, and 48 of the white applicants (60%) and 12 of the Black applicants (30%) advance to the next round, the ratio of the two selection rates is 30/60, or 50%, which fails the four-fifths rule.
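The arithmetic behind the four-fifths guideline is simple enough to sketch in a few lines of code. The snippet below is an illustrative sketch (not an EEOC tool); the function name and structure are the author's own, applied to the example figures above.

```python
def four_fifths_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of group B's selection rate to group A's, where group A
    is assumed to be the group with the higher selection rate."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return rate_b / rate_a

# Example from the text: 48 of 80 white applicants advance (60%),
# 12 of 40 Black applicants advance (30%).
ratio = four_fifths_ratio(48, 80, 12, 40)
print(f"{ratio:.0%}")  # prints "50%"

# A ratio below 0.8 (80%) may be evidence of disparate impact.
assert ratio < 0.8
```

As the guidance notes, this is a rule of thumb rather than a legal threshold: a tool can pass the four-fifths check and still be discriminatory, or fail it without being so.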

“That being said,” she explains, “the EEOC also cautions that the ‘four-fifths’ rule may be inappropriate under certain circumstances and is not determinative.”

See also: How ChatGPT and other generative AI tools are transforming HR jobs

EEOC document: Employers can’t avoid liability by using a third-party AI vendor

Also, the document advises that employers cannot avoid legal liability by relying on a third-party vendor for AI selection tools, clearly stating that “employers may be held responsible for the actions of their agents, which may include entities such as software vendors, if the employer has given them authority to act on the employer’s behalf.”

“This may include situations where an employer relies on the results of a selection procedure that an agent administers on its behalf,” Mastroianni adds.

Generally speaking, she says, employers using AI in connection with their employment practices should proactively audit such tools, even if those tools are provided by third-party vendors, to assess any potential for discrimination.

“While not mentioned by the EEOC in its assistance document,” she says, “employers should conduct these audits with their legal counsel to further ensure compliance with applicable laws, and take advantage of the added benefit that such audit findings may then be protected under attorney-client privilege.”

Tom Starner
Tom Starner is a freelance writer based in Philadelphia who has been covering the human resource space and all of its component processes for over two decades. He can be reached at [email protected].