AI: Next Big Thing or Next Big Lawsuit?

By Paul Salvatore | June 4, 2018 • 3 min read
Paul Salvatore is HRE’s Legal columnist. He is a member of Proskauer's executive committee and a former co-chair of its global labor and employment law department. He can be emailed at psalvatore@proskauer.com.

In a world in which home speaker systems double as conversational computers capable of ordering groceries to your doorstep with a simple voice command, society’s interaction with artificial intelligence has never been greater. AI can analyze speech and text, recognize and examine faces, engage in predictive analysis and learn from its experiences. These advancements have led vendors to develop AI-based HR programs.

HR departments can use AI to quickly analyze resumes, draft effective job descriptions and even conduct first-round interviews by assessing candidates’ typed or video-recorded responses to screening questions. HR chatbots can instantaneously answer employees’ questions about company policies and benefits and direct employees to relevant company resources. AI programs can also use employee data to analyze performance indicators, evaluate job satisfaction, and predict retention and attrition.

While AI can perform many HR tasks, its use presents significant challenges. Despite the perception that computers are more objective than humans, AI is prone to bias and discrimination in employment decisions. The reasons are twofold. First, AI is only as objective as the engineers who coded it; their intrinsic biases can make their way into the code and surface in the program's analyses.
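To illustrate the first point, consider a hypothetical screening heuristic, not drawn from any real product: a hard-coded penalty for employment gaps looks neutral, yet it disproportionately screens out candidates, such as caregivers, who stepped away from the workforce.

```python
# Hypothetical resume-screening rule; the gap penalty is an
# engineer-chosen assumption, not an objective measure of ability.
from dataclasses import dataclass

@dataclass
class Resume:
    years_experience: float
    employment_gap_years: float

def screening_score(resume: Resume) -> float:
    # A facially neutral heuristic that embeds the coder's assumptions.
    return resume.years_experience - 3.0 * resume.employment_gap_years

# Two equally experienced candidates; one took two years of caregiving leave.
print(screening_score(Resume(years_experience=10, employment_gap_years=0)))  # 10.0
print(screening_score(Resume(years_experience=10, employment_gap_years=2)))  # 4.0
```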

Second, AI learns from the data it is fed. The data, however, often include implicit biases from the real world that AI may perpetuate. For example, an AI program may disproportionately select resumes of white males over resumes of more diverse candidates for an executive position if the data it has learned from include primarily white male executives. Although engineers are implementing measures to counteract bias and discrimination, AI is not yet a truly objective decision-maker.
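A minimal sketch of the second point, using entirely synthetic data and an assumed scikit-learn setup: a model trained on historical hiring decisions that favored one group learns to reproduce that preference, even for candidates of identical skill.

```python
# Minimal sketch: a classifier trained on skewed historical hiring data
# reproduces the skew. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(0.0, 1.0, n)   # genuine qualification signal
group = rng.integers(0, 2, n)     # 0 = historically favored group

# Past hires depended on group membership, not just skill.
hired = ((skill + 2.0 * (group == 0)) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill but different group membership:
candidates = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(candidates)[:, 1])
# The favored-group candidate scores far higher despite equal skill.
```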

Additionally, there are subtleties to human interaction that AI cannot detect. Face-to-face conversations convey details that allow HR professionals to make more holistic employment decisions. Further, people may behave differently when they know they are interacting with an AI program, so the program may not be assessing an individual's true skills or emotions.

AI can provide significant value to HR departments—but it is not without flaws. HR professionals should therefore consider the following:

Create a plan. AI's use should be considered in light of a company's industry, size and culture. Employers should decide which HR tasks they want AI to assist with, then research the programs available to accomplish those goals.

Vet the vendors. Companies should thoroughly vet AI vendors, asking them to explain how their algorithms were programmed, what data their programs use and what safeguards they include to combat bias and discrimination. Such engagement will help ensure that the AI program can produce quality outcomes.

Maintain boundaries. Although AI can be adept at what it does, there are some areas where its use can be inappropriate. Sensitive topics such as disability accommodation, harassment, discrimination or benefits enrollment, which could generate major legal consequences if handled improperly, usually require active engagement by HR professionals.
