AI: Next Big Thing or Next Big Lawsuit?

In a world in which home speaker systems double as conversational computers capable of ordering groceries to your doorstep with a simple voice command, society’s interaction with artificial intelligence has never been greater. AI can analyze speech and text, recognize and examine faces, engage in predictive analysis and learn from its experiences. These advancements have led vendors to develop AI-based HR programs.

HR departments can use AI to quickly analyze resumes, draft effective job descriptions and even conduct first-round interviews by assessing candidates’ typed or video-recorded responses to screening questions. HR chatbots can instantaneously answer employees’ questions about company policies and benefits and direct employees to relevant company resources. AI programs can also use employee data to analyze performance indicators, evaluate job satisfaction, and predict retention and attrition.


AI, however, has limitations. For one, AI learns from the data it is fed, and that data often reflects implicit biases from the real world that AI may perpetuate. For example, an AI program may disproportionately select resumes of white males over resumes of more diverse candidates for an executive position if the data it has learned from consist primarily of white male executives. Although engineers are implementing measures to counteract bias and discrimination, AI is not yet a truly objective decision-maker.

Additionally, there are subtleties to human interaction that AI cannot detect. Face-to-face conversations convey details that allow HR professionals to make more holistic employment decisions. Further, people may interact differently with AI programs, meaning AI may not be assessing an individual’s true skills or emotions.

AI can provide significant value to HR departments, but it is not without flaws. HR professionals should therefore consider the following:

Create a plan. AI’s use should be considered in connection with a company’s industry, size and culture. Employers should decide what HR-related tasks the company wants AI’s assistance with and research programs available to accomplish those goals.

Vet the vendors. Companies should engage in a vetting process of AI vendors, including asking them to explain how their algorithms were programmed, what data the program uses, and what safety measures the program has to combat bias and discrimination. Such engagement will better ensure that the AI program can produce quality outcomes.

Maintain boundaries. Although AI can be adept at what it does, there are some areas where its use can be inappropriate. Sensitive topics such as disability accommodation, harassment, discrimination or benefits enrollment, which could generate major legal consequences if handled improperly, usually require active engagement by HR professionals.


Be transparent. Employers should consider disclosing the use of AI programs to candidates and/or employees. Even where disclosure is not mandated by law, candidates and employees may be uncomfortable learning that AI has been analyzing their information without their knowledge.

Given the technology's newness, there is little regulation of AI's use in the workplace. The use of AI in employment, however, is sure to develop through litigation, legislation or both. By proactively assessing AI's risks and benefits, employers will be better prepared for any future legal developments.

Paul Salvatore
Paul Salvatore is HRE’s Legal columnist. He is a member of Proskauer's executive committee and a former co-chair of its global labor and employment law department. He can be emailed at [email protected].