How to Balance Artificial Intelligence and Decision-Making in HR
This is not the first time we have discussed artificial intelligence in the Inside HR Tech column, and it quite likely won't be the last. Judging by my unscientific but still reasonably accurate measure of an HR-technology topic's importance, namely how much attention it received at recent HR Technology Conferences, I reckon that AI has emerged over the last five years as the single most discussed topic in HR tech. If it seems to you that everyone is talking or writing about AI in HR and in the workplace, you are pretty much correct. Even so, the topic feels so important for HR, for employees and for workplaces that it may not (yet) be possible to pay too much attention to AI. That's the conclusion I reached recently while reading about some of the latest applications of AI technology in the workplace. Here are three themes on the issue.
1. The Promise of Artificial Intelligence
My favorite explanation of AI and how it can be applied by organizations comes from the excellent book Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans and Avi Goldfarb. In the book, the authors lay out a simple model for decision-making and illustrate where and how AI tools affect decision-making. In short, a decision is simply choosing the best prediction of what might happen next, given a set of data and inputs, and applying our own judgment. They further explain that the power and benefit of AI technologies is that AI lowers the cost and increases the number of predictions available for us to select from.
Let's take a common HR example: ranking a set of 1,000 applicants for an open job, with the decision being which applicants to "short list" for additional interviews. A skilled recruiter could review all 1,000 applicants and create a short list of the top eight to 10, but it would be extremely time-consuming. And as we have learned, even the best HR and recruiting professionals can sometimes miss important traits and characteristics, can fail to think expansively or creatively about applicants, and can even introduce unconscious bias into their evaluations and assessments.
Now let's imagine this same process of ranking 1,000 applicants and creating a short list, supported by a powerful AI-technology solution. The AI can assess and rank the applications in a fraction of the time it would take a person, it can make connections between a person's credentials and key success factors for the job, and it can ignore information about applicants that is inconsequential but is often where bias enters the process. In this case, the AI tool allows the recruiter to bypass the steps that are labor-intensive, lengthy and arguably better handled by the technology. The recruiter can then spend all of his or her time on the short-listed applicants, conducting meaningful interviews and determining which would make the best new hire. In an ideal world, this is a perfect example of how AI and people can work together to make the best HR decisions.
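For readers who want to picture the division of labor described above, here is a minimal sketch in Python of the "AI predicts, the human judges" split. Everything in it is hypothetical: a real system would use a trained model rather than this stand-in weighted score, and the field names and weights are invented for illustration only.

```python
# A minimal, hypothetical sketch of AI-assisted shortlisting: the tool
# ranks all applicants cheaply; the recruiter applies judgment to the
# short list. The scoring model is a stand-in, not a real product.

def score_applicant(applicant, weights):
    """Predict a fit score from job-relevant signals only.

    Fields like name, age or address are deliberately never read here,
    which is one way a tool can skip a common entry point for bias.
    """
    return sum(weights[k] * applicant.get(k, 0) for k in weights)

def short_list(applicants, weights, top_n=10):
    """Rank every applicant by predicted fit and return the top few.

    The output is a starting point for human interviews, not a hiring
    decision: the recruiter's judgment is applied to this list, not
    replaced by it.
    """
    ranked = sorted(applicants,
                    key=lambda a: score_applicant(a, weights),
                    reverse=True)
    return ranked[:top_n]

# Hypothetical job-relevant signals, already normalized to the 0..1 range.
weights = {"years_experience": 0.5, "skills_match": 0.3, "assessment": 0.2}
applicants = [
    {"id": i,
     "years_experience": (i % 10) / 10,
     "skills_match": (i % 7) / 7,
     "assessment": (i % 5) / 5}
    for i in range(1000)
]

finalists = short_list(applicants, weights, top_n=10)
print(len(finalists))  # prints 10: the recruiter interviews only these
```

The key design choice mirrors the article's point: the expensive part (scoring 1,000 files) is automated, while the final decision stays with a person who reviews only the short list.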
2. What if AI Goes Too Far?
But, if the above example is truly an ideal, we can also easily imagine a scenario where we allow the AI tools to go too far and take on too much of the responsibility for HR decision-making about people. What if a recruiter permits the AI tools to create ever-shorter lists of candidates, thus missing the opportunity to interview some compelling people? What if the AI tool's models are allowed to screen candidates and applications on an increasingly narrow set of characteristics and traits, also effectively artificially (pardon the pun) reducing the number of candidates who get to interview? Or finally, what if HR decides that the AI tool is so good at identifying the best candidate that it simply skips the short list and instructs the tool to send a job offer directly to the top-ranked candidate?
RELATED: The HR Technology Conference & Exposition® will be held Oct. 1-4 at the Venetian in Las Vegas.
Do these scenarios seem unlikely? Maybe, but reports are already emerging that some applications of AI technology in the workplace are crossing this fine line between technology on one side and human judgment and agency on the other. In one example, a report from the NNY360 site described how an AI-powered "assistant" works with customer-service representatives in a large call center.
“When Conor Sprouls, a customer-service representative in the call center of insurance giant MetLife, talks to a customer over the phone, he keeps one eye on the bottom-right corner of his screen. There, in a little blue box, AI tells him how he’s doing. Talking too fast? The program flashes an icon of a speedometer, indicating that he should slow down.
Sound sleepy? The software displays an ‘energy cue,’ with a picture of a coffee cup. Not empathetic enough? A heart icon pops up.
Sprouls and the other call-center workers at his office in Warwick, Rhode Island, still have plenty of human supervisors. But the software on their screens—made by Cogito, an AI company in Boston—has become a kind of adjunct manager, always watching them. At the end of every call, Sprouls’ Cogito notifications are tallied and added to a statistics dashboard that his supervisor can view. If he hides the Cogito window by minimizing it, the program notifies his supervisor.”
Does the application of AI in this example seem like it has gone too far, by effectively placing on every worker's screen a constant, always-on, never-silent combination of supervisor, colleague and coach? Some would argue yes. But the very fact that these kinds of applications and use cases of AI tech exist, and are growing, reminds us that HR leaders need to think about these technologies and issues with increasing attention and concern.
3. Finding a Balance Between High Tech and High Touch
These examples of applying advanced technologies like AI to support HR and individual decision-making remind us of something that sounds simple and obvious but will become more important as AI is infused into more HR-tech solutions and incorporated into more HR processes. And that is this: We must endeavor to find, and even create, the appropriate balance between the responsibility and influence we grant technology to inform decisions and the responsibility and ultimate authority over those decisions that we reserve for ourselves. We can't be beguiled into ceding more of the decision-making processes in HR and talent to technologies until we are certain that these technologies truly are advancing our practice of HR and, more importantly, not diminishing what is best about HR: our understanding of, concern for and empathy toward people.
In HR, we are not as lucky as our colleagues in, for example, finance or supply-chain management, whose decisions are far less centered on people. For the CFO, the choice of which set of short-term securities will generate the best return on the organization's excess cash at an acceptable level of risk can be granted to an AI tool without much concern about the impact of a bad choice. But in HR we are not so fortunate.
Our decisions, all of them eventually, impact lives. Sometimes in small ways, as when we use an AI tool to set worker schedules, a tool that can't "know" how important it might be for a retail worker to have an afternoon off to see his child's soccer game. And other decisions can impact people in much larger and more significant ways, as when we decide to deploy a tool like the call-center AI, which hovers over reps, tracking their work, constantly making suggestions and reporting their actions. It is our responsibility, in the end, to know the people impact of our decisions, even the ones we allow the AI tools to make.