Do Humans Really Have a Bias Against Algorithms?

What’s the accepted definition of artificial intelligence? What is “algorithmic aversion”? And will we ever learn to fully trust technology? These were some of the questions addressed during a spirited and wide-ranging expert panel at Recruiting Trends & Talent Tech LIVE!, in a session titled “Where’s the Humanity? Is Recruiting Tech Making Us Less Human?”

Moderated by conference chair Elaine Orler, the panel included Ben Eubanks, principal analyst at Lighthouse Research and Advisory; Madeline Laurano of Aptitude Research Partners; Erin Spencer, senior researcher at Bersin-Deloitte; and Kyle Lagunas, research manager at IDC.

Orler started off the discussion with a loaded question: What is AI?

“There’s no universally accepted definition of AI,” said Eubanks.

“We’re seeing different categories of AI–machine learning, recruitment process automation,” said Laurano.

The panelists noted that the definition of AI–as it pertains to recruiting–has been unclear, with vendors often labeling solutions that automate certain recruitment tasks as “AI.”

Orler followed up by asking the panel to define algorithmic aversion.

“There’s a study I wrote about which found that humans have a bias against algorithms,” said Eubanks. “They’re less forgiving of mistakes made by algorithms than they are of human-made mistakes.”

People are often upset when AI technology reveals patterns or offers answers that make them uncomfortable, said Spencer.

“They want confirmation of things they already believe in,” she said.

Even though algorithms have been shown to generate more accurate assessments, in general, than humans, many people still prefer to “go with their gut” when making hiring decisions, said Lagunas.

“That may not be the best way to hire, obviously, and algorithms seemed like the answer to solving longstanding problems in recruiting,” he said. “In fact, algorithms may not be the answer.”

Humans need to have a solid understanding of how a given solution that uses algorithms works, said Lagunas.

“We have the responsibility to try and comprehend how a recommendation was generated, what data was used,” he said.

Companies such as Amazon should serve as case studies for what can go wrong when algorithms are used for hiring, said Eubanks.

“Amazon built an algorithm based on its past hiring–and it turned out the algorithm was recommending mostly men for open positions,” he said. “But that was the data they were feeding it. An algorithm is not smart enough to understand that this is not a good outcome.”
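To make the mechanism Eubanks describes concrete, here is a minimal, purely hypothetical sketch (not Amazon's actual system, and the feature names are invented for illustration): a simple classifier trained on historically skewed hiring labels learns a large positive weight on a gender proxy and will reproduce that skew in its recommendations.

```python
# Hypothetical illustration only: a model trained on historically skewed
# hiring data learns to reproduce that skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 0: a proxy for gender (1 = "male-coded" resume signal, 0 = otherwise).
# Feature 1: an actual skill score, unrelated to gender.
gender_proxy = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)
X = np.column_stack([gender_proxy, skill])

# Historical "hired" labels: past decisions favored male-coded resumes,
# so the label correlates with the proxy even after accounting for skill.
hired = (0.8 * gender_proxy + 0.5 * skill + rng.normal(0, 0.5, n)) > 0.6

model = LogisticRegression().fit(X, hired)
print("Learned weights [gender_proxy, skill]:", model.coef_[0])
# The weight on the gender proxy comes out large and positive: the model has
# encoded the historical bias and will keep favoring "male-coded" candidates.
```

The point of the sketch is the one Eubanks makes: the algorithm is only optimizing against the labels it is given, and nothing in the training process flags that outcome as undesirable.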

When using algorithms to select the best candidates to send on to hiring managers, Laurano said, “it’s important to remember that what you’re feeding them are recommendations based on an algorithm that’s based on data.”

Orler asked the panel about some common misconceptions.

Laurano said there are too many recruitment-tech startups run by people with little to no background in actual recruiting who, nevertheless, think they can make lots of money in the industry.

“I call them ‘two guys in skinny jeans’–they look at the talent-acquisition landscape today, which is flush with venture capital, and they move into the space without enough data,” she said.

Orler, who worked as a recruiter before launching her consulting career 20 years ago, said it appears that too many companies don’t conduct enough due diligence before implementing new recruiting tools. Proper due diligence can help companies avoid choosing the wrong software and all the headaches that accompany it.

“I implemented Resumix way back in 1993,” she said. “We’ve had tech in recruiting for a long time, and it points to the importance of due diligence.”

Orler’s last question to the panel was, “Looking ahead to the next two years, what are you most excited about, and what are you most concerned about?”

“I’m not sure what I’m afraid of, but I’m very excited about a new tech category called ‘insights,’ which can give you specialized insights into areas such as which employees consistently provide the best referrals,” said Eubanks.

Laurano said she’s excited by the assessment space, which is seeing “big changes.”

“We’re seeing game-based assessments, shorter assessments, more candidate-friendly assessments,” she said.

As for what concerns her most, Laurano again cited unqualified vendor startups. “I’m scared that you have all these startups coming into the market with an exit strategy in mind–as in, being acquired by a bigger vendor–but without a commitment to helping you solve your problems.”

Lagunas said he was most concerned that “we’re not going to learn from our mistakes,” such as failing to validate the data used by algorithms to make hiring recommendations.

“We have to hold ourselves accountable, because no one else is going to,” he said.

Andrew R. McIlvaine
Andrew R. McIlvaine is a former senior editor with Human Resource Executive®.