Do Humans Really Have a Bias Against Algorithms?
What’s the accepted definition of artificial intelligence? What is “algorithmic aversion”? And will we ever learn to fully trust technology? These were some of the questions addressed during a spirited and wide-ranging discussion among experts at Recruiting Trends & Talent Tech LIVE!, in a session titled “Where’s the Humanity? Is Recruiting Tech Making Us Less Human?”
Moderated by conference chair Elaine Orler, the panel included Ben Eubanks, principal analyst at Lighthouse Research and Advisory; Madeline Laurano of Aptitude Research Partners; Erin Spencer, senior researcher at Bersin-Deloitte; and Kyle Lagunas, research manager at IDC.
Orler started off the discussion with a loaded question: What is AI?
“There’s no universally accepted definition of AI,” said Eubanks.
“We’re seeing different categories of AI—machine learning, recruitment process automation,” said Laurano.
The panelists noted that the definition of AI—as it pertains to recruiting—has been unclear, with vendors often labeling solutions that automate certain recruitment tasks as “AI.”
Orler followed up by asking the panel to define algorithmic aversion.
“There’s a study I wrote about which found that humans have a bias against algorithms,” said Eubanks. “They’re less forgiving of mistakes made by algorithms than they are of human-made mistakes.”
People are often upset when AI technology reveals patterns or offers answers that make them uncomfortable, said Spencer.
“They want confirmation of things they already believe in,” she said.
Even though algorithms have generally been shown to produce more accurate assessments than humans, many people still prefer to “go with their gut” when making hiring decisions, said Lagunas.
“That may not be the best way to hire, obviously, and algorithms seemed like the answer to solving longstanding problems in recruiting,” he said. “In fact, algorithms may not be the answer.”
Humans need to have a solid understanding of how a given solution that uses algorithms works, said Lagunas.
“We have the responsibility to try and comprehend how a recommendation was generated, what data was used,” he said.
Companies such as Amazon should serve as case studies for what can go wrong when algorithms are used for hiring, said Eubanks.
“Amazon built an algorithm based on its past hiring—and it turned out the algorithm was recommending mostly men for open positions,” he said. “But that was the data they were feeding it. An algorithm is not smart enough to understand that this is not a good outcome.”
When using algorithms to select the best candidates to send on to hiring managers, Laurano said, “it’s important to remember that what you’re feeding them are recommendations based on an algorithm that’s based on data.”
Orler asked the panel about some common misconceptions.
Laurano said there are too many recruitment-tech startups run by people with little to no background in actual recruiting who, nevertheless, think they can make lots of money in the industry.
“I call them ‘two guys in skinny jeans’—they look at the talent-acquisition landscape today, which is flush with venture capital, and they move into the space without enough data,” she said.
Orler, who worked as a recruiter before launching her consulting career 20 years ago, said it appears that too many companies don’t conduct enough due diligence before implementing new recruiting tools. Proper due diligence can help companies avoid choosing the wrong software and all the headaches that accompany it.
“I implemented Resumix way back in 1993,” she said. “We’ve had tech in recruiting for a long time, and it points to the importance of due diligence.”