Large technology companies tend not to be big fans of government regulation. Recently, however, the president of Microsoft challenged that notion by writing a lengthy blog post calling for greater government scrutiny of facial recognition technology.
“We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology,” Bradford L. Smith wrote. “A world with vigorous regulation of products that are generally useful but potentially troubling is better than a world devoid of legal standards.”
Facial-recognition software is one of the many innovations enabled by artificial intelligence, and it has been hailed for its potential to increase security and screen out criminals and terrorists. There’s growing concern, however, that the software, along with AI in general, is highly susceptible to the biases of whoever programs it and could therefore unintentionally lead to greater discrimination in areas like recruiting and hiring.
A recent study led by an M.I.T. researcher, for example, found that facial recognition software from Microsoft and IBM was far more accurate at identifying lighter-skinned men than darker-skinned women. Several years ago, Google came under fire and was forced to apologize after it was discovered that its image-recognition photo app had labeled African Americans as “gorillas.”
These examples are especially disturbing because AI has been touted as a fairer way for companies to find talent. By using algorithms to identify people who are highly qualified for a certain job and whose social-media activity suggests they’d be open to a new opportunity, companies could avoid the pitfalls of biased recruiters and hiring managers who might balk at bringing on someone from a different background, race or gender.
Unfortunately, AI can be susceptible to what’s politely known as “algorithmic bias.”
“AI is only as good as the data it analyzes,” says Caitlin MacGregor, CEO of Plum, a company that’s developed hiring software designed to counteract human bias. “It’s garbage in, garbage out.”
She cites the example of a well-regarded AI solution that was designed to identify high performers via social media profiles. Yet when researchers “opened up the solution’s ‘black box,’ they discovered it was using criteria such as whether these people played lacrosse and tennis and read Harry Potter,” says MacGregor.
Plum uses a database of 24 trillion “human data points” to help identify candidates who are best-suited for a given role. Recruiters complete a six-minute survey created by industrial-organizational psychologists that’s designed to identify the core competencies for a given role. Job candidates then take a 25-minute assessment that’s designed to determine whether they possess those competencies.
“It’s using AI to replicate an expert system, rather than being a black box with low-quality data,” says MacGregor.
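To make that distinction concrete, here is a toy sketch of the general idea, not Plum’s actual system: the role’s core competencies and weights stand in for what the expert-designed survey produces, the candidate scores stand in for assessment results, and the software simply ranks candidates against those criteria. All names, weights and scores here are invented.

```python
# Toy illustration of scoring candidates against role competencies identified
# up front. Competency names, weights and scores are invented for this example.
role_competencies = {"adaptability": 0.9, "teamwork": 0.7, "innovation": 0.6}

candidates = {
    "Candidate A": {"adaptability": 0.8, "teamwork": 0.9, "innovation": 0.4},
    "Candidate B": {"adaptability": 0.5, "teamwork": 0.6, "innovation": 0.9},
}

def match_score(scores):
    # Weight each competency by how much the role requires it.
    total_weight = sum(role_competencies.values())
    return sum(scores[c] * w for c, w in role_competencies.items()) / total_weight

for name, scores in sorted(candidates.items(), key=lambda kv: match_score(kv[1]), reverse=True):
    print(f"{name}: {match_score(scores):.2f}")
```

The point of the illustration is that every criterion is visible and job-related by design, rather than inferred from whatever patterns happen to exist in historical data.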
Other vendors such as Koru and Pymetrics also use algorithms in various ways to help companies circumvent bias in the hiring process. Koru uses surveys to identify employees’ strengths and weaknesses and has its software identify people with the same traits, while Pymetrics uses a combination of gamification and neuroscience to identify people who may be the best fit for a certain job.
HireVue, which made its name as one of the earliest video-interviewing platforms, uses “emotion detection systems” to screen the faces of video interviewees, evaluating them against models it has created based on a company’s top-performing employees. Although the intent is to remove bias from the hiring process, critics have questioned whether this approach really does what HireVue says it can.
“[HireVue’s system] is alarming, because firms that are using such software may not have diverse workforces to begin with, and often have decreasing diversity at the top,” Meredith Whittaker, co-founder of New York University’s AI Now Institute and founder of Google’s Open Research group, told CNBC.
“And, given that systems like HireVue are proprietary and not open to review, how do we validate their claims to fairness and ensure that they aren’t simply ‘tech-washing’ and amplifying longstanding patterns of discrimination?”
Loren Larson, HireVue’s chief technology officer, told CNBC: “It is extremely important to audit the algorithms used in hiring to detect and correct for bias. No company doing this kind of work should depend only on a third-party firm to ensure that they are doing this work in a responsible way … it’s the responsibility of the company itself to audit the algorithms as an ongoing, day-to-day process.”
Companies such as IBM have responded to concerns about bias by making changes to their software. IBM recently unveiled a new dataset designed to train facial-recognition systems to recognize a wider range of skin tones. The dataset, which contains 36,000 images from Flickr Creative Commons, is intended to make facial recognition more accurate, the company said.
All companies that use AI for talent acquisition and management should do what they can to guard against bias, says Nathan Mondragon, HireVue’s chief IO psychologist.
“It’s not the algorithms that are biased, it’s the data that goes into them,” he says. “If people aren’t checking that, it could be a problem.”
He cites the example of data scientists who were trying to develop software that could correctly distinguish wolves from huskies. They thought they had succeeded, producing an algorithm that was 90 percent accurate in sorting wolves from huskies, until they examined what the algorithm was actually focusing on. It turned out to be classifying the animals based on whether there was snow in the background of the pictures it was analyzing, which had nothing to do with whether they were actually wolves or huskies, says Mondragon.
“If we’re not paying attention to the features the computer is flagging to show the difference between a good and bad performer, for example, then it could turn out it’s using factors such as racial characteristics, things that have nothing to do with job performance,” he says.
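That kind of check does not require exotic tooling. The sketch below is a hypothetical example, not any vendor’s actual pipeline: it uses scikit-learn’s permutation importance to see which inputs a screening model is actually leaning on, with made-up column names, including zip_code as a stand-in for a proxy variable that can track race or income.

```python
# Hypothetical check of which features a screening model actually relies on.
# The CSV file and column names are made up for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("candidates.csv")
features = ["problem_solving", "teamwork", "communication", "zip_code"]
X, y = df[features], df["top_performer"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops; a big drop
# for zip_code would suggest the model leans on a proxy, not job-related skills.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, importance in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {importance:.3f}")
```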
Although vendors should have the primary responsibility for ensuring the algorithms they’re using are fed good data, HR and talent-acquisition leaders can also be on the lookout for adverse impact, says Mondragon.
“Run the numbers, don’t just take things at face value,” he says. “Make sure that to get to ‘X,’ you’re not getting race, age and gender differences. It’s not that hard to run those calculations, but people who are doing it right will be more than happy to help you.”
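One common version of those calculations is the four-fifths (80 percent) rule used in U.S. adverse-impact analysis: compare each group’s selection rate with the highest group’s rate and flag anything below 0.8. The sketch below assumes a hypothetical file of screening results with race and advanced columns; the same check can be repeated for gender and age bands.

```python
# Four-fifths (80%) rule check for adverse impact. The file and column names
# are hypothetical; "advanced" marks applicants the screening tool moved forward.
import pandas as pd

df = pd.read_csv("screening_results.csv")

rates = df.groupby("race")["advanced"].mean()  # selection rate per group
best = rates.max()

for group, rate in rates.items():
    ratio = rate / best
    status = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} ({status})")
```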