HR Tech 2019: How to use machine learning to reduce recruiting bias
Whenever Google uses artificial intelligence in any of its products or projects, it adheres to seven principles, according to Dmitri Krakovsky, a vice president at Google.
One of the most important ones, he said during a session titled “Learning to Work with AI In Recruiting” at the HR Tech Conference, is #2: Avoid creating or reinforcing unfair bias.
As the concepts of bias and inclusivity take on more importance in the world of recruiting, Krakovsky said, machine learning can shape those outcomes for workers, for better or worse.
“As we are solving [the] problem of people finding jobs and jobs finding people,” he said, “it’s very important to think about bias, fairness and inclusion and how algorithms can actually enforce bias or make other problems worse.”
Indeed, while we may think of machine learning as an inherently unbiased process, he said, “even without any malice or bad intent, there’s many opportunities for creating biases,” including reporting bias, sampling bias, latent bias and interaction bias.
So when companies create their own data sets and models to improve their recruiting process, it’s important that they collect diverse data so the resulting models are truly representative.
“The sets of data you are training on need to be based on the problem you’re trying to solve,” he said. “If you’re trying to solve specific things, you need specific data. Otherwise the model won’t learn.”
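One way to act on that advice, as a minimal sketch (the group labels, reference shares, and tolerance here are hypothetical, not from the session), is to compare a training set’s group proportions against a reference population before fitting a model, flagging possible sampling bias:

```python
from collections import Counter

def representation_gaps(samples, reference_shares, tolerance=0.05):
    """Compare a dataset's group shares to reference population shares.

    samples: list of group labels, one per training example.
    reference_shares: dict mapping group label -> expected share.
    Returns groups whose actual share deviates from the reference
    by more than `tolerance` (a possible sign of sampling bias).
    """
    counts = Counter(samples)
    total = len(samples)
    flagged = {}
    for group, expected in reference_shares.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            flagged[group] = (actual, expected)
    return flagged

# Hypothetical example: a resume dataset that over-represents one group.
data = ["A"] * 80 + ["B"] * 20
print(representation_gaps(data, {"A": 0.6, "B": 0.4}))
# {'A': (0.8, 0.6), 'B': (0.2, 0.4)}
```

A check like this only catches visible imbalance; latent and interaction biases, which Krakovsky also named, require evaluating the trained model itself.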
Krakovsky also said companies should “proactively” try to break their models.
“Can you trick the model into producing unfair outcomes? How do you fix these scenarios?”
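One concrete way to stress-test a model along these lines, sketched here with made-up groups and decisions (the 0.8 threshold is the “four-fifths rule” heuristic from U.S. adverse-impact guidelines, not something cited in the session), is to compare selection rates across groups:

```python
def adverse_impact_ratio(decisions):
    """Compute each group's selection rate and the ratio of the
    lowest rate to the highest. Under the four-fifths heuristic,
    a ratio below 0.8 is a common red flag for adverse impact.

    decisions: dict mapping group label -> list of 0/1 outcomes.
    """
    rates = {g: sum(d) / len(d) for g, d in decisions.items()}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical screening results from a resume-ranking model.
rates, ratio = adverse_impact_ratio({
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6 of 8 selected
    "group_b": [1, 0, 0, 1, 0, 0, 0, 1],  # 3 of 8 selected
})
print(rates)            # {'group_a': 0.75, 'group_b': 0.375}
print(round(ratio, 2))  # 0.5 -> below 0.8, worth investigating
```

Running this kind of audit on deliberately adversarial or edge-case inputs is one way to “trick the model” before it reaches production, in the spirit of Krakovsky’s advice.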
By getting AI models right, Krakovsky said, organizations will “have the potential to be transformational in promoting inclusivity and diversity in recruitment.”