The debate over artificial intelligence’s role in HR–from recruiting to workforce planning to performance–has become moot: There’s no doubt that AI has arrived and is expanding rapidly in the HR space. But, not so fast, some experts say. While AI represents a fantastic opportunity to drive HR success (and by extension, bottom-line growth), ethical issues tied to AI represent a potential dark side of these technologies.
The good news is, chief HR and people officers can successfully navigate this rapidly changing trend by steering clear of those ethical speed bumps in the first place, taking a smart, steady, planned approach to head off negative outcomes.
However, surveys show that HR leaders are not sure they have that situation under control quite yet.
For example, Deloitte’s 2019 Global Human Capital Trends survey found that 22% of 10,000 respondents in 119 countries are using AI within their organizations; plus, 81% predicted growth in the use of AI. Yet, in the same survey, a mere 6% said they are “very ready” to address the impact (ethics issues included) of AI and related solutions–including cognitive technologies and robotic-process automation (RPA)–on their workforces.
Also, a recent joint report from Willis Towers Watson and the Society for Human Resource Management found that 36% of chief people officers say they are prepared to think about how technology such as AI can be used to execute work in the future–but only 26% report they have the technical acumen to evaluate these new, growing technologies.
Clearly, the rise of AI and machine learning is creating both demand and dissonance among HR leaders globally. It seems AI can be a true HR savior or, on the flip side, a slippery slope leading to potentially damaging consequences. Execution is critical.
Define the Goals
By and large, many organizations are “horribly unprepared for answering the question around how to ethically use the data they collect for effective AI use, for a variety of reasons,” says Brian Kropp, group vice president with Gartner.
For one thing, Kropp says, many HR leaders are not really thinking about this challenge to the degree they should. He offers the example of business leaders, HR or otherwise, trying to use the data that’s being collected to make decisions outside the scope of what the information was intended for.
For instance, Kropp encountered one company that had been tracking when employees come in and out of the building, with the original goal of trying to figure out what sort of real-estate footprint it needed. As it turned out, at the same time, executives were looking to trim headcount. One of the business leaders figured it was OK to use the data to root out employees who were coming in late or leaving early. In essence, those employees could be expendable because they weren’t committed to the organization. That strategy didn’t happen (HR stepped in and stopped it), but it makes a clear point about ethical considerations when it comes to data use.
“It’s not that those business leaders were mean, terrible people,” Kropp explains. “They’re smart, creative leaders trying to better understand their business and using this data to try to help make decisions. But things can turn out quite differently in a case like this.”
In another example, Kropp says, companies might scrape internal communications data using AI to try to gauge employee sentiment–on its face, a great idea. But if they say they’re keeping the data confidential and anonymous, what happens when, during data collection, an employee reports internally being sexually harassed at work?
HR leaders are in a bind because they feel they should investigate the claim, yet they also said data collection would be confidential and not attributed to any individual.
“What is the appropriate ethical use of that data?” Kropp asks. “That’s where ethical violations will occur. It’s important to be certain you’re not doing anything to cause non-intended results or skewed or biased results, which will ruin efforts to use the data for good.”
Strive for Transparency
Ravin Jesuthasan, author and managing director at Willis Towers Watson, says that, while AI certainly has the potential to transform and reinvent HR, it is imperative that HR leaders continue to enhance their digital acumen–which includes having a detailed understanding of the mechanics and consequences (intended or otherwise) of AI.
“AI needs data to power it, but the data need to be bias-free and ethically acquired,” he says, citing Unilever’s use of AI in talent acquisition as a prime example. “They were actually able to reduce the bias in their recruiting by explicitly ensuring that addresses and hobbies were not considered by their algorithm. HR needs to create the space to learn and practice in this domain.”
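The technique Jesuthasan attributes to Unilever–withholding bias-prone attributes such as addresses and hobbies from the algorithm–can be sketched in a few lines. This is a minimal, hypothetical illustration; the field names and the toy skills-based scorer are assumptions, not Unilever's actual system.

```python
# Hypothetical sketch: strip bias-prone attributes from candidate records
# before any scoring model sees them. Field names are illustrative.
EXCLUDED_FIELDS = {"name", "address", "hobbies", "date_of_birth"}

def sanitize_candidate(record: dict) -> dict:
    """Return a copy of the record with bias-prone fields removed."""
    return {k: v for k, v in record.items() if k not in EXCLUDED_FIELDS}

def score_candidate(record: dict, required_skills: set) -> float:
    """Toy scorer: fraction of required skills the candidate lists."""
    features = sanitize_candidate(record)
    skills = set(features.get("skills", []))
    return len(skills & required_skills) / len(required_skills)

candidate = {
    "name": "A. Example",
    "address": "123 Main St",
    "hobbies": ["golf"],
    "skills": ["python", "sql"],
}
# Only the skills field influences the score; address and hobbies are ignored.
print(score_candidate(candidate, {"python", "sql", "excel"}))
```

The point of the design is structural: because exclusion happens before scoring, no downstream model can weigh the withheld attributes, even inadvertently.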
It is also critical that HR embraces a mindset of perpetual reinvention, including a culture of continuous learning and experimentation, he adds.
“HR needs to make digital enablement part of its core capabilities and not a ‘hobby,’ ” Jesuthasan says. “This means dedicated roles and continuous market scanning to understand the latest developments in technology and how they might be leveraged by the organization and HR.”
Meg Bear, senior vice president of products at SAP SuccessFactors, says AI-powered technology has been rapidly growing over the last several years, but employers are just scratching the surface, with most still experimenting.
Bear cites the example of sales teams accessing real-time information and guided assistance to help them close deals. Also, business leaders can use real-time comprehensive business data (sales, finance, operations) to predict outcomes–leading to better-informed decisions on where to invest time and money. And data will not only predict outcomes but can demonstrate results, making it possible to test and learn at scale.
Ultimately, she says, AI will help HR professionals directly impact business results in a way that aligns with the new pace of competition. But Bear also issues a warning: AI should be leveraged to augment the human experience–not replace it.
“Technology is a force multiplier, and AI is an exciting new capability,” she says, noting that SAP SuccessFactors is working with global customers to make sure it applies the right technology to improve business outcomes. “We are mindful of both the benefits and risks of AI, so we have established ethical guidelines to guide us and are always focused on the importance of reliability, security, privacy and quality.”
According to Montra Ellis, senior director of product innovation at Ultimate Software, AI–in the short term–will likely have the greatest impact on recruiting, with recruiters looking to cast the widest possible net to find diverse candidates. On the other hand, they don’t want to “drown” in applicants who don’t match a candidate profile.
Despite recent examples to the contrary (such as Amazon’s now-scrapped AI tool that discriminated against women), Ellis says she believes that software, when deployed correctly, is critical to addressing hiring bias. One of the main issues with AI-assisted recruiting today, she says, is that identifying a “good” candidate involves evaluating both objective and subjective skills. Unfortunately, biases–conscious or unconscious–exist and can creep into the examination of a candidate’s resume or expertise.
To truly move beyond potential biases, Ellis says, companies must incorporate third-party software tools that have access to millions upon millions of aggregated data points–from all types of industries, companies and geographies–to dilute individual company or industry bias and introduce diversity into the training set.
“When it comes to the ethics of AI, one of the most important actions a company can take is to strive for clarity and transparency on the impact AI software will have on their employees and customers,” Ellis says. “Whether it’s respecting privacy or ensuring that all voices are heard, the more transparency in AI’s role, the easier it will be to welcome these advancements as an augmentation, rather than fear them as a replacement.”
‘Automate to Elevate’
Above all, Ellis notes, AI within HR realms must “work for the people and never against them. I fundamentally believe that the role of AI is to amplify the power of people, not replace them.”
Cristina Goldt, vice president of HCM products at Workday, says the HR industry is seeing the increased use of machines to surface predictions and data-driven insights, but Workday itself continues to rely on people to make judgments about those predictions and insights.
“HR is increasingly leaning on data insights to drive decision-making,” she says. “We’re also evolving towards a paradigm where the data find us through personalized recommendations, such as curated tasks, learning recommendations and story-based reporting.”
She says machine learning is and will continue to drive increased efficiency in HR. For example, machines can find anomalies in processes such as payroll, boost recruiting success, and sort through resumes to surface candidates who are strong matches.
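The kind of payroll anomaly detection Goldt describes can be sketched, under simple assumptions, as a statistical outlier check against an employee's pay history. The z-score threshold and sample data below are illustrative assumptions, not Workday's actual method.

```python
# Hypothetical sketch of a payroll anomaly check: flag a pay amount that
# deviates sharply from an employee's history. Threshold is an assumption.
from statistics import mean, stdev

def flag_anomaly(history: list, current: float, z_threshold: float = 3.0) -> bool:
    """Return True if the current pay amount is a statistical outlier
    relative to the employee's historical amounts."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu  # history is constant; any change is anomalous
    return abs(current - mu) / sigma > z_threshold

normal_history = [5000.0, 5010.0, 4990.0, 5005.0]
print(flag_anomaly(normal_history, 9000.0))  # large jump gets flagged
print(flag_anomaly(normal_history, 5002.0))  # ordinary variation does not
```

Note that the machine only surfaces the anomaly; consistent with Goldt's point, a person still judges whether the flagged payment is an error or, say, a legitimate bonus.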
“We have too much data to sift through manually, so machine learning is helping us ‘automate to elevate,’ ” she says. By that, she means automating where it makes sense to work at the pace of business today and elevating HR to a more advisory role by leaving tactical tasks to the machines. In addition, she says, this increase in data volume is going to naturally force reporting to evolve–a need for a smarter, “story-based” reporting model to help make sense of the data and surface insights. This new era of augmented analytics is going to be a game-changer for organizations and a way to help amplify HR’s data literacy, Goldt explains.
“There will be a natural evolution that we might not notice until we look back and reflect. For example, in the last few years, we’ve started to rely on digital assistants [think Alexa, Siri and Waze] in our consumer lives. In the next two to three years, the enterprise space will catch up.”
Gartner’s Kropp says companies, and HR executives in particular, should create their own employee “Bill of Rights” on data, focusing on how they’re going to use the data they collect. What are the things they agree to do or not do with data needed for an effective AI strategy? For example, are they going to be transparent with employees about what data they actually collect? Are they going to turn over decisions to an algorithm, or is the algorithm only going to provide recommendations, and a human being has to sign off on every decision?
“We’re finding–and what we really believe–is that, if companies don’t create their AI ethics Bill of Rights, then what will invariably happen is that well-meaning business leaders will use that data and information with the best of intentions but potentially make ethically poor decisions,” he says.
Another strategy Kropp offers is creating a new role in the company, a “head of data ethics” of sorts, to oversee AI use within HR. That person preferably should be housed within HR, not IT or legal, because HR represents the most balanced, innovative approach to data ethics.
“The major challenge is, when an issue arises, do you have a set of rules that you can look at and say, ‘Here’s how we deal with this,’ ” Kropp says. “Because, in the moment, it is human nature to take the easiest path, which may not be the right one. To get to a result that is the least painful regarding the use of AI, you have to have the rules set in place before you are confronted with a problem.”
A CHRO’s Tech Wishlist
Better video tools for connecting workers
“We can never overcome the fact that geography does play a role in driving high performance and we can’t get as rich of an experience with video and collaboration tools as we do face-to-face. Building rapport in relationships is still critically important, though video can go a long way toward helping create that connection. The technology is getting smarter and smarter but, still, when I facilitate online meetings with multiple people, we have people talking over each other. I think the continuation of the technology we’re seeing, where video is able to focus in on the person who is talking and mute mics of others, can really help create great online meetings.”
–Billie Hartless, CHRO, Mitel
A tool that accurately measures culture
“Working in HR, I’ve tried to find the balance between efficient automation and human interaction. As the demand for transparent and inclusive workplaces increases, a solution that would allow us to more accurately measure our culture was on the top of our tech wish list … I think it’s easy to identify a problem and find a quick tech solution to use as a Band-Aid and hope it works. But HR is a people-first industry and we must be strategic with the tech solutions we invest in … My wish for this one-stop solution was granted [in] December when we launched Pluto, a D&I platform that uses proprietary blockchain and privacy technology to help us continue to foster a safe and inclusive workplace … We’re thrilled that this technology has provided a safe and personalized platform that delivers deep insights and allows us to engage with our employees in a way that we couldn’t before.”
–Karen Niovitch Davis, partner and CHRO, Prosek Partners