Generative AI’s game-changing capabilities stand to offer transformational benefits for HR leaders, but the technology brings real dangers that HR must first address, experts warn.
HR teams are using generative AI for everything from drafting job postings to the laborious task of matching employees’ skills to open positions. Still, adoption is early: only 5% of HR leaders say their function is currently using generative AI, while 9% are conducting pilot tests and more than half are exploring how to use it in the future, according to a Gartner survey of more than 100 HR leaders.
But, as with any transformational technology, generative AI can pose threats, such as introducing bias into hiring and promotions, which can derail an employer’s efforts to expand diversity in its workforce. In 2018, for example, Amazon scrapped an AI recruiting tool after it showed bias against women.
“Is there truth to some of these claims about the dangers of AI? I believe, yes, those are true if the technologies are not used in an ethical and responsible way,” said Aneel Bhusri, co-founder and co-CEO of Workday, at the company’s Workday Rising conference last week.
In addition to potentially introducing bias into processes, AI can deliver incorrect output that appears correct on the surface, known as AI hallucinations, warns Sayan Chakraborty, co-president at Workday. According to an NBC report, such instances may arise when the technology tries to connect dots from information that is too sparse. If HR uses generative AI to draw insights from a small employee survey, for example, incorrect analysis can create big problems for people strategy.
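One practical guardrail for the small-survey problem is to check whether enough responses exist before asking a model for conclusions at all. The sketch below is a minimal illustration of that idea; the function name and the threshold are hypothetical, not part of any product described here.

```python
# Minimal guardrail sketch: refuse to generate a confident-sounding
# analysis when the sample is too small to support one.
MIN_RESPONSES = 30  # illustrative cutoff, not a statistical standard

def survey_insight_prompt(responses: list) -> str:
    # With too few responses, return a caveat instead of building a
    # summarization prompt that invites over-confident conclusions.
    if len(responses) < MIN_RESPONSES:
        return (f"Only {len(responses)} responses collected; "
                "too few to draw reliable conclusions.")
    joined = "\n".join(responses)
    return "Summarize the main themes in these survey responses:\n" + joined

print(survey_insight_prompt(["Great benefits", "Long hours"]))
```

The point is not the threshold itself but that the check happens before the model is invoked, so sparse data never reaches the step where hallucination is most likely.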
2 ways to minimize hazards when using generative AI
Despite these dangers, Bhusri and Chakraborty say HR teams can harness the power of the technology if they take steps to reduce the risks.
Chakraborty says one approach calls for feeding an employer’s generative AI high-quality data gleaned from “enterprise allies.” This type of data comes from a known source and adheres to regulatory, privacy and security requirements, unlike data harvested straight from the internet, he says.
“We’re mostly aware of these very large language models with billions or even trillions of parameters like GPT, which are trained on massive amounts of data straight from the internet and built in an opaque way. We don’t really know what training and fine-tuning went into these models,” Chakraborty says. “And, inevitably, these models reflect all the good and all the bad things on the internet, which can be a great place where you can get things done but also a dark place for misinformation.”
Instead of populating their gen AI tools with data from the internet, Workday clients, for instance, can rely on the firm’s own data set—built from 65 million users who are under contract and using the same version of its software to generate over 600 billion transactions, says Bhusri. “It’s a very clear, clean and coherent data set,” he notes.
A second step HR teams and employers can take is to keep humans involved when using generative AI.
“Humans are the ultimate decision-makers anywhere AI is applied,” Bhusri says. This approach remains uncommon because many organizations are looking to use the technology to fully automate tasks for speed and cost reduction, with no human involvement.
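In practice, keeping humans as the decision-makers means treating AI output as a draft that cannot take effect until a person signs off. The sketch below illustrates that pattern; every name in it is hypothetical, and the model call is a stand-in.

```python
# Human-in-the-loop sketch: AI output is a draft until a human approves it.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str                # AI-generated content, e.g. a job posting
    approved: bool = False   # stays False until a human signs off
    reviewer: str = ""

def ai_generate_job_posting(role: str) -> Draft:
    # Stand-in for a real model call; always returns an unapproved draft.
    return Draft(text=f"We are hiring a {role}. Responsibilities include ...")

def human_review(draft: Draft, reviewer: str, accept: bool) -> Draft:
    # The human is the decision-maker: only an explicit accept flips the flag.
    draft.approved = accept
    draft.reviewer = reviewer if accept else ""
    return draft

def publish(draft: Draft) -> str:
    # Downstream systems refuse unapproved AI output.
    if not draft.approved:
        raise ValueError("Draft has not been approved by a human reviewer")
    return draft.text

draft = ai_generate_job_posting("data analyst")
draft = human_review(draft, reviewer="hr_lead", accept=True)
posting = publish(draft)  # succeeds only after human sign-off
```

The design choice worth noting is that the approval gate sits in the publishing step, not in the reviewer’s goodwill: unapproved output cannot flow downstream even by accident.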
However, lawmakers are increasingly concerned about generative AI operating without human oversight. New York City, for example, requires employers that use AI in the hiring process to subject the tool to a bias audit within a year of launching the technology.
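To make the bias-audit idea concrete, one metric such audits commonly compute is the impact ratio: each group’s selection rate divided by the highest group’s rate. The 0.8 cutoff below follows the EEOC’s well-known “four-fifths” rule of thumb; this is a simplified sketch with made-up numbers, not a description of any specific audit methodology.

```python
# Simplified bias-audit sketch: compute selection rates and impact ratios
# across groups, flagging any group below the four-fifths (0.8) threshold.

def selection_rates(outcomes):
    # outcomes maps group -> (selected, total applicants)
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def impact_ratios(outcomes):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

def flag_adverse_impact(outcomes, threshold=0.8):
    # True means the group's impact ratio falls below the threshold.
    return {g: ratio < threshold for g, ratio in impact_ratios(outcomes).items()}

# Hypothetical screening results: group -> (candidates advanced, candidates screened)
results = {"group_a": (40, 100), "group_b": (24, 100)}
print(impact_ratios(results))        # group_b's rate is 0.6x group_a's
print(flag_adverse_impact(results))  # group_b flagged under the 0.8 threshold
```

Real audits go further (statistical significance, intersectional groups, historical data), but even this minimal check shows why auditing an AI screening tool requires logging who it advances and who it screens out.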
Despite concerns about generative AI’s capacity for harm, its potential to transform how organizations, including HR functions, perform their work is prompting employers to run toward the technology.
“I’ve been around the technology industry for a long time and there’s been a lot of big shifts. But this one is different,” Bhusri says. “It certainly feels like the most important shift since the emergence of the cloud.”