As artificial intelligence becomes more accessible by the day, employers are looking to reap the benefits the technology can bring to their enterprises, including in HR. At the same time, they are navigating new vulnerabilities created by an evolving patchwork of AI regulation and rapid technological change. That has many employers treading lightly, and with good reason, according to a recent report.
In its AI in the Workplace Survey Report, Littler, an employment and labor law practice representing management, found that many employers are already leveraging predictive AI tools—including for recruiting, hiring and other HR processes. However, so-called generative AI technologies, such as ChatGPT, that create new data or content are not as widespread.
The survey, based on responses from nearly 400 in-house lawyers, HR professionals and other business leaders across the U.S., found that more than half of respondents (56%) say their organizations are not using generative AI tools in any HR capacity. Among the 34% incorporating generative AI into HR functions, the most common uses were content creation, including job descriptions, onboarding materials and employee communications.
“Generative AI holds great promise for HR functions by automating repetitive and time-consuming tasks, as well as improving start-up times for content creation activities,” says Niloy Ray, Littler shareholder and member of the firm’s AI in Human Resource Decisions practice. “Given that the technology is still developing and that its impact on workforces is increasingly complex, it’s encouraging that employers appear to be taking their time implementing it organization-wide.”
Risks versus rewards of AI
The increasing regulatory uncertainty surrounding the use of AI by employers has many proceeding cautiously, Ray says—although most are still interested in moving ahead.
As laws and policies governing the use of AI emerge, 51% of those surveyed by Littler said their organizations have not necessarily changed their AI usage but are closely monitoring regulatory developments. Another 29% are limiting the scope of HR activities for which AI tools are deployed.
“That only 10% have halted usage altogether or decreased use in jurisdictions with proposed or enacted legislation is a sign that employers are willing to take on a certain level of risk in exchange for the benefits these tools bring,” he explains.
While several U.S. states are proposing or developing legislation in this area, only New York City currently has a law specifically governing AI use in employment-related situations. About one-third of respondents to the Littler survey identified New York City's law as a concern, compared with the 53% who said the same of legislation being considered in California.
Apart from ensuring compliance with potential regulations, a critical first step for HR leaders considering incorporating AI into their practices is determining whether such a tool is critical to their operations, Ray says. In short, is there a sufficient benefit to justify the adoption of an AI tool for a specific use?
Those benefits need to be weighed against the risks, he says, including the rapidly evolving regulatory environment and the rapid pace of technological advancement, along with potential risks related to privacy, accessibility and discrimination. Generative AI tools, in particular, can also raise questions regarding accuracy, plagiarism, enterprise-wide information security and control of intellectual property.
“Absent a clear vision of how and why they are adopting AI tools in HR, organizations may inadvertently create new vulnerabilities without bringing distinct value to the business,” he warns.