In May, a federal judge in California certified the case of Mobley v. Workday as a collective action. The lawsuit alleges that Workday’s AI-powered applicant screening tools, in multiple hiring environments, discriminate against job seekers based on their membership in a protected class. The case, one of the first of its kind, has the potential to become a landmark in regulating the use of AI in hiring decisions, and it is already sparking critical conversations about fairness, accountability and governance in hiring practices.
From screening resumes and ranking candidates to recommending job matches, administering pre-employment tests and identifying learning and development opportunities, AI in HR technology is already influencing decisions that directly impact who gets hired, promoted or left behind. Regardless of the lawsuit’s outcome, this case serves as a call to action for HR leaders. HR executives must become proficient in AI’s applications, risks and governance.
See also: Buying AI? The questions HR leaders should be able to answer
Why this case should matter to HR leaders
As an HR tech nerd, I’ll admit that AI tools are incredibly compelling. I was an early adopter. AI in talent acquisition (TA) is still revolutionary. When used responsibly, it can create greater efficiency in hiring, a better candidate experience and more successful hiring decisions. However, the Mobley case highlights the risks; there can be too much of a good thing. With apologies to rotisserie oven inventor Ron Popeil, this is not a “set-it-and-forget-it” technology.
Key questions
Even if Workday isn’t your HCM provider, the Mobley case underscores the importance of understanding where AI is in your existing HR tech stack and its potential risks. Here are five critical questions every HR leader must ask.
1. Where is AI in my HR ecosystem? Do you know which tools in your HR tech stack use AI and for what purposes? AI is prevalent in most TA tech stacks for both good and bad reasons. Know where it is and how it works.
2. What data is our AI trained on? Some types of AI are “trained” using specific data sets. For tools like resume screening or candidate ranking, it is essential to understand whether the training data accurately reflects the diversity and skill set you aim to attract or whether it reinforces historical biases.
3. Are we auditing our systems, and who is conducting the audits? Regular, independent reviews of AI systems are crucial for identifying and mitigating bias. If you’re purchasing a tool, consider looking for Ethical AI certifications from companies like Warden-AI.
4. What recruiting KPIs do our tools optimize for? Are the algorithms aligned with equitable hiring goals, or are they prioritizing speed and efficiency at the expense of fairness? You won’t know until you conduct an audit.
5. What’s our plan if a tool fails a bias test? Do you have clear protocols for addressing underperforming systems, including escalation paths and corrective actions? Are your plans reflected in your vendor master service agreements?
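For readers wondering what a “bias test” looks like in practice, one common starting point is the EEOC’s four-fifths rule: if any group’s selection rate falls below 80% of the highest group’s rate, that is generally treated as evidence of adverse impact. The sketch below is a minimal illustration of that check; the group names and numbers are hypothetical, not drawn from any real audit or vendor tool.

```python
# Illustrative adverse-impact check based on the EEOC "four-fifths rule":
# a group's selection rate below 80% of the highest group's rate is
# generally regarded as evidence of adverse impact.

def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return hired / applicants

def flags_adverse_impact(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Return the groups whose selection rate falls below `threshold`
    times the highest observed group rate."""
    top = max(rates.values())
    return [group for group, rate in rates.items() if rate < threshold * top]

# Hypothetical screening outcomes (made-up numbers, for illustration only)
rates = {
    "group_a": selection_rate(48, 100),  # 0.48
    "group_b": selection_rate(30, 100),  # 0.30
}
print(flags_adverse_impact(rates))  # ['group_b'] — 0.30 is below 0.8 * 0.48
```

A real audit goes well beyond this arithmetic (statistical significance, intersectional groups, stage-by-stage funnel analysis), which is exactly why the question of who conducts it matters.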
Rethinking the HR leadership table
To answer these questions, HR leaders need new skills and perspectives. Traditionally, senior HR teams have included roles such as HR Business Partners, Heads of Total Rewards, Learning and Development, HR Strategy and occasionally Talent Acquisition. But AI is changing the game.
Today, HR leaders need people with the following skills at their table:
- Data. Find experts who understand how your organization uses HR data across functions and disciplines.
- Technology. You need tech-savvy leaders who can direct HR technology strategy rather than relying solely on vendors and implementation partners.
- Finance. Seek leaders who can build business cases for AI investments that deliver measurable value.
- AI ethics. You need champions to ensure responsible and fair use of technology in HR processes.
The way we work and the skills required to lead are evolving rapidly. HR teams must adapt to meet these new demands.
The strategic role of CHROs in AI governance
As AI becomes more sophisticated, CHROs must take the lead in evaluating, governing and optimizing the tools their organizations use. This work includes:
- Establishing responsible AI principles and creating a practical governance framework.
- Expecting transparency from vendors about how algorithms are developed and trained.
- Requiring certified ethical AI tools from HR technology providers.
- Building AI literacy within HR teams to enable informed decision-making.
- Creating escalation paths to address tools that underperform or misalign with organizational goals.
The Mobley v. Workday case is a pivotal moment. Whether or not Workday is found liable, the message to employers is clear: If you’re using AI in HR, you must understand precisely how it works—and how it may be impacting your decision-making.
A word of caution: Don’t over-automate
CEOs and boards are pressuring CHROs more than ever to cut costs, and AI is often a tempting solution. However, replacing HR experts with unproven technology can have unintended consequences. Some organizations have laid off entire recruiting teams, only to rehire them when AI tools failed to deliver reliable results or introduced unacceptable risks.
AI can automate specific tasks, but that doesn’t mean it’s always the right choice. HR leaders must strike a balance, ensuring they maintain the “human” aspect of human resources while leveraging AI responsibly.
Final thoughts on AI in hiring
The Mobley case is one of the many signals that HR is entering a transformative era, one that feels even more significant than the massive shifts caused by the pandemic. Today’s CHROs must not only master traditional HR skills but also become champions of AI governance and data ethics. The stakes have never been higher, and the time to act is now.
AI is changing HR, but it’s up to HR leaders to ensure the change is for the better. Lead responsibly. The world is watching.