Artificial intelligence is transforming hiring practices, but a recent lawsuit is highlighting critical questions about how much companies should rely on AI when making employment decisions.
The Workday case: A legal test for AI in hiring
The lawsuit against Workday, originally filed by Derek Mobley in 2023, alleges that the company’s AI recommendation system discriminates against applicants based on race, age and disability. Mobley claims the system rejected him from hundreds of positions over seven years.
The case has since expanded, with four additional plaintiffs—all over age 40—joining to argue that Workday’s AI-driven recommendation system disproportionately prevents older workers from securing employment.
Last week, California federal judge Rita Lin ruled that the case can proceed as a collective action, allowing Mobley to notify “similarly situated individuals” who may wish to join the lawsuit.
Workday has pushed back, stating it believes the case has no merit. “Central to the Plaintiff’s allegations is that Workday products make hiring decisions on behalf of our customers, which is not true,” a Workday spokesperson told HR Executive.
They elaborated: “This is a preliminary, procedural ruling at an early stage of this case that relies on allegations, not evidence—the Plaintiff has not supported his allegations with facts. The Court has not made any substantive findings against Workday. We’re confident that once Workday is permitted to defend itself with the facts, the Plaintiff’s claims will be dismissed.”
According to law firm Holland & Knight, this is an early procedural decision based on allegations only, and the court hasn’t made any final judgments. However, the suit will likely spark concern for HR leaders, particularly as some lawmakers move to protect candidates from algorithmic bias. Set to take effect this summer, California’s new civil rights regulations specifically target AI-driven hiring discrimination, aiming to ensure algorithms don’t unfairly penalize applicants.
Pitfalls of AI hiring
These legal and legislative moves reflect wider concerns about AI’s role in recruitment. As the Workday case unfolds, experts are weighing in on the evolution of AI-driven hiring and its potential pitfalls, including discrimination claims.

“One of the biggest hurdles in AI-driven hiring is bias,” explains Elaine Pulakos, CEO of PDRI by Pearson, a provider of workforce assessment services.
She says algorithms must be transparent and accountable to avoid unintended discrimination. Pulakos warns HR leaders to be prepared with bias-mitigation strategies: “AI can analyze hiring trends, but it shouldn’t decide who gets the job.”
AI’s influence on recruitment continues to expand, from generating interview questions to screening resumes and even conducting early-stage interviews. However, growth brings additional challenges that CHROs must navigate carefully.
Complicating matters, HR leaders face another roadblock: candidates increasingly using artificial intelligence to gain an unfair advantage. “Candidates are starting to use AI to cheat,” says Pulakos. While AI itself can help detect AI-assisted applications, she notes that “it’s becoming harder to differentiate genuine answers from machine-generated ones.”
Read more | The future of AI and job interviews: Expert advice for HR leaders
This responsibility adds to HR professionals’ already full plates—especially as they work to evaluate a growing number of AI tools designed to streamline hiring. While many turn to technology to manage the volume of job applications, relying solely on automated systems removes the critical oversight a human can provide. Experts warn that without someone to guard against bias, these systems risk reinforcing existing inequities rather than eliminating them.
Pulakos says some CHROs are addressing this by reintroducing human elements into the hiring process—through in-person evaluations, monitored test centers and innovative assessments that prioritize real-world skills over polished answers. “We’re moving toward tangible skills—figuring out who is truly qualified rather than just who gives the ‘perfect’ answer,” Pulakos explains.
Balancing ‘efficiency and human judgment’
Despite these challenges, the future of hiring isn’t likely to abandon AI—leaving HR leaders to apply tech thoughtfully. “AI should serve as a partner rather than a decision-maker,” says Pulakos. “AI works best in collaboration with human beings, no matter how we assess candidates.”
The key is striking the right balance. “We have to balance AI-driven efficiency with human judgment,” Pulakos says. “The future of hiring depends on thoughtful integration—not complete automation.”
Human oversight is crucial, she advises: AI should support hiring professionals, not replace them.
As the Workday case progresses, it may set important precedents for how companies can legally and ethically deploy AI in their hiring processes. For now, the message from experts is clear: Embrace AI’s capabilities while maintaining human accountability and oversight.