How HR can prevent AI systems from becoming the ‘yes man’


Globally, nearly half of business leaders doubt their leadership teams have the AI system skills needed to navigate risks and opportunities. A May 2025 Business Leaders report by the Adecco Group, which surveyed 2,000 C-suite executives across 13 countries, highlights a significant gap in AI readiness at the top of organizations.

Perhaps more telling, only about one-third of business leaders say they’ve engaged with AI improvement initiatives over the past 12 months, despite widespread acknowledgment that AI readiness is critical to business success.

Christopher Kuehl, VP of Artificial Intelligence & Data Science at Akkodis

This leadership gap creates a risky vacuum—one increasingly filled by HR tech that may be making the problem worse. Many HR departments rely on AI tools that reinforce existing assumptions instead of surfacing uncomfortable or challenging insights.

Since AI is trained on historical patterns, it often amplifies blind spots rather than correcting them. Christopher Kuehl, vice president of artificial intelligence and data science at digital engineering firm Akkodis, calls this the “AI yes man” problem.

“An AI yes man is a system that tells you what you want to hear rather than what you need to know,” Kuehl says. “In practice, it’s a system of chatbots or HR tools that mirror assumptions instead of testing them against real data or presenting alternative perspectives.”

For HR leaders making decisions about hiring, promotions and pay equity, the stakes are particularly high. When AI systems reinforce blind spots rather than expose them, Kuehl reminds HR leaders that the consequences can affect entire workforces.

Where the ‘yes man’ problem shows up in AI systems

Kuehl warns that the yes-man problem appears across common HR applications, often in ways that feel helpful in the moment but create risk over time.

In recruitment, filters can rank candidates to match existing staff profiles, reinforcing bias and shutting out new perspectives. “The system optimizes for familiarity rather than diversity of thought or background,” Kuehl says.

Employee sentiment tools present another challenge. These systems can overemphasize positive terms like “great” or “happy” while overlooking comments about burnout or frustration. Kuehl says the result is a skewed picture of employee wellbeing that masks underlying problems.
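To see how this skew can arise, here is a minimal sketch of a naive keyword-based sentiment scorer. It is purely illustrative, not any vendor's actual algorithm, and the word lists are hypothetical; the point is that comments mentioning burnout still score positive when burnout language falls outside the lexicon.

```python
# Illustrative only: a naive keyword-lexicon sentiment scorer.
# The lexicons are hypothetical; note no burnout-related terms.
POSITIVE = {"great", "happy", "love", "excited"}
NEGATIVE = {"bad", "awful", "hate"}

def naive_sentiment(comment: str) -> int:
    """Count lexicon hits: +1 per positive word, -1 per negative word."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

comments = [
    "great team, happy with my manager",
    "I'm exhausted and close to burnout, but the perks are great",
]
scores = [naive_sentiment(c) for c in comments]
# Both comments score positive: "exhausted" and "burnout" are invisible
# to the lexicon, so the tool reports a rosier picture than the data holds.
```

A system built this way will reliably tell leaders the workforce is happy, because the vocabulary of distress was never part of what it measures.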

Performance management systems create similar blind spots. “Analytics that simply mirror manager ratings hide favoritism or inconsistent standards,” Kuehl notes. “Leaders end up seeing what they want rather than the reality they need to act on.”

The pattern is consistent, warns Kuehl. AI systems that prioritize efficiency without oversight tend to smooth over contradictions and complexity, delivering results that feel right but may be misleading.

This yes-man problem is particularly dangerous because it masks a growing expectations gap. The Adecco Group research found that 60% of leaders expect employees to update their skills, roles and responsibilities to adjust to AI’s impact. Yet only 25% of workers say they’ve completed training on how to apply AI at work.

Red flags when evaluating vendors

HR leaders evaluating AI tools should watch for warning signs of systems designed to validate rather than challenge. “It’s a red flag when a vendor talks about cultural alignment but cannot show how the system surfaces uncomfortable or counterintuitive findings,” Kuehl says. “Dashboards reporting only positive results are another warning, because no workforce is that uniform.”

Other concerns include limited transparency around training data and bias testing, and overreliance on manager inputs over employee-generated data. In the latter case, HR leaders risk “buying validation instead of insight.”

HR leaders need guardrails to ensure AI surfaces hard truths. Regular audits of pay, promotions and representation prevent blind spots from becoming systemic. These safeguards should include explainability standards so leaders can trace how conclusions are reached, channels for employees to challenge questionable results and ownership structures that extend beyond HR. “Governance cannot sit in HR alone; it must include legal, ethics and employee voices so HR isn’t marking its own homework,” Kuehl says.
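As one hedged illustration of what a regular pay audit might look like in its simplest form, the sketch below flags groups whose median pay trails a baseline group by more than a set threshold. All figures, group names, and the threshold are hypothetical; a real audit would control for role, level, tenure, and location.

```python
# Illustrative sketch of a minimal pay-equity check.
# Group names, salaries, and threshold are hypothetical assumptions.
from statistics import median

salaries = {
    "group_a": [95_000, 102_000, 98_000, 110_000],
    "group_b": [88_000, 84_000, 90_000, 86_000],
}

def pay_gaps(data, baseline="group_a", threshold=0.05):
    """Return groups whose median pay is more than `threshold` below baseline,
    mapped to the size of the gap as a fraction of the baseline median."""
    base = median(data[baseline])
    return {
        g: round(1 - median(vals) / base, 3)
        for g, vals in data.items()
        if g != baseline and median(vals) < base * (1 - threshold)
    }

print(pay_gaps(salaries))  # flags group_b with a 13% median gap
```

Running a check like this on a schedule, and routing its output to people outside HR, is one concrete way to keep blind spots from becoming systemic.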

The Adecco data supports this structured approach. Organizations with responsible AI frameworks are seeing measurably better outcomes: Sixty-five percent are actively upskilling their workers in AI, compared with just 51% of organizations without frameworks. These organizations are also significantly more likely to report that AI has positively impacted their talent strategy.

Another thing HR leaders need to be aware of: Real data-driven insights rarely look neat and tidy. “Genuine insights show nuance, variation and sometimes uncomfortable findings,” Kuehl says. “If everything looks positive and uniform, that’s a red flag.”

He recommends tracing results back to raw data and cross-checking multiple sources. Contradictions between surveys, interviews and exit data often reveal the real issues. “When AI smooths over those differences, it’s not providing analysis—it’s reinforcing the status quo,” he notes.
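The cross-checking Kuehl describes can be sketched in a few lines: compare two independent sources per team and surface the contradictions rather than averaging them away. The data and thresholds below are hypothetical, purely to show the pattern.

```python
# Illustrative sketch: flag teams where two data sources contradict.
# All figures and thresholds are hypothetical assumptions.
survey_sentiment = {"eng": 0.82, "sales": 0.79, "support": 0.85}  # avg positivity
exit_rate = {"eng": 0.04, "sales": 0.05, "support": 0.31}         # 12-mo attrition

def contradictions(sentiment, exits, happy=0.75, churn=0.20):
    """Return teams whose survey scores look healthy but attrition is high."""
    return sorted(
        team for team in sentiment
        if sentiment[team] >= happy and exits[team] >= churn
    )

# "support" posts the highest sentiment score yet loses nearly a third of
# its staff in a year -- the smoothed-over contradiction worth investigating.
print(contradictions(survey_sentiment, exit_rate))  # ['support']
```

An AI tool that averaged these sources into a single "engagement score" would erase exactly the signal this comparison preserves.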

Questions CHROs should ask about new AI systems

Before implementing new AI systems, Kuehl suggests CHROs ask vendors five critical questions:

  1. How does this system highlight negative or contradictory findings?
  2. What steps can detect bias in recruitment, promotion or pay data?
  3. How often do insights challenge leadership assumptions, and how are they surfaced?
  4. Can we see examples where the tool uncovered uncomfortable truths rather than confirming expectations?
  5. What level of data access and explainability will my team have to validate findings?

“The most telling evidence is whether the vendor can show real examples where the system uncovered problems missed by traditional methods,” Kuehl says. “Without transparency and validation, insights will never be trusted.”

The scale of the challenge is significant: Only 10% of organizations in the Adecco study qualify as “future-ready.” These companies demonstrate a strong commitment to leadership development, workforce skills, career mobility and structured AI integration.

AI systems should help HR leaders see the full picture, not just confirm what they already believe. “AI can improve efficiency in HR, but only when teams understand its limits,” says Kuehl. “Without proper training and oversight, the same systems that speed decisions can reinforce blind spots.”

Jill Barth
Jill Barth is HR Tech Editor of HR Executive. She is an award-winning journalist with bylines in Forbes, USA Today and other international publications. With a background in communications, media, B2B ecommerce and the workplace, she also served as a consultant with Gallagher Benefit Services for nearly a decade. Reach out at [email protected].
