Despite the enthusiasm around AI agents, industry research suggests a significant correction is coming: Gartner predicts that more than 40% of AI agent projects will be canceled by the end of 2027, citing escalating costs, unclear business value and inadequate risk controls.
Still, organizations can find success with this developing technology if HR leaders make thoughtful, strategic choices about where and how to use it.
The AI ‘agent washing’ problem
“Most AI agent projects right now are early-stage experiments or proofs of concept that are mostly driven by hype and are often misapplied,” says Anushree Verma, senior director analyst at Gartner. She warns that organizations may overlook the true costs and challenges of implementing AI agents at scale, which can prevent projects from ever reaching production.

Adding to the confusion is what Gartner calls “agent washing”—vendors rebranding existing products, such as AI assistants, robotic process automation and chatbots, as agents without substantial agent capabilities. Gartner estimates that only about 130 of the thousands of vendors marketing AI agents offer genuine agentic capabilities.
In fact, some tools are being dubbed agentic when that level of orchestration isn’t even necessary, Verma says: “Many use cases positioned as agentic today don’t require agentic implementations.”
The future of AI agents
By 2028, Gartner researchers predict that at least 15% of routine work-related decisions will be made autonomously by AI agents, up from 0% in 2024. Furthermore, Gartner projects that one-third of enterprise software applications will incorporate AI agents by 2028, compared to less than 1% in 2024.
Read more: Agentic AI in HR: What is it and how can it help?
From helper to colleague
For those developing and using the new tech, AI agents are moving from being a “helper” to the organization to being a “colleague,” according to Yvette Cameron, senior vice president, global cloud HCM strategy and marketing at Oracle. “It’s an entirely new paradigm, a game-changer.”
Cameron’s vision of a new way of working includes three main areas:
- Innovative approaches to work that move beyond process automation to fully autonomous workflows.
- Software designed to adapt to users rather than forcing users to adapt to it.
- AI agents that introduce a fundamentally new interface and user experience.

When AI agents reach colleague level, HR leaders have work to do. At that point, companies “have to onboard agents like employees,” says Jessica Holt Ware, vice president of global sales at Salesforce.
She explains that people teams must establish guardrails, set clear objectives for AI agents, monitor their performance and develop strategies for when they fail to achieve those objectives.
“There will be somebody who gets a call in the near future and is told that their agents are rogue,” warns Holt Ware. “We have to be thinking about this.”
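To make that onboarding idea concrete, here is a minimal sketch of what codifying an agent’s guardrails, objectives and escalation path might look like in code. Everything in it (the AgentPolicy and AgentMonitor classes, the metric names, the owner address) is a hypothetical illustration, not any vendor’s actual API.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: none of these names come from a real vendor API.

@dataclass
class AgentPolicy:
    """Guardrails set during agent 'onboarding', mirroring a new-hire charter."""
    allowed_actions: set[str]     # actions the agent may take on its own
    objectives: dict[str, float]  # metric name -> target (e.g. shortlist accuracy)
    human_owner: str              # who gets the call if the agent goes rogue

@dataclass
class AgentMonitor:
    policy: AgentPolicy
    violations: list[str] = field(default_factory=list)

    def check_action(self, action: str) -> bool:
        """Block and log any action outside the agent's charter."""
        if action not in self.policy.allowed_actions:
            self.violations.append(f"out-of-scope action attempted: {action}")
            return False
        return True

    def review_performance(self, observed: dict[str, float]) -> list[str]:
        """Compare observed metrics to objectives; misses go to the human owner."""
        misses = [
            f"{metric}: observed {observed.get(metric, 0.0):.2f} < target {target:.2f}"
            for metric, target in self.policy.objectives.items()
            if observed.get(metric, 0.0) < target
        ]
        if misses or self.violations:
            print(f"escalate to {self.policy.human_owner}: {misses + self.violations}")
        return misses

# Example "onboarding" of a hypothetical resume-screening agent
policy = AgentPolicy(
    allowed_actions={"rank_resumes", "schedule_interview"},
    objectives={"shortlist_accuracy": 0.90},
    human_owner="hr-ops@example.com",
)
monitor = AgentMonitor(policy)
monitor.check_action("send_rejection_email")              # blocked: not in the charter
monitor.review_performance({"shortlist_accuracy": 0.82})  # miss -> escalation
```

The escalation hook is the point of Holt Ware’s warning: someone accountable hears about missed objectives and out-of-scope behavior before they compound.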
Read more: UKG announces AI agents as HR tech vendors race to innovate
The 4 Rs framework
Salesforce has developed what Holt Ware calls the “4 Rs for AI agent success.” They are:
- Redesign by combining AI and human capabilities. This requires treating agents like new hires that need proper onboarding and management.
- Reskill by focusing on future skills. “We think we know what they are,” Holt Ware notes, “but they will continue to change.”
- Redeploy highly skilled people to determine how roles will change. When Salesforce launched an AI coding assistant, Holt Ware recalls, “We woke up the next day and said, ‘What do we do with these people now that they have more capacity?’ ” Their answer was to create an entirely new role: Forward-Deployed Engineers. This role has since played a growing part in driving customer success.
- Rebalance workforce planning. Holt Ware references a CHRO who “famously said that this will be the last year we ever do workforce planning and it’s only people; next year, every team will be supplemented with agents.”
Looking ahead, employees will need to develop new skills. OB Rashid, chief technology officer at LMS provider Absorb Software, predicts that within the next five years, workers will move from simply using AI agents to actively managing them. He explains that everyone—not just managers—will need to learn how to guide AI agents toward desired outcomes, much as they would mentor a colleague.
New success metrics
While many HR leaders focus on workforce skill-building, traditional learning metrics are becoming less relevant as AI advances. Course completion—once a standard measure—was designed for “standardized learning paths,” Rashid says. As programs become more personalized and AI agents become capable of “completing training on an employee’s behalf,” completion alone starts to “lose its meaning.”

Instead, companies should measure the impact of learning on real outcomes, Rashid advises. For example, after a representative completes training on difficult conversations, systems can track whether call resolution times improve or customer sentiment shifts in the weeks following the training.
Additionally, because AI agents can monitor patterns continuously and in context, Rashid points out that they can reveal insights that traditional systems often overlook—such as identifying when an employee is excelling in one area but requires support in another.
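As a rough illustration of Rashid’s outcome-based approach, the sketch below compares hypothetical support reps’ average call resolution times before and after training, rather than recording a completion checkbox. The data, column names and numbers are invented for the example.

```python
import pandas as pd

# Hypothetical data: per-call records with a resolution time and a flag for
# whether the rep had completed the "difficult conversations" training yet.
calls = pd.DataFrame({
    "rep_id":          [1, 1, 1, 2, 2, 2],
    "resolution_mins": [18.0, 16.5, 11.0, 22.0, 20.5, 14.0],
    "after_training":  [False, False, True, False, False, True],
})

# Impact metric: change in mean resolution time per rep after training,
# instead of a completion checkbox.
impact = (
    calls.groupby(["rep_id", "after_training"])["resolution_mins"]
         .mean()
         .unstack("after_training")
         .rename(columns={False: "before", True: "after"})
)
impact["delta_mins"] = impact["after"] - impact["before"]  # negative = faster calls
print(impact)
```

The same pre/post comparison could be run on customer sentiment scores or any other downstream outcome the training is meant to move.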
Read more: Bringing AI agents on board? Here’s what to consider first
Learning beyond completion rates
As AI agents become more capable, there’s a risk that employees will “disengage” from the learning process, according to Rashid. This challenges HR teams to design learning that is genuinely engaging and to communicate clearly why upskilling matters for employees themselves.
The solution lies in linking learning to personal outcomes, not just company benefits. Ignoring development opportunities carries real consequences: employees who don’t keep growing may miss out on promotions or future roles.
Rashid references research from edX and Workplace Intelligence, which found that nearly half of executives believe today’s skills will be outdated within two years. He also cites Microsoft and LinkedIn’s 2024 Work Trend Index, which found that over 70% of executives would choose a less experienced candidate with AI skills over a more experienced candidate without them.
AI agents, meet legal and compliance

This shift highlights why upskilling has become urgent, but it also exposes a deeper issue. AI agents aren’t just reshaping which skills matter; they’re revealing the limitations of traditional, human-driven decision-making processes.
Ali Reza Manouchehri, CEO at government technology innovator MetroStar, points to a sobering reality: “If you Google ‘how long recruiters spend reviewing a resume,’ you’ll see that the average is only around 10 seconds per resume.”
That speed raises concerns about quality. Manouchehri asks, “How can a recruiter realistically evaluate hundreds or thousands of resumes with the depth needed to meet hiring goals?” He suggests that trained, compliant AI agents can provide greater consistency and precision—but will still require human oversight to prevent bias and blind spots.
Indeed, the regulatory environment is rapidly evolving. Employment lawyer Wende Knapp from Woods Oviatt Gilman notes that “we’re definitely past the ‘AI curiosity’ stage.” She says her team has witnessed a “significant shift this year” as AI agent projects move from pilots to implementation.
Knapp notes that more complex AI applications—like agents that make hiring recommendations or influence promotions and terminations—remain largely in the pilot phase for many organizations. She emphasizes that these use cases carry higher stakes, with greater risks around compliance, including bias, discrimination and inaccuracy.
Read more: EEOC commissioner on AI and employee rights
3 risk buckets for hiring
Knapp clarifies that the legal risks related to AI agents in the hiring process fall into “three buckets.” These are bias and discrimination, data privacy and lack of transparency in automated decision-making. New York City already regulates automated employment decision tools, and she points out that “the EEOC has also made it clear that employers remain responsible for outcomes, even when using third-party AI vendors.”
Courts are establishing clear expectations for human involvement in AI-driven decisions. Knapp explains that “legally sufficient oversight means that a qualified reviewer evaluates the AI output with genuine discretion” rather than simply rubber-stamping recommendations.

“The human reviewer must have both the authority and the willingness to override the AI when the recommendation doesn’t hold up,” says Knapp. She predicts that if the human performs only a perfunctory role, the courts will likely view that scenario as having no oversight at all.
Knapp says the key principle remains clear: “AI can support decision-making, but it cannot be the decision-maker.” She warns that the law demands proof that a person—not a program—made the final call.
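As a loose sketch of that principle, the snippet below shows one way an HR system might require a named human reviewer and a written rationale for every AI-assisted hiring decision, producing the kind of audit trail Knapp describes. The types and field names are hypothetical, not drawn from any actual compliance tooling.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch: the AI recommends, a named human decides, and the
# record proves that a person, not a program, made the final call.

@dataclass(frozen=True)
class HiringDecision:
    candidate_id: str
    ai_recommendation: str  # e.g. "advance" or "reject"
    reviewer: str           # the accountable human, never blank
    final_decision: str     # may differ from the AI recommendation
    rationale: str          # evidence of genuine discretion, not rubber-stamping
    decided_at: str

def record_decision(candidate_id: str, ai_recommendation: str,
                    reviewer: str, final_decision: str,
                    rationale: str) -> HiringDecision:
    if not reviewer:
        raise ValueError("no accountable human reviewer: decision cannot proceed")
    if not rationale.strip():
        raise ValueError("missing rationale: perfunctory review may count as no oversight")
    return HiringDecision(
        candidate_id=candidate_id,
        ai_recommendation=ai_recommendation,
        reviewer=reviewer,
        final_decision=final_decision,
        rationale=rationale,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )

# The reviewer overrides the AI, and the audit record captures why.
decision = record_decision(
    candidate_id="C-1042",
    ai_recommendation="reject",
    reviewer="reviewer@example.com",
    final_decision="advance",
    rationale="Resume gap explained by caregiving leave; skills match the role.",
)
print(decision)
```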
Organizations must balance innovation with legal requirements while also navigating the hype cycle. Knapp recommends treating “AI the way you would any other high-risk employment practice—plan, test and document.”
Trust through transparency
“The HR teams that are best positioned to take advantage of the benefits of AI are the ones treating AI not as a quick efficiency win, but as a long-term compliance investment,” Knapp says. For vendor selection, she emphasizes that organizations shouldn’t “just ask whether the tool ‘works.’ ” She urges HR leaders to look into how the platform was trained, how it’s tested and who “stands behind” the outcomes.
Transparency is now a compliance requirement, Knapp says, not just a best practice. The law requires employers to inform employees how they collect, use and store their personal data. Building a level of trust that will satisfy most HR leaders and the workforces they serve requires openness about AI operations, particularly in recruiting.
“Trust starts with transparency,” Manouchehri says, and success with AI agents depends on “explainable AI and transparent prompt logic” that can “earn and retain trust from both sides of the hiring table.”
Agents can’t replace curiosity
Despite rapid advances in technology, human skills remain irreplaceable. Cameron explains that AI agents can help HR teams increase business agility by serving as an “always-on workforce partner” while fostering a culture of continuous performance management.
Rashid agrees, noting that although AI agents can reduce friction and personalize learning, they can’t replace curiosity or personal accountability. HR’s role, he says, is to nurture accountability by connecting learning to real-world results, delivering content that feels relevant and making development meaningful, both personally and professionally.
This perspective aligns with Gartner’s guidance to deploy AI agents only when they clearly deliver value or ROI. Verma advises organizations to focus on enterprise-wide productivity rather than just improving individual tasks.
“This will look different at every organization,” she says, “but the goal should be driving business value through cost, quality, speed and scale.”