Perhaps because we can see and sometimes touch technology, we seem to be obsessed with imagining how it will transform our lives. Sometimes it does, but the transformation takes a long time, and such cases are rare; far more often, it simply never happens. Remember driverless trucks, which were supposed to take over the highways by 2019?
We are now a good 10 years into the data science era of machine learning models, which were transformative in theory but, in practice, expensive and hard to use. We are also two years into large language models like ChatGPT, where speculation remains at a fever pitch: the peak of the Gartner Hype Cycle, a concept coined by analyst Jackie Fenn 30 years ago. But unlike with machine learning and past innovations, the pressure is really on to make AI integration at work a priority, even though we have a difficult time finding examples where it is changing work in money-saving ways, let alone transforming it.
Board pressure driving up AI urgency
Why this is happening is explained in a truly important and revealing Harris survey conducted for Dataiku: CEO Poll on AI Use. Remarkably, 74% of CEOs fear they will lose their jobs if they cannot show progress in getting AI to work. Sixty-six percent say that pressure is coming from their boards, which want to see real, dollars-and-cents savings from introducing AI.
The CEOs also report being personally involved in many of these projects in their companies, which is unusual: most technology rollouts are front-line matters, aimed at improving some particular job. Personal involvement is not a good use of a CEO's time, but it suggests how urgently they need some successes.
Most revealing, they admit that about one-third of AI projects are more or less fake: they do not actually deliver what is claimed for them. I hear this a lot with technology like applicant-tracking systems, which are claimed to use AI when, in fact, there is nothing in them that wasn't there before large language models appeared. This is what happens when we demand something that is simply not realistic.
In pursuit of cost savings
This situation, where boards are demanding that CEOs produce hard evidence of something that so far has been hard to see anywhere, is truly stunning. Why is it happening? Because boards and the investor community, their most important constituents, believe that AI will cut jobs. Cutting jobs is arguably the biggest priority for investors because it is a cost they can see, and they believe it is just a matter of rolling AI out to make those cost savings happen.
Creators of these tools suggested that outcome, consultants pushed it along, and the business press echoed it. They are no longer saying that, but first impressions linger—as does the appealing promise of a quick and cheap way to knock down wage costs.
CEOs on AI: unrealistic expectations
Many of you in HR are living this now and seeing the crazy requirements being pushed down from CEOs, who are themselves getting the same demands:
- quotas for the amount of AI introduced, as if it could be measured by the pound;
- requirements to prove, before filling any vacancy, that the job cannot be done by AI. This is an incredible demand given that jobs comprise many tasks, and it is hard to find examples where even a single entire task can be handled by LLMs, at least without creating more work for people elsewhere.
Two findings, in particular, suggest that board members aren't the only ones with unrealistic expectations. A remarkable 87% of CEOs believe that off-the-shelf LLMs are good enough to transform work. In reality, for almost any serious task, LLMs have to be "trained" on real data before they can execute it.
It may seem impressive that ChatGPT can easily give you a report on the Chinese chemical industry, but that is because it is merely surveying what has already been written about it. If you want it to do something focused that an employee would routinely do, where the answers have to be correct (such as assessing the merits of a claim), that takes a huge amount of training: looking at actual cases where people have already decided the merits and demerits, so that the software can build an algorithm that actually works. Every time the nature of the claim changes, the algorithm has to be rebuilt. And the best LLMs, the ones most able to do this work, are very expensive to use.
AI hope is winning out over AI experience
So, why believe AI transformation is so easy? Because the appeal of the idea that some free tool will do all this is overwhelming.
The second and most bizarre result in the survey is the belief, reported by CEOs, that other executives and board members could be replaced with LLMs and the results would be better. In reality, those jobs comprise dozens of important tasks, each of which would require a separately trained LLM even to attempt, let alone get right. I read this not as CEO contempt for their human counterparts but as a huge gap in understanding of what these AI tools can actually do.
In short, what we have here is another triumph of hope over experience. The promise is so enticing that companies will continue to chase it, wasting time and resources and stressing everyone out, long after it is clear that using free tools to replace employees is just too good to be true.