What You Should Know About AI in HR Tech

By: John Sumser | September 20, 2018 • 3 min read
Emerging Intelligence columnist John Sumser is the principal analyst at HRExaminer. He researches the impact of data, analytics, AI and associated ethical issues on the workplace. John works with vendors and HR departments to identify problems, define solutions and clarify the narrative. He can be emailed at hreletters@lrp.com.

There are several basic principles you should understand about our new digital co-workers. The first is that most current “artificial intelligence” is really nothing more than a set of sophisticated statistical techniques.

Here is another.

The output of an “intelligent machine” is an opinion, not a fact.

This may be the most important thing you learn about intelligent tools. Whether it’s sophisticated matching, chatbot interactions, machine learning, natural-language processing, sentiment analysis or data models, the machine can only offer an opinion. Just because it comes from a computer doesn’t mean it’s either real or true.
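
To make that concrete, here is a minimal sketch of a toy candidate-matching model built with scikit-learn; the features, training data and candidate are all hypothetical, not drawn from any real product. The point is that the output is a probability, a weighted restatement of past examples, not a verdict.

```python
# Minimal sketch: a toy "matching" model whose output is a probability
# (an opinion about resemblance to past data), not a fact. All features,
# numbers and labels here are hypothetical.
from sklearn.linear_model import LogisticRegression

# Hypothetical history: [years_experience, skills_matched] for past hires
X_train = [[1, 2], [3, 5], [7, 8], [10, 9], [0, 1], [5, 4]]
y_train = [0, 0, 1, 1, 0, 1]  # 1 means that hire was judged "successful"

model = LogisticRegression().fit(X_train, y_train)

candidate = [[4, 6]]
probability = model.predict_proba(candidate)[0][1]

# The model does not know whether this person will succeed; it reports how
# closely they resemble the positive examples it was trained on.
print(f"Estimated fit: {probability:.0%} (an opinion, not a fact)")
```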

In the same way that humans have unconscious bias, machines have uncoded bias. They only know about things that are measured, quantified and given to them. Like people, they are bad at accounting for the things they can’t see and don’t know. Lacking any imagination whatsoever, they have a worldview limited to the data in their possession.
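
A tiny, purely hypothetical illustration of that blindness: a screening score can only weigh the columns it was handed, so anything unmeasured simply does not exist for it.

```python
# Hypothetical screening score: only the measured columns count.
candidate = {
    "years_experience": 6,
    "skills_matched": 7,
    # "rebuilt a team after a layoff": never measured, so never weighed
}

WEIGHTS = {"years_experience": 0.4, "skills_matched": 0.6}  # illustrative

score = sum(WEIGHTS[feature] * candidate.get(feature, 0) for feature in WEIGHTS)
print(f"Score: {score:.1f}, computed only from what was quantified")
```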

What’s worse, machines can only know how things were in the past. Their opinions are limited to associations: this is like that. They can’t experiment, propose alternate scenarios or intervene to improve. When the world is no longer the way it was yesterday, their work stumbles. They would be “happiest” if it were always yesterday.
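
Here is a small sketch of that stumble, using made-up numbers: a model is fit to yesterday’s relationship between pay and applications, and then the labor market shifts. The libraries (NumPy, scikit-learn) and every figure are assumptions for illustration only.

```python
# Minimal sketch: a model fit to yesterday's pattern keeps predicting
# yesterday after the underlying relationship changes. All numbers made up.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# "Yesterday": applications per opening rose steeply with salary offered.
salary_then = rng.uniform(50, 100, 200).reshape(-1, 1)
apps_then = 2.0 * salary_then.ravel() + rng.normal(0, 5, 200)
model = LinearRegression().fit(salary_then, apps_then)

# "Today": the market shifts and the old association weakens.
salary_now = rng.uniform(50, 100, 200).reshape(-1, 1)
apps_now = 0.5 * salary_now.ravel() + rng.normal(0, 5, 200)

print("Fit on yesterday's world:", round(model.score(salary_then, apps_then), 2))
print("Fit on today's world:    ", round(model.score(salary_now, apps_now), 2))
# The second score collapses: the tool can only replay the associations it
# was given; it cannot notice on its own that the world has changed.
```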

Currently (and for the foreseeable future), our new digital “interns” are the worst sort of employee imaginable. They are literal-minded and opinionated, require extensive training, only stop when you tell them to, have no conscience and must be retrained from scratch when something goes wrong. And still, we need to use them now, while the technologists work to take us to the next level, because that next level depends on more data from us.

This means we need to learn to argue with these machines and train our human employees to understand this new type of software, which offers suggestions and opinions instead of facts. Like video gamers looking for the next hack, employees will need to monitor, understand, question and exploit the vulnerabilities of their tools and account for them. Digital employees are central to our future, but managing them is very different from managing people or older software.

With humans, a manager can afford to be imprecise or distracted. Trust can be expansive. With machines, every delegation, follow-up or training must be flawless. The effectiveness of an intelligent tool is entirely dependent on its manager. Humans can overcome bad management; machines cannot.

Ready for one more?

The company with the biggest database usually wins.

Data are the new infrastructure, the foundation of intelligent tools and new forms of business. Where old-fashioned enterprise computing treated workflow as king, intelligent tools thrive on the boundless opportunity to discover patterns in data. For companies, this means that cleaning up their data is an urgent priority. For vendors, the credibility of their claims rests entirely on the volume and quality of the data they use.

I know of one start-up that spent millions teaching machines to generate “fake data” to test algorithms, data models and sentiment analysis. They knew that, without the ability to truly substantiate their claims, they would be placing the risk on their customers’ shoulders. If only this were the norm.
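
As a rough idea of what testing against synthetic data can look like (my own hypothetical sketch, not the start-up’s actual system): when the labels are manufactured, the ground truth is known, so a vendor’s claims can be checked before any risk reaches the customer.

```python
# Hypothetical sketch: generate labeled "fake" reviews with known ground
# truth, then check a sentiment model against them. Not the start-up's code.
import random

POSITIVE = ["great", "supportive", "rewarding", "collaborative"]
NEGATIVE = ["stressful", "chaotic", "exhausting", "toxic"]
TEMPLATE = "My experience on this team has been {}."

def make_fake_reviews(n, seed=42):
    """Return (text, label) pairs whose labels are known by construction."""
    random.seed(seed)
    samples = []
    for _ in range(n):
        label = random.choice([0, 1])  # 1 means positive
        word = random.choice(POSITIVE if label else NEGATIVE)
        samples.append((TEMPLATE.format(word), label))
    return samples

def naive_sentiment(text):
    """Stand-in for the model under test: a simple word-list lookup."""
    return int(any(word in text for word in POSITIVE))

tests = make_fake_reviews(100)
accuracy = sum(naive_sentiment(text) == label for text, label in tests) / len(tests)
print(f"Accuracy on the synthetic test set: {accuracy:.0%}")
```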
