How Technology Helps Us Make Better HR Decisions

October 3, 2018 • 4 min read
Steve Boese is HRE's Inside HR Tech columnist and chair of HRE’s HR Technology Conference®. He also writes a blog and hosts the HR Happy Hour Show, a radio program and podcast. He can be emailed at sboese@lrp.com.

With the HR Technology Conference completed a few weeks ago, I have had some time to attend a few industry events, record new episodes of the HR Happy Hour Podcast, and give a presentation on data, technology and decision-making in HR and talent management.

In preparing for that talk, I referenced two highly recommended books, How Not to Be Wrong: The Power of Mathematical Thinking by Jordan Ellenberg, and Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans and Avi Goldfarb. While neither book is “about” HR—or even the workplace—both provided some excellent frameworks for thinking about information, data, technology and AI, and had great examples of how understanding these “non-HR” concepts can help those of us in HR get better at making talent decisions.

I thought I’d devote this month’s column to sharing a few ideas from those books and my own personal thoughts on how we might want to view our people challenges a little differently.

  1. Data don’t always mean what you think they mean.

How Not to Be Wrong opens with an extremely interesting tale from World War II. As air warfare gained prominence, the challenge for the military was figuring out where, and in what amount, to apply protective armor to fighter planes and bombers. Apply too much armor and the planes become slower, less maneuverable and use more fuel. Too little armor, or armor in the “wrong” places, and the planes run a higher risk of being brought down by enemy fire.

To make these determinations, military leaders examined the amount and placement of bullet holes on damaged planes that returned to base following their missions. The data showed almost twice as much damage to the fuselage of the planes compared to other areas, particularly the engine compartments, which generally had little damage. These data led the military leaders to conclude that more armor needed to be placed on the fuselage.

But mathematician Abraham Wald examined the data and came to the opposite conclusion. The armor, Wald said, doesn’t go where the bullet holes are; instead, it should go where the bullet holes aren’t, specifically, on the engines. The key insight came when Wald looked at the damaged planes that returned to the base and asked where all the “missing” bullet holes to the engines were. The answer was that the “missing” bullet holes were on the missing planes, i.e., the ones that didn’t make it back safely to base. Planes that got hit in the engines didn’t come back, but those that sustained damage to the fuselage generally could make it safely back. The military then put Wald’s recommendations into effect, and they stayed in place for decades.

The reason I wanted to share the story here, and talked about it in my recent presentation, is that it reminds us that raw data, even in substantial amounts and of good quality (like being able to measure and count every bullet hole on every plane that returned to base), are not usually sufficient to help us gain insight. The placement and number of bullet holes were not insight; they were simply information, data. It required human traits, namely curiosity, intuition and a willingness to ask different questions of the data, essentially Wald’s contribution, before those data could lead to insight and better decisions.
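The distortion Wald spotted, survivorship bias, is easy to reproduce. Here is a minimal sketch in Python with invented numbers (the sections, hit counts and loss probabilities are all hypothetical, not from the book): hits are spread evenly across the aircraft, but engine hits are far more likely to down the plane, so counting holes only on returning planes understates engine damage.

```python
import random

random.seed(42)

# Hypothetical simulation of the plane-armor story. Each plane takes a few
# random hits; an engine hit is far more likely to bring the plane down.
SECTIONS = ["fuselage", "engines", "fuel system", "rest of plane"]
LOSS_PROBABILITY_PER_HIT = {"fuselage": 0.05, "engines": 0.60,
                            "fuel system": 0.30, "rest of plane": 0.05}

def fly_mission():
    """Return (survived, hits) for one plane taking 1-8 random hits."""
    hits = [random.choice(SECTIONS) for _ in range(random.randint(1, 8))]
    survived = all(random.random() > LOSS_PROBABILITY_PER_HIT[h] for h in hits)
    return survived, hits

observed = {s: 0 for s in SECTIONS}   # holes visible on returning planes
actual = {s: 0 for s in SECTIONS}     # holes across the whole fleet
for _ in range(10_000):
    survived, hits = fly_mission()
    for h in hits:
        actual[h] += 1
        if survived:
            observed[h] += 1

for s in SECTIONS:
    print(f"{s:>14}: {actual[s]:>6} hits fleet-wide, "
          f"{observed[s]:>6} visible on returning planes")
```

Fleet-wide, the four sections take roughly equal damage, but among returning planes the engine count collapses, which is exactly the gap between the data the military could see and the data that mattered.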

  2. Here’s a simple but powerful definition of artificial intelligence.

The other book I referenced above, Prediction Machines, adds to this idea that data alone are not sufficient to make better decisions by breaking down the decision-making process into its component elements. It goes on to describe how modern technologies (like AI, machine learning and natural-language processing) are supplementing the kind of human thinking and added value that Wald brought to the plane-armor example.

In the model, information is fed into an engine or an algorithm for the purposes of making a “prediction.” Put simply, a prediction is the process of filling in the missing data. Prediction takes what information we have and uses it to generate the information we don’t have. But the key going forward is that the tools and technologies that are being applied to these engines and algorithms are making it possible to simultaneously take in much more raw input data, generate more predictions, learn from these predictions and do all of this faster and cheaper every day.
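The book's definition, prediction as filling in missing data, can be made concrete with a toy sketch. The dataset and variable names below are invented for illustration (they are not from the book): a one-variable least-squares fit stands in for the "prediction engine," using the data we have (ramp-up times of past hires) to generate a data point we don't have (the ramp-up time of a new hire).

```python
# Hypothetical HR data: (years of experience, months to full productivity)
# for past hires. These numbers are invented for illustration only.
known = [
    (1, 9.0), (2, 7.5), (4, 6.0), (6, 4.5), (8, 4.0), (10, 3.0),
]

# Ordinary least-squares fit of a line y = intercept + slope * x.
n = len(known)
mean_x = sum(x for x, _ in known) / n
mean_y = sum(y for _, y in known) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in known)
         / sum((x - mean_x) ** 2 for x, _ in known))
intercept = mean_y - slope * mean_x

def predict(years_experience):
    """Fill in the data point we don't have from the ones we do."""
    return intercept + slope * years_experience

print(f"Predicted ramp-up for a 5-year hire: {predict(5):.1f} months")
```

The fitted line is trivial, but the shape is the same as in any prediction machine: known inputs go in, a model generalizes from them, and the "missing" value comes out; modern AI just does this over far more inputs, far faster and far more cheaply.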

Here’s an HR-centric example of why this is important: For years, most of the new technologies that were being developed to support talent-acquisition processes generally served to make communicating and broadly distributing job openings easier. They also, after a time, made it faster and simpler for candidates to apply to these openings. Greater distribution and easier applications then led to significant increases in the average application volume per job.

While the technology did offer some process and administrative-support improvements for HR and recruiters, it generally did not make up for the fact that application volumes were so much higher. Essentially, if finding the right candidate for a job is compared to finding a needle in a haystack, then all the early HR tech did was make the haystack bigger, without offering any more help in locating the needle.
