Sumser: 12 questions to ask to make a smart AI purchase

By: John Sumser | March 4, 2020 • 3 min read
Emerging Intelligence columnist John Sumser is the principal analyst at HRExaminer. He researches the impact of data, analytics, AI and associated ethical issues on the workplace. John works with vendors and HR departments to identify problems, define solutions and clarify the narrative. He will speak at the 2020 Virtual HR Technology Conference scheduled for Oct. 27-30. He can be emailed at hreletters@lrp.com.

If you are considering using intelligent tools (machine learning/AI) in your HR or operations processes, here are some questions to consider asking potential vendors.

  1. Tell me about the data used to train the algorithms and models.

Bias, both legal and illegal, creeps into intelligent systems through the training data set. There is no such thing as a bias-free data set because data only occurs within a context. What are the sources, completeness, accuracy and context of the data set? What are the underlying demographics of the group that generated the data (age, gender, geography, ethnicity and whatever else can be specified)?
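
One way to start that conversation is to ask the vendor (or your own team) for a simple composition summary of the training set. The sketch below is illustrative only: the field names (`gender`, `age_band`, `region`) are hypothetical stand-ins for whatever attributes your data actually carries.

```python
from collections import Counter

def demographic_summary(records, fields=("gender", "age_band", "region")):
    """Summarize the composition of a training set along the given fields.

    `records` is a list of dicts. The field names are hypothetical --
    substitute the attributes your data set actually contains.
    """
    summary = {}
    for field in fields:
        counts = Counter(r.get(field, "unknown") for r in records)
        total = sum(counts.values())
        summary[field] = {k: round(v / total, 3) for k, v in counts.items()}
    return summary

# A tiny, made-up sample of candidate records for illustration.
sample = [
    {"gender": "F", "age_band": "25-34", "region": "EMEA"},
    {"gender": "M", "age_band": "35-44", "region": "AMER"},
    {"gender": "M", "age_band": "25-34", "region": "AMER"},
]
print(demographic_summary(sample))
```

If a vendor cannot produce something like this for the data that trained their models, that itself is an answer worth noting.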

  2. How long will it take for the system to be trained?

Every intelligent tool has to learn the specifics of your company and the particular workflow. Some vendors make the learning curve a part of the implementation process. Others learn from transactions as they occur. There are systems that take 18 months to train.

  3. Can we make changes to our historical data?

You may want to modify the machine’s results because they reflect the biases of your history. Having the conversation is what’s important; it will give you a window into your real risks. It is likely that the answer to this question is, “You can’t.”

  4. What happens when you turn it off? How much notice will we receive if you turn it off?

Imagine that you are using a tool that does the job of several employees (sourcers who review resumes, for example). If the tool fails in a way that requires a shutdown, what sort of advance warning do you get? Since most providers are in experimental stages, the answer to this question also matters if the project ends. In a very real way, these are digital employees, and it is best to have a replacement plan.

  5. Do we own what the machine learned from us? How do we take that data with us?

Part of the way that these systems operate is that they learn in both the aggregate and individual cases. Most vendors guarantee that your data is “anonymized.” You still may not wish to have your operating practices be a part of some larger benchmarking process after you change suppliers.

  6. What is the total cost of ownership?

We know precious little about the behavior of intelligent machines. Like any employee, they require training, supervision and discipline. Make sure you have a clear picture of the total cost of ownership of any learning machine you enable.
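
A back-of-the-envelope tally helps keep that picture honest. The line items and figures below are purely illustrative assumptions, not benchmarks; real numbers come out of your vendor conversations.

```python
# Illustrative first-year line items for an intelligent tool.
# Every figure here is a made-up placeholder, not a benchmark.
costs = {
    "annual_license": 120_000,
    "implementation_and_training": 40_000,
    "ongoing_monitoring_staff": 60_000,   # supervising the "digital employee"
    "periodic_retraining": 15_000,
    "integration_maintenance": 10_000,
}
first_year_tco = sum(costs.values())
print(f"First-year TCO: ${first_year_tco:,}")
```

Note that the license fee is only one line: like any employee, the tool carries supervision and upkeep costs that recur every year.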

  7. How do we tell when the models and algorithms are “drifting”?

All data models and algorithms age. Sometimes it’s a graceful degradation. Other times, it’s complete failure. Knowing when the intelligent tool is out of kilter is essential to managing it.
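
Drift can be watched for directly by comparing the distribution of the tool's recent scores against a baseline captured at deployment. The sketch below uses the Population Stability Index, one common technique; the 0.2 alarm threshold is a widely cited rule of thumb, not a vendor standard.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline score sample and a
    recent one. Values above ~0.2 are a common rule-of-thumb drift alarm.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def dist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        n = len(xs)
        return [max(c / n, 1e-6) for c in counts]  # avoid log(0)

    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]        # scores at deployment
recent   = [0.1 * i + 3.0 for i in range(100)]  # a shifted distribution
print(psi(baseline, recent) > 0.2)              # a large shift trips the alarm
```

Asking a vendor whether they compute something like this, and who sees the alarm when it fires, is a concrete way to probe this question.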

  8. What sort of training comes with the service?

Working with partners who are machines is different from working with human collaborators. A machine’s ability to deliver results declines over time. Output of variable quality is a normal thing in human relationships; we are not used to it when working with machines.

  9. What do we do when circumstances change?

As much as their creators would like to believe otherwise, machines that learn operate in rapidly shifting environments. Recommendations that worked yesterday may fail today. When the machine issues directions that turn out to be irrelevant or mistaken, its utility declines. Knowing how to adjust the tool to meet changed circumstances is a critical part of owning and maintaining it.

  10. How do we monitor system performance?

It’s possible that the largest expense in owning a machine-learning tool is monitoring the relationship between real circumstances and recommendations. This is a critical part of the task of supervising an algorithm. It’s like the quality control that used to be required at the end of the production line.
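
That supervision can be as simple as logging each recommendation next to the real outcome and alerting when the hit rate sags. The window size and threshold below are illustrative assumptions, not standards.

```python
from collections import deque

class RecommendationMonitor:
    """Rolling check that a tool's recommendations still match real
    outcomes. Window size and alert threshold are illustrative choices.
    """
    def __init__(self, window=100, alert_below=0.8):
        self.results = deque(maxlen=window)
        self.alert_below = alert_below

    def record(self, recommended, actual_outcome):
        self.results.append(recommended == actual_outcome)

    def hit_rate(self):
        return sum(self.results) / len(self.results) if self.results else 1.0

    def needs_review(self):
        return self.hit_rate() < self.alert_below

# Hypothetical recommendations vs. what actually happened.
monitor = RecommendationMonitor(window=5, alert_below=0.8)
for rec, actual in [("hire", "hire"), ("hire", "pass"), ("pass", "pass"),
                    ("hire", "pass"), ("pass", "pass")]:
    monitor.record(rec, actual)
print(monitor.hit_rate(), monitor.needs_review())
```

The point is the quality-control loop itself: someone has to own the comparison between what the machine said and what actually happened.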

  11. What are your views on product liability?

Be sure to have a long conversation about how the tool works and how the vendor monitors the impact of its learning curve. The key question here is “What if your tool’s recommendations or decisions cause damage to our people or our business?”

  12. Get an inventory of every process in your system that uses machine intelligence.

Explore every single place that uses algorithms, models or other forms of machine intelligence. This is where your execution risk lives.

As you might guess, I’ll be talking about these issues during my workshop at Select HR Tech this June. Procuring AI involves different expectations than traditional software acquisition, and the conference features a series of hands-on experiences, including mine, titled “AI & Intelligent Technologies.”