AI, Algorithms and Who Owns the Outcome

Editor’s note: This is the first in a series of columns called Emerging Intelligence that will examine and showcase examples from the front lines of predictive tools in HR technology.

As intelligence infiltrates our software, our relationship with it will change. Enterprise software, born in the 1980s, was largely a receptacle for the data we fed it. “Garbage in, garbage out” was how we characterized our responsibility for the system’s output quality. Enterprise software simply reported and summarized the things we told it.

Today’s new tools take the information we give them and then use it to make predictions, forecasts, recommendations and decisions. They use algorithms, data models, machine learning, natural language processing and neural nets to make sense of our data. They often use data and processing from sources beyond our organization’s boundaries.

The wave of innovation currently called “artificial intelligence” is the first step in a long journey. Within a few years, all our tools will have these features. Predictions and recommendations, now a novelty, will accompany everything we do.

The dystopian view of our future looks like a blanket of Magic 8-Balls offering fortune-cookie commentary on every little aspect of our lives. It is easy to imagine the digital equivalent of a nagging, overprotective parent: software that guilt-trips us with a relentless stream of “tips for improvement,” delivered in tiny little bites.

A more optimistic scenario has machines reducing the drudgery of administrative work. In the enterprise-software era, the software became the work. We were not good salespeople, recruiters, clerks or employees if the forms were not properly and thoroughly completed. Getting work done meant learning new interfaces at the whim of the software vendor.

That administrative layer should recede from view. Emerging intelligent tools will push us out of the interface so that we can actually do our jobs. We will work more and document less. The software will handle the administrative work while we focus on the things that matter most.

The progress of automation will echo the familiar development path of any new employee. Starting with the most basic bits of execution, we train our digital assistants so that our trust develops with each step of their improving effectiveness. We delegate as their competence evolves. The process allows us to move into higher order projects as the mundane, repetitive tasks are mastered by our machines.

With each step, we will grant our tools ever-broader authority to make decisions on our behalf. Since they will be licensed from vendors, there will be few things as important as understanding what a decision is and who was responsible for making it. The documentation that all digital transactions provide will take on new life as evidence.

In the enterprise-software era, it was easy to understand who had liability for the consequences of a decision. Since the user provided the data and the software simply summarized or rearranged it, there was no need to discuss responsibility when events soured. If you gave the machine erroneous data, it was your fault.

However, when the decision is the result of a licensed algorithm, things are different. In 20th-century software, responsibility was clear. With intelligent tools, the vendor's product contributes significantly to the outcome.

It will always be the case that the employer bears liability for employment-related decisions that rely on the output of a piece of software. The important question is whether the software provider shares some, none or most of the responsibility. When attrition-monitoring tools actually increase attrition, or when resume-sifting software increases hiring discrimination, it is easy to see why employers will want to hold their vendors accountable.

Of course, there will be a flurry of activity to make sure that the contract terms and conditions specify that the employer is responsible. But, exactly who will be interested in automation that won’t take responsibility for its own quality? Can you imagine who would buy a bias-reduction tool that didn’t guarantee to reduce bias?

The AI Track at this fall’s HR Tech Conference will have several sessions focused on ethics, bias, fundamentals and implementation issues.

Much of the coming debate will hinge on the definition of the word “decision.” The questions range from “When does a recommendation become a decision?” to “Which matters more, the data model the algorithm uses or the data used to train it?” There is the decision the machine makes, and there is the employer's decision to act on it. The jury is still out.

In the early going, we will hear all sorts of arguments about how to tell who made a decision, and stories of contract negotiations that wrestle with the question. All we know for sure is that we are dealing with a new kind of software.

John Sumser
Emerging Intelligence columnist John Sumser is the principal analyst at HRExaminer. He researches the impact of data, analytics, AI and associated ethical issues on the workplace. John works with vendors and HR departments to identify problems, define solutions and clarify the narrative. He can be emailed at