Sumser: Here’s an insider’s look at AI ethics in practice
I want to tell you about a project I’ve been working on for most of the last year. As you may recall from prior columns, I am deeply concerned about ethics in AI in HR tech applications. When AI is applied to the lives and livelihoods of people at work, it is profoundly different from AI used for consumer purposes.
AI used by employers to recruit, assess, manage, coach, develop, predict and “improve” their employees is fraught. The underlying models are extremely simplistic and error-prone. Design is often the product of upper-middle-class male college graduates from a narrow band of skin tones. Cursed and blessed by the proclivities of STEM professionals, these models come with a set of assumptions about the world and its workings.
At its essence, B2B AI used in HR tech creates a set of categories, then assigns people to them. It scores and predicts based on historical data. Given the past year, you might imagine there are lots of questions about the utility of using historical data for anything.
Related: Sumser will speak with CHROs Maxine Carrington of Northwell Health and Mary Ruberry of The Parking Spot during a March 18 panel discussion at Spring HR Tech.
I believe HR software design and use should be deeply influenced by people who have a broader range of lived experiences and can see a bigger picture. Otherwise, AI is based on a limited perspective full of assumptions about how things work that are simply not true for many people.
Arena Analytics is a Baltimore-based recruiting technology company founded by Mike Rosenbaum. It’s his second company and is about four years old, though the product itself traces its roots back nearly a decade. Rosenbaum’s first company, 20-year-old super-successful Catalyte, finds potential technical professionals in unexpected places and develops them.
Arena Analytics uses machine learning and NLP to predict the likelihood that a candidate will thrive in a specific company, division, department or even under a certain supervisor on a particular shift. Its early-stage focus is healthcare, where it is already deployed in organizations whose applicants represent 18% of the U.S. healthcare workforce (3.7 million unique job applicants per year). Even so, the company sees itself as a labor market disruptor that can apply the same ideas and tools in a broad range of settings.
This is no “build a business to sell it” tech start-up. The company has been designed from day one to be an enduring institution with a broad-ranging impact on the economy. The underlying idea is that talent can be found in many places, but opportunity often cannot. Arena Analytics is meant to unlock economic value through finding talent and expanding opportunity.
A compelling approach to AI and ethics
In my first conversation with the company, we started talking about ethics in AI. It’s a question that matters deeply for both the vendors and users of intelligent tools (AI) because the intricacies of humans at work are riddled with nuance and assumption. Rosenbaum and his leadership team wanted to build an AI Ethics Advisory Board that served two complementary purposes.
First, they wanted an advisory board that could serve as a resource for the design team. The goal is to deliver such value that design wants the ethics board’s involvement. Second, the team would be a launch point for a national discussion of issues in AI, labor market realignment, DE&I and other topics that emerge when we look beyond data and take humans into account.
We conceptualized, recruited, funded and built a team of 12 individuals who make up the board. The group is highly talented, with different members bringing experience in technology, HR, AI, recruiting, people analytics and DE&I. We’ve blended academics, practitioners and technologists. Members of the team also represent the types of people who are the most likely victims of badly wrought technology.
The first wave of the project involves a three-year contract between each member and the company. We’re parceling out four days of work per person per year. In the first year, we get to know each other, learn about the company’s technology and get comfortable with making decisions about ethics. In year two, we will dig deep into Arena’s design process. In year three, we will start to navigate questions beyond the enterprise.
We had our first quarterly meeting in January and have begun a process that has each member spending an hour with another member each month. We would have done this differently if we had been able to travel, and we may rearrange things in the future. The first meeting involved an introduction to the company, the technology and an ethics puzzle.
It turns out that ethics questions are really about prioritizing values. Whenever two competing values must be ranked against each other, an ethical issue arises. Ethics answers the question “What is the right thing to do?”
This is an unfamiliar approach that is easily sidetracked. When the conversation turns to money, you have either skipped the ethics question and turned to cost-benefit analysis, or resolved the question in favor of moving forward. When people start discussing absolutes, the conversation has entered the realm of morals. Ethics, by its very nature, involves contestable concerns and issues—something usually not amenable to tools that rely on data and logic.
The Arena AI Ethics Advisory Board is a first-of-its-kind prototype. We are learning as we go. I’ll keep you posted.