Update: The Automated Employment Decision Tools law went into effect on Jan. 1 as planned, but enforcement remains delayed until April 15 after significant public comment. Revisions also have been made to the proposed rules, narrowing the definition of the tools and making other changes.
Original article, Nov. 29, 2022: The audits are coming.
In the lead-up to the New York City law that compels organizations to test their AI-based HR solutions for bias, talent acquisition solution provider Beamery recently conducted two audits to gauge the compliance of the recruitment tools used by its clients in the city.
London-based Beamery wanted to get ahead of the Automated Employment Decision Tools law—which goes into effect Jan. 1—and potential new laws that are expected to be enacted in the coming months, says Sultan Saidov, co-founder and president at Beamery.
“We decided to jump on this journey because we see it as a personal responsibility as an innovator in this space,” Saidov says. Conducting the audits, which were handled by Parity AI—a third-party AI risk and compliance company—also provided Beamery a good opportunity to “pressure test our own assumptions.”
No red flags with Beamery’s tools were detected, he says, but the audits did provide an opportunity to prepare for future audits, once lawmakers offer more guidance.
They also highlighted the lack of detail in the New York City law. For example, Saidov says, the regulations do not spell out if an audit must take place every time an AI solution provider updates its algorithms. (Beamery says it updates its formulas once or twice a year.)
“I think the surprises have come more from how far we are into the year and how little clarity there still is from a regulatory standpoint,” he says. “We are so close to these regulations coming into effect and nobody still knows kind of what’s expected.”
This is not Beamery’s first audit of its AI tools. It conducted internal audits to test for compliance with the General Data Protection Regulation (GDPR), the 2016 European Union law that protects personal data and privacy. For the AI anti-bias audits that fall under the New York City law, Beamery sought to test how its talent acquisition tools handle a potential job candidate’s gender and ethnicity during the recruitment process. The first audit took place in the summer, followed by a month-long audit in October.
Beamery chose to test first for possible gender and ethnicity biases because of the lack of reliable methods for capturing those types of data during the talent acquisition process.

“During recruitment events, you don’t always ask people for their gender or ethnicity, and those traditional EEOC types of fields don’t always get captured,” says Saidov, referring to the Equal Employment Opportunity Commission. “The approach we took was to ensure that we could analyze large samples of data and not just the candidates that are coming through into our systems.”
Beamery also tested its models that emphasize a potential job candidate’s skills rather than the education or previous job titles listed on their application.
This is where the third-party audit comes into play. Parity auditors didn’t just examine Beamery’s historical data; they also simulated the HR tech vendor’s models in different scenarios to discover possible inadvertent biases. The audit involved two tests: one with gender, ethnicity and other demographic data included in the models, and one without those data. The suggestions from the talent acquisition solutions were then examined and reported to Beamery.
“There is a significant challenge for businesses and HR teams using AI today in that they must reassure all stakeholders that these tools are privacy-conscious and that they don’t discriminate against disadvantaged communities,” said Liz O’Sullivan, CEO of Parity AI, in a news release.
Clarification: This story has been updated to clarify that Parity AI, not Parity Technologies, conducted Beamery’s audit.