AI Bill of Rights: Why it’s the beginning of what HR, HR tech need

The White House’s Office of Science and Technology Policy unveiled today its “Blueprint for an AI Bill of Rights,” a sweeping set of guidelines that employers will be urged to consider when using artificial intelligence tools for hiring, promotions and other HR operations.

What’s in the blueprint for HR and recruitment leaders and the technology vendors that provide these tools?

The guidelines—which are just that and not a proposal for new laws—offer suggestions for navigating the “great challenges posed to democracy today [by] the use of technology, data, and automated systems in ways that threaten the rights of the American public,” according to the announcement.

They set out five key areas of protection in the use—and possible abuse—of modern technology in the workplace and in people’s personal lives: Safe and Effective Systems; Data Privacy; Notice and Explanation; Human Alternatives, Consideration and Fallback; and Algorithmic Discrimination Protections.

The last of these—Algorithmic Discrimination Protections—could answer many questions HR leaders and recruiters have about possible bias in the AI tools they use, says Kyle Lagunas, head of strategy and principal analyst for Aptitude Research.

“I think this is awesome,” says the former head of talent attraction, sourcing and insight for GM. “Having implemented AI solutions in an enterprise organization, there are a lot more questions coming out of HR leadership than there are answers.”

According to Lagunas, HR and recruitment heads have been seeking guidance from the federal government to help them make “more meaningful” analyses of these AI tools.

“In the absence of this kind of guidance, there’s really just been a lot of concern and fear and uncertainty,” he says. “This could be excellent. This is the beginning of what we need.”

HR technology analyst Josh Bersin agrees that these guidelines are necessary in today’s workplace, saying they set an important principle around the use of artificial intelligence.

“AI should be used for positive business outcomes, not for ‘performance evaluation’ or non-transparent uses,” says the founder of The Josh Bersin Academy and HRE columnist. 

Bersin believes the blueprint will help software vendors, including companies that provide tools for scanning applications and assessing candidates, ensure that their clients are not implementing biased systems. It will also help the vendors ensure that their systems are transparent, auditable and open.

“I am a big fan of this process and I hope legal regulations continue to help make sure vendors are not abusing data for unethical, discriminatory or biased purposes,” Bersin adds.

What the guidelines say

The blueprint’s introduction states: “Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use. …” The Office of Science and Technology Policy announcement adds, “Outcomes of these protective measures should include the possibility of not deploying the system or removing a system from use.” 

The blueprint also focuses on what it calls “algorithmic discrimination,” which occurs when automated systems “contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.” Such discrimination could violate the law, the blueprint notes.

“This document is laying down a marker for the protections that everyone in America should be entitled to,” Alondra Nelson, deputy director for science and society at the Office of Science and Technology Policy, told The Washington Post.

In addition, the guidelines recommend that “independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.”
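The blueprint does not prescribe a specific disparity test, but one widely used heuristic in employment settings is the “four-fifths rule” impact ratio, which compares each group’s selection rate to that of the highest-rate group. The sketch below is a minimal illustration of that calculation; the data, group labels and function name are hypothetical, not drawn from the blueprint.

```python
from collections import Counter

def impact_ratios(records):
    """For (group, selected) pairs, return each group's selection rate and
    its impact ratio versus the highest-rate group. Under the common
    four-fifths heuristic, a ratio below 0.8 is a flag for adverse impact."""
    selected, total = Counter(), Counter()
    for group, was_selected in records:
        total[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / total[g] for g in total}
    top = max(rates.values())
    return {g: (rate, rate / top) for g, rate in rates.items()}

# Illustrative screening outcomes, not real data: (group label, passed screen)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
for group, (rate, ratio) in impact_ratios(outcomes).items():
    print(f"group {group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

Under that heuristic, a ratio below 0.8 for any group is commonly treated as a flag for further review, which is the kind of “disparity testing result” an algorithmic impact assessment might report.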

Lagunas believes these new guidelines could prompt employers to put their AI tools through regular bias audits, like those that become mandatory for employers in New York City starting Jan. 1, 2023.

“Any vendor that you’re working with that is utilizing AI, they were already prepared to run audits for you before this [NYC] legislation came to pass. This is a really good and important best practice,” says Lagunas.

Lagunas says that while he was recruiting for GM, AI recruitment solution providers were more than willing to audit their algorithms when HR and recruiters requested it.

“I can’t tell you the documentation that we got from our partners at Paradox and HiredScore when we were evaluating them as providers,” he says. “These vendors know what they’re doing, and I think it’s been difficult for them to build trust with HR leaders because HR leaders are operating on a need to ‘de-risk’ everything.”

That said, Lagunas thinks the federal guidelines will help HR as well as technology vendors.

“It’s not just that if the vendor’s client is misusing the technology, their client is in the hot seat. There is going to be some kind of liability,” he says. 

“I would say the vendors don’t need legislation to get serious. They already are.”

Phil Albinus
Phil Albinus is the former HR Tech Editor for HRE. He has been covering personal and business technology for 25 years and has served as editor and executive editor for a number of financial services, trading technology and employee benefits titles. He is a graduate of SUNY New Paltz and lives in the Hudson Valley with his audiologist wife and three adult children.