The White House issued an executive order today outlining requirements for artificial intelligence systems, with President Biden stating broadly that the U.S. aims to work with allies to advance an international governance framework for the rapidly changing technology. “I know we can meet this moment with hope, not fear,” President Biden said before signing the order.
More specifically, the order addresses several crucial aspects of AI, aiming to balance the technology's promise against its potential risks; some of these topics directly affect the responsibilities of human resource leaders.
Key points in the document include standards for the secure and responsible development of AI technologies, protection of individual privacy and data security, and avoidance of bias and discrimination in the use of AI. It also notes the need for a competitive landscape within the AI development sector, as well as potential regulations relating to worker and consumer protections.
AI is an incredibly hot topic within HR, as evidenced by the sheer number of solutions on display and conversations about the technology at the recent HR Technology Conference in Las Vegas. Some practitioners believe it is revolutionizing the industry; others are wary; still others are cautiously optimistic and taking care in implementing AI solutions.
Among the primary concerns are discrimination and bias, as HR Technology Conference chair and president of H3 HR Advisors Steve Boese wrote earlier this year.
“The widespread adoption of AI technology for HR functions has recently drawn the attention of federal agencies such as the Department of Justice and the Equal Employment Opportunity Commission, which are interested in how these AI technologies could influence hiring and other decision-making processes,” he wrote. “Specifically, these agencies are examining how AI and other advanced technologies in hiring and other processes may negatively impact people with disabilities.”
AI executive order: What should HR leaders consider?
There’s more to come as guidance rolls out. For instance, the order promises a report on the potential impact of AI on the workforce. This is a topic that has flooded headlines and occupied analysts and business leaders with a predictive numbers game. A summer 2023 report from McKinsey Global Institute, for example, estimates 12 million occupational shifts by 2030 due to the proliferation of AI-based solutions.
According to McKinsey, building a future-ready workforce will continue to be a priority for employers. Earlier this year, the Future of Privacy Forum working group released best practices for using AI in hiring and employment activity. The organization—along with software companies ADP, Indeed, LinkedIn and Workday—created recommendations for HR teams.
These include obligations to clearly define AI responsibilities, promote transparency in using AI in consequential activities such as terminations, avoid bias and discrimination and commit to informed human oversight.
Amber Ezzell, policy counsel at the Future of Privacy Forum, says that employment is one of the high-risk use cases for AI tools and that this executive order highlights the need for safeguards that protect against discrimination.
“When AI tools used for employment are developed and implemented responsibly, they have the power to transform worker creativity and productivity, expand opportunities for individuals who may have otherwise been excluded due to underlying bias and discrimination, and increase trust in decision making,” says Ezzell.
Jason Albert, global chief privacy officer at ADP, told HRE the executive order is the federal government’s most comprehensive effort to regulate the risks and adoption of AI. He says that ADP welcomes efforts that promote best practices for using these tools in the workplace.
“In the area of employment, AI holds much potential to help workers identify skills, suggest training paths and unleash creativity,” says Albert, noting that responsibility, transparency and “explainability” are the obligations of those who develop and use AI in an employment context.
Last year, the White House introduced the Blueprint for an AI Bill of Rights, and in early 2023, the government released the AI Risk Management Framework, but neither tackles enforcement. Today’s executive order will initiate accountability measures overseen by several federal departments, though specific details remain limited, with more information expected in the future.