On Oct. 30, 2023, President Biden issued a wide-ranging executive order to address the development of artificial intelligence in the United States, titled the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order seeks to address both the “myriad benefits” of AI and what it calls the “substantial risks” the technology poses to the country.
Through the executive order, described as a “federal government-wide” effort, the administration charges several agencies, including most notably the Department of Labor, with mitigating the impacts of employers’ use of AI on job security and workers’ rights. The job of complying with Labor’s regulations will surely fall to human resource and employee relations professionals.
Remedies for worker displacement
In a section dedicated to “Supporting Workers,” the White House concentrated on a much-feared outcome of the AI boom: the displacement of human workers. To address this potential risk, the order directs the Secretary of Labor to evaluate the ability of federal agencies to support workers displaced by the adoption of AI and “other technological advances.”
Part of the support solution outlined by the order is to combine worker retraining programs under the Workforce Innovation and Opportunity Act (WIOA) with existing unemployment insurance administered by the DOL. In addition, the Labor Secretary is tasked with identifying potential legislative measures, in consultation with the Commerce and Education Departments, to expand retraining opportunities to meet the envisioned AI workforce demand.
Reading the tea leaves, it would not be at all surprising for the administration to encourage employers to aid in worker retraining efforts through existing incentives, such as tuition reimbursement programs. HR professionals can prepare for this possibility by reviewing their company’s current education assistance programs, or by advocating for such programs where none exist, to position their companies to take advantage of potentially favorable federal treatment.
Organized labor is front and center
Echoing the White House’s October 2022 “Blueprint for an AI Bill of Rights,” the executive order asks the Secretary of Labor to establish AI “principles and best practices” for “labor standards” and “job quality,” addressing:
- the displacement risks and opportunities posed by AI’s effects on job skills and evaluation of workers;
- labor standards pertaining to “equity, protected activity, compensation, health and safety implications”; and
- the implications of AI-assisted collection of employee data.
In developing these “principles and best practices,” the president directed the Secretary of Labor to consult with outside entities, specifically mentioning labor unions and workers’ groups, but not employers’ groups.
Based on the explicit reference to unions and workers’ groups, it is expected that the National Labor Relations Board will play a key role in implementing these “principles and best practices.” Tracking the first principle, the NLRB is likely to push employers to bargain with unionized employees over the use of AI tools in the workplace in areas such as job augmentation and job replacement.
The specific issues highlighted in the second principle indicate that the NLRB will also pay close attention to how employers use AI to determine compensation for represented employees, as such decisions are typically a subject of bargaining. Worker monitoring is spotlighted in the third principle, signaling that the NLRB may use the unfair labor practice process to protect concerted organizing activity that can be caught up in employers’ monitoring efforts—even when such efforts are done under the auspices of legitimate performance assessment or compliance with safety regulations.
As a result, employee relations professionals should be cognizant of any systems their companies use, or are preparing to implement, that may affect worker evaluation and subsequent employment decisions. If a system has an AI or algorithmic component, employers may be required to bargain over its use.
Even if the system appears to serve an otherwise benign function, such as monitoring for employee safety, it could nonetheless constitute unlawful surveillance of employees’ discussions of working conditions. For human resource and employee relations professionals to flag these risks, they must become familiar with the capabilities of such systems, which requires their involvement at the planning and development stages of the system’s use.
Again echoing points from the White House’s AI Blueprint, the executive order directs the Secretary of Labor to publish guidance for federal contractors regarding nondiscrimination in hiring systems utilizing AI and other “technology-based systems.” This mandate comes hot on the heels of a weighty update to the Office of Federal Contract Compliance Program’s Scheduling Letter and Itemized Listing, which now obligates federal contractors to provide documentation of their use of AI, algorithms and other automated or technology-based selection procedures.
As a result of the order’s directive, the DOL will likely seek to augment the latest OFCCP rules by requiring federal contractors to conduct audits of their AI-based hiring systems for discriminatory bias. Some states, such as New York, already require that companies using computer-aided selection tools obtain third-party audits and publish the results of those audits.
It’s a reasonable assumption that the department will take a cue from these existing state frameworks. If your company already uses these tools, then consider finding an external vendor to certify that the selection tools fairly assess applicants.
While the president’s executive order places no immediate requirements on employers, the administration has provided useful guideposts as to where enforcement and rulemaking are headed. Human resource and employee relations professionals should heed these signs and prepare their businesses for the government scrutiny that will inevitably accompany the game-changing benefits of artificial intelligence.