A global outlook on 13 AI laws affecting hiring and recruitment

Richard Mendis
Richard Mendis is the CMO at HireLogic. He has more than 20 years of experience in the enterprise software industry and currently works on AI solutions to help companies hire smarter and faster.

The use of artificial intelligence has become a focal point for regulatory scrutiny. Multiple national and international efforts, some already enacted and others underway, seek to identify AI use cases and provide regulatory frameworks or guidance to govern them. Within the United States, there are federal-level discussions on AI regulation, but states and municipalities are moving faster.

In 2024 alone, U.S. state legislatures have introduced more than 400 AI-related bills, roughly six times the number introduced in 2023. With 16 states having already enacted AI legislation and no federal law in sight, this intricate patchwork is challenging to keep up with, especially for businesses operating across the U.S. and internationally.

Determining which of these myriad national and local AI laws impact hiring and recruitment adds to the complexity. This article provides an overview of legislation affecting hiring and recruitment processes, so you can reference it as you shape your organization’s AI policies and evaluate AI solutions.

Enacted or imminent AI laws impacting HR

The EU AI Act

The European Parliament approved the highly anticipated AI Act on March 13, 2024. It encompasses the regulation of high-risk AI systems, enforces transparency requirements on limited-risk AI systems and leaves minimal-risk AI systems mostly unregulated.

Although the act only applies to organizations that operate in the European Union, other countries may enact a similar framework eventually, and vendors that operate in multiple countries will likely start to support the EU AI Act as a baseline, much like what happened when GDPR privacy regulations emerged. For now, the Blueprint for an AI Bill of Rights is the only comparable set of guidelines for the U.S.

How it applies to hiring and recruitment: The AI Act classifies the use of AI in employment as high-risk. Hiring professionals should evaluate how their selected AI solutions work and avoid those that use biometric data or provide subjective information on emotion or sentiment. Any solutions that remove human oversight from the hiring process (e.g., making a solely AI-driven decision on whether a candidate moves to the next stage) should also be avoided.
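
To make the human-oversight point concrete, here is a minimal sketch of a human-in-the-loop gate: the AI output stays advisory, and no candidate advances or is rejected without a named reviewer’s sign-off. All names and fields are illustrative assumptions, not any vendor’s API.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)

@dataclass
class ScreeningRecommendation:
    candidate_id: str
    ai_score: float  # vendor-provided relevance score (illustrative)
    rationale: str   # human-readable explanation, kept for auditability

def advance_candidate(rec: ScreeningRecommendation,
                      reviewer_approved: bool,
                      reviewer_id: str) -> bool:
    """Advance a candidate only when a named human reviewer signs off.

    The AI output is advisory only: no candidate is auto-advanced or
    auto-rejected, keeping a human in the loop for the final decision.
    """
    if not reviewer_approved:
        return False  # the decision stays with the human reviewer
    # Record who approved and what the AI suggested, for audit purposes.
    logging.info("candidate=%s score=%.2f approved_by=%s",
                 rec.candidate_id, rec.ai_score, reviewer_id)
    return True
```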

Detailed guidelines for employers and recruiters will be forthcoming, likely covering transparency, documentation and bias reduction. Many existing AI solutions meet these criteria, allowing you to integrate them into your tech stack without interruption as the law evolves and enforcement begins.

Canada’s Artificial Intelligence and Data Act (AIDA)

Similar to the EU AI Act, the proposed AIDA would regulate the use of high-impact systems by companies that operate in Canada. The AIDA companion document offers insight into the types of systems that will be targeted by future AI regulations, including automated decision tools, screening solutions and biometric systems. The guidelines and regulations are expected to take effect no earlier than 2025.

How it applies to employers: High-impact systems used for employment will be subject to forthcoming requirements around privacy, transparency and fairness. Organizations that operate in Canada must monitor these guidelines as they’re unveiled to ensure compliance.

China’s Internet Information Service Algorithmic Recommendation Management Provisions

Chinese law requires transparency and audits of recommendation algorithms, mirroring efforts seen in EU legislation. The legislation also establishes criteria for how algorithms are created and implemented and requires AI developers to disclose certain information to the government and the public.

How it applies to employers: Employers must align with the same recommendations outlined in the EU AI Act, particularly focusing on talent management solutions utilizing recommendation engines. If your organization operates in China, thoroughly vet your AI vendors to ensure their solutions comply with the recommendation algorithm and transparency requirements.

The Ministry of Electronics and Information Technology (MeitY) AI advisory

India’s recent AI advisory states that AI must not exhibit inherent bias or discrimination, encourages providers to disclose the potential unreliability of any AI that has not been thoroughly tested, and calls for measures to prevent deepfakes.

How it applies to employers: The provision against AI that demonstrates inherent bias or discrimination is most relevant to employers, as they can be liable for using solutions that introduce new biases to hiring or talent management processes. Employers must do their due diligence when assessing AI solutions used for talent management in India and ensure there is always human oversight for talent-related decisions.

New York’s Automated Employment Decision Tools (AEDTs) law

In New York City, employers and employment agencies are barred from using AEDTs, which commonly use AI and machine learning, unless they have conducted a bias audit and provided the necessary notices. Enforcement of this New York AEDT law (Local Law 144) began in July 2023.

How it applies to employers: This law takes ensuring fairness and transparency a step further by requiring that employers conduct bias audits on AEDTs before integrating them into their hiring processes. Multiple employers or recruiting agencies may use the same bias audit, and vendors may have an independent auditor conduct an audit of their tools, which reduces some of the barriers to compliance.

There is some gray area as to which software may be considered an AEDT, so before making final decisions, check with your vendors and legal counsel on how this law may or may not apply to specific solutions you are using or evaluating. It may come down to the use case.
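
To illustrate what a bias audit measures: Local Law 144 audits report an “impact ratio” for each demographic category, the category’s selection rate divided by the selection rate of the most-selected category. The sketch below computes those ratios from hypothetical counts; an actual audit must be conducted by an independent auditor and cover the categories the law specifies.

```python
def impact_ratios(selected: dict[str, int],
                  applicants: dict[str, int]) -> dict[str, float]:
    """Compute Local Law 144-style impact ratios from selection counts.

    A category's impact ratio is its selection rate divided by the
    selection rate of the most-selected category.
    """
    rates = {cat: selected[cat] / applicants[cat] for cat in applicants}
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical audit data: applicants and advancements per category.
applicants = {"group_a": 400, "group_b": 350, "group_c": 250}
selected = {"group_a": 120, "group_b": 70, "group_c": 45}

for category, ratio in impact_ratios(selected, applicants).items():
    print(f"{category}: impact ratio = {ratio:.2f}")
```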

Illinois’ Artificial Intelligence Video Interview Act

The Artificial Intelligence Video Interview Act mandates that companies operating in Illinois obtain consent before recording interviews, inform applicants if AI is going to analyze their recorded interviews and specify the traits or characteristics that AI will be assessing.

How it applies to employers: Employers must disclose when they use AI solutions that record and analyze video interviews with job applicants and be transparent about the characteristics that the AI will use to evaluate them. Recorded videos may only be shared with people or other technology solutions required to evaluate the applicant. Applicants have the right to request their video recording and analysis be destroyed within 30 days of submitting their request. In practical terms, obtaining consent from all applicants will be cumbersome, so many employers will likely eschew the use of video analysis in the recruiting process.
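
In engineering terms, the 30-day destruction requirement maps to a simple retention workflow. The sketch below is a minimal illustration, assuming hypothetical storage helpers; a real system would also need to reach vendor-held copies.

```python
from datetime import date, timedelta

DELETION_DEADLINE = timedelta(days=30)  # the act requires deletion within 30 days

def delete_video(applicant_id: str) -> None:
    # Hypothetical: remove the recording from every store, including vendors'.
    print(f"deleted video for {applicant_id}")

def delete_analysis(applicant_id: str) -> None:
    # Hypothetical: remove any AI-generated evaluation of the recording.
    print(f"deleted AI analysis for {applicant_id}")

def handle_deletion_request(applicant_id: str, requested_on: date) -> date:
    """Process a destruction request and return the statutory deadline."""
    delete_video(applicant_id)
    delete_analysis(applicant_id)
    return requested_on + DELETION_DEADLINE

deadline = handle_deletion_request("applicant-123", date.today())
print(f"must be complete by {deadline}")
```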

Maryland’s Facial Recognition Law (HB 1202)

Maryland’s AI law, HB 1202, focuses on regulating the use of facial recognition technology during job interviews. The law imposes limitations on the acquisition, storage and use of facial recognition data.

How it applies to employers: Employers and recruiting agencies must obtain explicit consent from applicants to create a facial template during a job interview. Consent must be provided via a specific waiver. Similar to the Illinois law, the consent requirement means many employers will likely forgo facial recognition technology in the recruiting process.

Executive Order 14110 for the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence

Executive Order 14110 establishes a unified national strategy for regulating artificial intelligence. The policy objectives set forth in the executive order include fostering competition within the AI sector, mitigating potential threats to civil liberties (including worker rights) and national security posed by AI technologies, and securing America’s position as a leader in global AI competitiveness.

How it applies to employers: Because the executive order tasks almost every federal agency with adopting AI governance policies, employers should expect additional federal rules and guidance on AI to follow. Section 6, “Supporting Workers,” includes a clause on principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ wellbeing and maximize its potential benefits. This order has no immediate impact on hiring and recruiting, but it is a harbinger of what’s to come.

General laws that apply to using AI in employment

Americans with Disabilities Act (ADA)

The ADA is a longstanding civil rights law prohibiting discrimination against individuals with disabilities in all areas of public life, including employment, education, transportation and public accommodations.

How it applies to employers: The ADA, enacted in 1990 before widespread AI adoption, extends to the use of AI in hiring and recruitment, mandating non-discrimination, accessibility and reasonable accommodations for applicants with disabilities. The ADA’s Guidance on Algorithms, Artificial Intelligence and Disability Discrimination in Hiring states that employers can be held accountable if their use of software, algorithms or artificial intelligence leads to failures in providing or considering reasonable accommodation requests from employees, or if it inadvertently screens out applicants with disabilities who could perform the job with accommodation. For example, a person with a vision impairment must be offered an alternative to an AI-powered skills evaluation test that requires them to see.
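
One practical pattern consistent with this guidance is to route applicants who request an accommodation to an accessible alternative before any AI assessment runs. The sketch below is purely illustrative; the assessment names and mapping are assumptions, not a mechanism the ADA prescribes.

```python
# Hypothetical mapping from each AI-powered assessment to an accessible
# alternative that evaluates the same job-related skills.
ACCESSIBLE_ALTERNATIVES = {
    "ai_visual_skills_test": "orally_administered_skills_test",
    "ai_video_interview": "live_interview_with_recruiter",
}

def select_assessment(default_assessment: str,
                      accommodation_requested: bool) -> str:
    """Route applicants who request an accommodation to an alternative.

    The applicant is never screened out merely because the default
    AI assessment is inaccessible to them.
    """
    if accommodation_requested:
        # Fall back to a human-run process if no mapped alternative exists.
        return ACCESSIBLE_ALTERNATIVES.get(default_assessment,
                                           "manual_review_by_recruiter")
    return default_assessment

print(select_assessment("ai_visual_skills_test", accommodation_requested=True))
```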

Title VII of the Civil Rights Act of 1964

Title VII, enforced by the Equal Employment Opportunity Commission (EEOC), prohibits discrimination based on race, color, national origin, religion or sex (including pregnancy, sexual orientation and gender identity).

How it applies to employers: In 2021, the EEOC launched the Artificial Intelligence and Algorithmic Fairness Initiative to uphold civil rights laws and national values by ensuring that AI and automated systems used in hiring practices promote fairness, justice and equality. In May 2023, the EEOC issued a technical assistance document to help employers assess whether such systems may result in adverse or disparate impacts under Title VII. Noncompliance with these guidelines could result in penalties and legal consequences.
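
The EEOC document discusses the long-standing four-fifths rule as a rough screen for adverse impact: a group’s selection rate below 80 percent of the highest group’s rate warrants scrutiny. A minimal sketch with hypothetical rates:

```python
FOUR_FIFTHS = 0.8  # EEOC rule-of-thumb threshold for adverse impact

def adverse_impact_flags(rates: dict[str, float]) -> dict[str, bool]:
    """Flag groups whose selection rate falls below 80% of the top rate.

    A True flag is a signal to investigate, not a legal conclusion:
    the EEOC treats the four-fifths rule as a rule of thumb only.
    """
    top = max(rates.values())
    return {group: (rate / top) < FOUR_FIFTHS for group, rate in rates.items()}

# Hypothetical selection rates produced by an automated screening tool.
print(adverse_impact_flags({"group_a": 0.30, "group_b": 0.20}))
# {'group_a': False, 'group_b': True}  -> 0.20 / 0.30 ≈ 0.67 < 0.8
```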

The Age Discrimination in Employment Act (ADEA)

The ADEA prohibits discrimination against individuals aged 40 or older in hiring, promotion, termination, compensation and other aspects of employment conditions and benefits.

How it applies to employers: The EEOC has stated that employers cannot evade accountability for AI-driven discrimination by attributing it to a third-party technology provider. For instance, a screening tool that filters out candidates without specific educational qualifications could unintentionally discriminate against older applicants. In the case of EEOC v. iTutorGroup, the company faced allegations of age discrimination as its recruitment software automatically rejected older applicants for tutoring positions, exemplifying the potentially discriminatory impact of such systems.

The California Consumer Privacy Act (CCPA)

The CCPA and the California Privacy Rights Act (CPRA), the latter also known as Proposition 24, are state statutes designed to enhance privacy rights and consumer protection for residents of California. They give consumers more control over the personal information that businesses hold about them, requiring transparency about data collection, the right to access personal information and the ability to opt out of its sale.

How it applies to employers: Under the CCPA, employers must disclose to job applicants the categories of personal information collected, the purposes for which it is used and any third parties with whom it is shared. Applicants also have the right to access their data and request the deletion or correction of that data. As a result, any AI solution used by organizations that accept job applications from California residents must comply with these measures.
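
In practice, meeting these disclosure duties starts with a data inventory that can be rendered into an applicant-facing notice. The sketch below models the three CCPA disclosures per category of personal information; the categories and vendor names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DataCategoryDisclosure:
    category: str           # category of personal information collected
    purpose: str            # why it is collected and used
    shared_with: list[str]  # third parties that receive it

# Hypothetical inventory for an AI-assisted hiring pipeline.
INVENTORY = [
    DataCategoryDisclosure("resume contents", "candidate evaluation",
                           ["ai_screening_vendor"]),
    DataCategoryDisclosure("contact details", "interview scheduling",
                           ["scheduling_vendor"]),
]

def applicant_notice(inventory: list[DataCategoryDisclosure]) -> str:
    """Render the inventory as a plain-text notice for applicants."""
    lines = [f"- {d.category}: used for {d.purpose}; "
             f"shared with {', '.join(d.shared_with) or 'no one'}"
             for d in inventory]
    return "We collect the following personal information:\n" + "\n".join(lines)

print(applicant_notice(INVENTORY))
```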

The General Data Protection Regulation (GDPR)

The GDPR stems from the European Union but applies to organizations worldwide that process the personal data of individuals within the EU. It aims to ensure transparency and accountability in all business processes, particularly those that collect personal data, such as hiring and recruiting.

How it applies to employers: Employers must ensure compliance with GDPR requirements to protect the privacy rights of EU residents throughout the entire hiring process and when using AI solutions. Your AI solutions should never collect sensitive personal information, such as Social Security numbers or biometrics. You must also have transparent and secure processes for collecting, processing, storing, transmitting and deleting candidate data.
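
One simple safeguard consistent with this advice is a minimization filter that strips sensitive fields from candidate records before they reach any AI solution. A minimal sketch, assuming a hypothetical field list that you would replace with your own data inventory:

```python
# Hypothetical set of fields treated as sensitive and never forwarded
# to an AI vendor; extend it to match your own data inventory.
SENSITIVE_FIELDS = {"national_id", "biometric_data", "health_info",
                    "date_of_birth"}

def minimize_candidate_record(record: dict) -> dict:
    """Return a copy of the record with sensitive fields removed.

    Applying this filter at the boundary means downstream AI tools
    only ever see the minimized record.
    """
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

candidate = {"name": "A. Applicant", "skills": ["Python", "SQL"],
             "national_id": "XXXX", "biometric_data": "..."}
print(minimize_candidate_record(candidate))
# {'name': 'A. Applicant', 'skills': ['Python', 'SQL']}
```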

*

Given the array of regulations discussed and the hundreds of additional AI bills pending across states, it’s evident that the use of AI in hiring and recruiting will face rigorous enforcement in the future. However, most AI laws governing hiring aim to ensure fairness, transparency and legality, principles many AI vendors uphold regardless of legal requirements. Understanding these key laws empowers informed decision-making when integrating AI tools into your tech stack and hiring processes, so your organization uses AI ethically and legally.