8 principles for ethical AI at work, according to the White House

Employers and HR departments often wonder how regulation on artificial intelligence will impact the pursuit of innovation. Sessions have been dedicated to ethical AI at events such as HR Technology Europe, HRE’s Strategy Summit, Eightfold’s Cultivate and AWS re:Invent. And while the EU rolled out its AI Act, and several U.S. states and cities have pushed out AI-related laws, the U.S. federal government has been slow to initiate any legislation on AI.


This month, the White House issued a new set of standards, giving further shape to what is expected of AI deployment in the workplace. As directed by President Biden’s Executive Order on AI, the Department of Labor has established eight principles for the ethical development and deployment of AI systems at work.

Regarding the human resources practice, EEOC Commissioner Keith Sonderling has been on a speaking tour, reminding HR leaders of long-standing civil rights laws. He’s told several HRE audiences that HR’s emphasis should be on making appropriate employment decisions and that the decisions, not the tech, are under the jurisdiction of the EEOC.

Responsible corporate citizenship

Asha Palmer, Skillsoft

Asha Palmer, vice president of compliance at the enterprise learning platform Skillsoft, told HRE that some organizations treat responsible corporate citizenship as their North Star and move forward from there. Others wait to be regulated, which can result in two types of behavioral environments.

The first is exceptionally risk-averse, with a culture that tries to restrict AI use until firm regulatory guidance is in place. The opposite operates as if a lack of legislation means no guardrails are needed until further notice.

An organization that is ethically minded while also being innovative is careful to interpret guidance thoughtfully while moving forward with concepts that fit business needs and appropriate use outlines. Industry analyst Josh Bersin put it this way at Eightfold’s Cultivate Talent Summit in May 2024: “Don’t wait until [AI] matures to get to work trying.”


Related: AI regulation: Where the U.N. and other global leaders stand

While creating a responsible AI policy is a good starting point, says Palmer, HR leaders need to continue to push themselves and their peers to consider how compliance efforts can translate into day-to-day practice. “Terms and conditions don’t influence behavior,” says Palmer. “Get policy words off the page and into the hearts and minds of the workforce.”

This can be hard to do when an organization lacks regulatory guidance, which is why the news of a White House fact sheet is welcomed by many. While there isn’t much actionable direction, the document provides perspective on what the government expects from employers. The overarching themes emphasize protecting workers, giving them a voice, promoting responsible and ethical AI development and use, establishing robust governance, ensuring transparency and accountability, and respecting data privacy.

Palmer says that industry and business leaders don’t have a forum to comment on the principles for now, but “relentless incrementalism” is moving this governance process forward. In other words, keep putting one foot in front of the other. “We’ve all got thoughts, but what’s the action?” says Palmer.

Eight principles of AI systems in the workplace

According to the White House fact sheet on ethical AI:

  • Workers, especially those from underserved communities, should have input in the design, development and oversight of workplace AI systems.
  • AI systems should be designed and trained in a way that protects workers.
  • Organizations should have clear governance, oversight and evaluation processes for workplace AI systems.
  • Employers should be transparent about the AI systems used in the workplace.
  • AI systems should not violate workers’ rights to organize, health and safety rights, wage rights or anti-discrimination protections.
  • AI systems should assist, complement and enable workers while improving job quality.
  • Employers should support or upskill workers during AI-related job transitions.
  • Workers’ data used by AI systems should be limited, used for legitimate business purposes and handled responsibly.

The full fact sheet can be found here.

Jill Barth, hrexecutive.com
Jill Barth is HR Tech Editor of Human Resource Executive. She is an award-winning journalist with bylines in Forbes, USA Today and other international publications. With a background in communications, media, B2B ecommerce and the workplace, she also served as a consultant with Gallagher Benefit Services for nearly a decade. Reach out at [email protected].