AWS on responsible AI: ‘A growing number of jobs in this space’

As legislation and regulation around artificial intelligence continue to take shape, the responsibility of ensuring ethical practices and mitigating risks has initially fallen on the shoulders of the tech industry.


This has left HR leaders to rely on partner platforms to deliver accountability. Diya Wynn, the Responsible AI lead at Amazon Web Services (AWS), has taken on this challenge, working with her organization’s customers to pursue a future where AI is both powerful and accountable. She says responsible AI is not just a tech-centric endeavor; it requires integration into teams, consideration of diverse users and collaboration with academia and government.

Wynn’s journey at AWS spans more than six years, during which she has drawn on her background in computer science and her focus on career mobility to help organizations transition to the cloud. At her company’s re:Invent conference, she spoke to HRE about preparing the younger generation to lead the future workforce in a world where schools may not necessarily be keeping pace with technological advancements.

Integrating AI and building trust

Diya Wynn, Senior Practice Manager of Responsible AI, AWS

In a customer-facing role dedicated to responsible AI, Wynn ensures that the impact of artificial intelligence is not limited to internal discussions at AWS. Instead, the focus is on influencing the vast ecosystem of AWS customers—millions of users actively creating cloud-based products with AWS.

To mitigate risks and build trust as new AI use cases are developed, Wynn advocates for defining fairness from the outset and continually assessing unintended consequences that may emerge “out in the wild.” Testing for anti-personas—those users a product is not intended to serve—becomes a requirement for developers. In other words, responsibility requires predicting and mitigating what a bad actor might do if the tools fall into their hands.

The journey toward responsible AI doesn’t end with testing; it involves ongoing training and education. Bias, whether initiated by people or data, can impact products, says Wynn. The key is to educate those who develop AI-based tools about their biases to prevent them from influencing the technology they create.

Not just a ‘diversity issue’

Though bias has gotten attention as a leading risk of using AI without scrutiny, Wynn warns that narrowing in on bias can create a limiting perspective. “Don’t relegate this to just a diversity issue,” she says. “Responsible AI is an operating approach; we can’t just decide to do it without consideration for the people, process and tech that is required.”


AWS provides frameworks to enable customers to implement their products securely. This is done with embedded guardrails on tools like Bedrock, which offers clients a choice of foundation models from providers such as Anthropic, Cohere and Meta, on which customers can build generative AI applications with controls specific to their use cases. According to Wynn, the shared responsibility model ensures that customers building on AWS have the tools and transparency needed to navigate the responsible AI landscape.

A need for action and expertise

Generative AI carries more risk than traditional artificial intelligence. Large language models present complex challenges around transparency, explainability, privacy, intellectual property and copyright.

These issues are surfacing in real-life examples. Wynn says that concerns about false images, particularly deepfakes, are valid, referencing the 2023 AI-generated image of the Pope in a white puffer coat as an example. Another concern is data protection, especially for organizations that have had proprietary information exposed to public models such as ChatGPT. Earlier this year, Samsung banned the use of ChatGPT and other consumer-level generative AI tools after employees accidentally fed sensitive code to the platform.

These instances are causing many employers to tap the brakes on gen AI, but sometimes at the cost of planning and progress. Wynn has witnessed significant interest in “having conversations about responsible AI but less movement on doing the work.”

However, in a recent study, AWS shared findings indicating that nearly half of business leaders (47%) plan to invest more in responsible AI in 2024 than they did in 2023. The anticipation of imminent regulations worldwide has heightened the awareness of the need for responsible AI practices, Wynn says. As the industry moves at warp speed, AWS acknowledges that responsible AI is not just a trend—it’s a crucial element that cannot be ignored.

When asked about the likelihood of new accountable AI positions emerging, Wynn suggests it is indeed a possibility: “I think we will see more of that, a growing number of jobs in this space.” She acknowledges that some organizations might delay creating dedicated positions until official regulations are established. Nevertheless, she advocates integrating “learning paths” into existing job descriptions as a proactive approach to instill accountability and procedural readiness. In other words, she says, don’t wait—start where you are, with what you have.

Jill Barth
Jill Barth is HR Tech Editor of Human Resource Executive. She is an award-winning journalist with bylines in Forbes, USA Today and other international publications. With a background in communications, media, B2B ecommerce and the workplace, she also served as a consultant with Gallagher Benefit Services for nearly a decade. Reach out at [email protected].