Can today’s employee listening tools predict workplace violence?

With the United States experiencing a high number of shootings and instances of workplace violence (OSHA says one million employees report an incident of workplace violence every year), the dire statistics raise a question for HR leaders: Can AI-powered employee listening tools predict whether an employee could turn violent on the job?

Not yet, say industry experts, although some tools are able to flag certain employee behaviors that might warrant attention.


Today’s listening tools are designed to gather and analyze employee sentiment data and, in some cases, gauge whether a worker is likely to resign. But when it comes to predicting whether a particular individual is likely to act out violently at the workplace, most of these tools do not come close.

“We haven’t seen this kind of signal come out of these tools yet, but more and more passive listening technology is available,” says global analyst Josh Bersin, who will be a keynote speaker at the 2022 HR Tech Conference in Las Vegas on Sept. 13-16. Register for the event here.

Many of these employee listening solutions allow HR leaders to search for specific words and phrases that could be interpreted as threatening or that suggest an employee might warrant attention. Bersin says solutions such as Microsoft Viva Insights, Keen, Yva.ai (acquired by Visier), Cultivate (acquired by Perceptyx) and others now monitor emails, chat messages and other communications, and they can pick up stress, possible fraud and other instances of workplace abuse.

A number of leading providers of employee listening solutions declined to comment on this story.



Some AI-powered analytics providers do not believe that current HR technology can identify an employee with the potential for violence at work, while other technology firms believe such solutions already exist.

Current AI tools cannot predict whether an employee will turn violent, says Cody Fenter, senior solutions consultant for employee listening provider Perceptyx. Users of its technology can add what the company literally calls “potty mouth” words or phrases to their employee listening scripts, which then notify HR leaders when those terms appear.

“If an individual is going to be violent, it can categorize and thematically align the comments that are coming through in a negative fashion, and then thematically align them based on words they’re using that might represent violence,” he says.
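As a rough illustration of how a watch-list of words or phrases might drive an alert and group flagged comments by theme (a sketch only; the word lists, function names and alert hook below are hypothetical, not Perceptyx’s actual implementation):

```python
# Hypothetical sketch: flag messages containing watch-list phrases and
# group the hits by theme. All names and word lists are illustrative;
# real systems use tokenization rather than crude substring matching.
FLAGGED_PHRASES = {
    "threats": ["hurt someone", "make them pay"],
    "distress": ["can't take this anymore", "end it all"],
}

def scan_message(message: str) -> list[tuple[str, str]]:
    """Return (theme, phrase) pairs for every watch-list phrase found."""
    text = message.lower()
    return [
        (theme, phrase)
        for theme, phrases in FLAGGED_PHRASES.items()
        for phrase in phrases
        if phrase in text
    ]

def notify_hr(employee_id: str, hits: list[tuple[str, str]]) -> None:
    # A real system might text a team or open a case; here we just print.
    print(f"ALERT for {employee_id}: {hits}")

hits = scan_message("I swear I'll make them pay for this.")
if hits:
    notify_hr("emp-1042", hits)
```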

One solution that claims to search for signs of a toxic workplace and potential violence is CommSafe AI Safe Communication Software from CommSafe. “The keyword here being ‘potential,’” says CommSafe founder and CEO Ty Smith.

CommSafe looks for keywords and phrases combined with communication sentiment and tone, and the company believes it “understands nuance and context,” Smith adds.

“The solution is intelligent; therefore, it understands how to gather context from multiple interactions that include the potentially violent person. Again, the keyword is ‘potential,’” he says. 

Smith adds that just because his solution determines a person has the capacity to become violent at work, it doesn’t mean that person will actually become violent. “Rather, our solution simply provides early warning [signs] according to a person’s behavior and communication habits in the professional environment,” he says.

Although employee listening tools were primarily designed to measure the mood of workers, they can identify red-flag responses from employees and alert the CHRO.



According to Melissa Swisher, chief revenue officer and co-founder of employee listening solution provider Socrates AI, her company’s solution flagged a client’s employee who used one of the tool’s “red flag” phrases.

“We’ve had examples where people said, ‘I bought rope and I’m going to do something with it,’” she recalls. “It was entered into our application. We have all these words that are trigger words, and then [the system] texted and alerted the team.” (When confronted, the employee said they were joking.)


Today’s AI tools have an easier time detecting potential crimes being committed than sussing out what’s in a person’s thoughts.

“The main technique these tools use is natural language processing to identify stress, possible anger or distrust, or patterns of communication that predict bad behavior. When people start to use words or language that connote stress or risk, analysts can see that something’s going on,” Bersin says. “They can also see patterns of emails between certain individuals to find out if a bad actor is causing alarm.”
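Bersin doesn’t name specific models, but the kind of NLP sentiment pass he describes can be sketched with an off-the-shelf classifier; the confidence threshold and the streak logic below are assumptions for illustration, not any vendor’s method:

```python
# A minimal sketch of an NLP sentiment pass over messages, using an
# off-the-shelf Hugging Face classifier. The 0.9 threshold and the idea
# of escalating a streak of negatives are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model

messages = [
    "Great sprint everyone, thanks for the help!",
    "I am sick of being ignored around here.",
    "Nobody listens. One day they'll regret it.",
]

negative_streak = 0
for msg in messages:
    result = classifier(msg)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        negative_streak += 1
        print(f"High-confidence negative ({result['score']:.2f}): {msg}")

# Per Bersin, a pattern of messages, not any single one, is the signal
# an analyst would treat as worth a human review.
if negative_streak >= 2:
    print("Pattern flagged for human review.")
```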

Swisher believes that while AI tools will soon be able to drill down and bring some darker employee sentiments to light, current surveys can only capture a moment in time.

“If somebody’s having a bad day, they might be like, ‘This place stinks,’ or they could be on cloud nine and say, ‘It’s the greatest thing since sliced bread,’” she says. “You have to look at those things over time.”
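One way to operationalize Swisher’s point, that a single snapshot is noisy but a trend is meaningful, is a rolling average of sentiment scores over time. This is a minimal sketch with an assumed window size and made-up scores:

```python
# Illustrative only: track per-survey sentiment scores (-1 to 1) over
# time and flag a sustained decline, not a single bad day. The window
# size and threshold are assumptions, not any vendor's defaults.
from collections import deque

def rolling_mean(scores, window=4):
    buf = deque(maxlen=window)
    means = []
    for s in scores:
        buf.append(s)
        means.append(sum(buf) / len(buf))
    return means

weekly_scores = [0.6, 0.5, -0.8, 0.4, -0.2, -0.4, -0.6, -0.5]
means = rolling_mean(weekly_scores)

# One outlier (week 3) barely moves the average; the later run of
# negatives drags it down, and that trend is what merits a conversation.
for week, (score, mean) in enumerate(zip(weekly_scores, means), start=1):
    flag = "  <- sustained decline" if mean < -0.25 else ""
    print(f"week {week}: score {score:+.1f}, rolling mean {mean:+.2f}{flag}")
```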


For more on people analytics and the insights inside your HR data, save the date for the 2022 HR Technology Conference, which will feature at least five sessions on mental health, including “Mindful Managers: The First Step to a Mentally Healthy Workforce.” Learn more here.

Phil Albinus
Phil Albinus is the former HR Tech Editor for HRE. He has been covering personal and business technology for 25 years and has served as editor and executive editor for a number of financial services, trading technology and employee benefits titles. He is a graduate of SUNY New Paltz and lives in the Hudson Valley with his audiologist wife and three adult children.