Employee turnover at DocuSign was already low.
“We have a healthy attrition rate,” says Senior Director of Recruiting Susan Ross. When people do leave, she continues, they’re usually in the company’s sales lead-generation department at the San Francisco headquarters. Low as turnover was, though, recruiting still found room for improvement–$1 million worth of improvement, in fact.
By implementing technology that predicts whether job prospects will leave before the end of year one, Ross says, the company was able to avoid making roughly 11 bad hires, thereby saving more than $1 million in salary.
When it comes to preventing turnover, the first year is the most vulnerable: The Work Institute, a Franklin, Tenn.-based research company, reports that 34 percent of voluntary exits occur during year one, with another 18 percent happening by the end of year two.
Regardless of how long employees are–or aren’t–in a job, attrition itself is on the rise: Since 2010, employee separations have gone up 14 percent, according to the February 2017 U.S. Bureau of Labor Statistics Job Openings and Labor Turnover Survey. The survey found a 46-percent increase in voluntary departures. For every employee who leaves, Work Institute President Danny Nelms says, a company can expect to lose “about 33 percent of base pay.” The result is a $536-billion drain across U.S. businesses.
If there were a way to tell pre-hire whether someone would quit that first year, it’d be big business. Employee-recognition platforms such as YouEarnedIt and Growbot, and satisfaction-polling software such as TINYpulse and Polly, work to prevent turnover by keeping current employees happy. Unfortunately, these tools typically don’t use the data they collect to predict, before a hire is made, whether someone will leave. But Nelms says there’s hope, as new solutions venture onto the market.
Analytics tools that enable employers to perform predictive modeling of job candidates at the pre-hire stage are becoming increasingly accessible, says Nelms. This accessibility, he continues, is popping up in two ways: Clients such as DocuSign are adapting tech originally designed for other purposes–such as checking references–to predict turnover. Then there are the software providers designing tech expressly for that purpose.
Rating the References
In the first category, there’s SkillSurvey, the company that Ross says saved DocuSign $1 million. Based in Berwyn, Pa., SkillSurvey started as a reference-checking and credentialing platform, but because the company records references’ responses, it lets recruiters pattern-match answers against those from existing, tenured employees.
Today, President and CEO Ray Bixler says companies look to SkillSurvey’s solution to do one of four things for them: source references, automate the reference-checking process, ensure fair-hiring compliance or improve quality of hire–which, by definition, encompasses retention and predicting performance and turnover.
The way it works is simple: SkillSurvey emails pre-set questionnaires to applicant references. One by one, those references respond, rating the candidate from one to seven on 30 behaviors. They also answer questions such as: What are the applicant’s top three strengths? What are three areas for improvement?
Ross says responses are normed “against a greater population … of other people in the same role based on different core competencies”–not just at DocuSign or another client company, but industry-wide. Applicants are categorized into three groups–low, medium and high–with written answers used to augment findings. The lower a candidate scores, the more likely he or she is to leave early.
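The norming Ross describes can be pictured with a minimal sketch. Everything here is hypothetical–SkillSurvey’s actual scoring is proprietary–and it assumes the industry-wide population of reference ratings is summarized by a mean and standard deviation, with candidates bucketed by percentile:

```python
from statistics import NormalDist

def norm_candidate(candidate_ratings, population_mean, population_sd):
    """Bucket a candidate's average reference rating (1-7 scale) as
    low / medium / high against a role-wide population of ratings.
    Assumes the population is roughly normally distributed."""
    avg = sum(candidate_ratings) / len(candidate_ratings)
    # Percentile of this candidate's average within the population
    pct = NormalDist(mu=population_mean, sigma=population_sd).cdf(avg)
    if pct < 1 / 3:
        return "low"
    if pct < 2 / 3:
        return "medium"
    return "high"
```

Under this toy model, a candidate whose references average 6.25 against a population mean of 5.0 lands in the “high” bucket–the group DocuSign draws its hires from–while an average of 3.25 lands in “low,” the bucket associated with early attrition.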
“[DocuSign doesn’t] hire anybody that’s at least not mid-to-high against this norming of the greater population,” Ross says. “Typically, when you have poor references, obviously you have a chance of attrition or higher turnover rate.”
Pattern matching enables employers to determine the likelihood of turnover, Bixler says. “We’re not making any judgments, just basically sharing the data.” DocuSign, for example, reports that the data has shown that, if a candidate’s references don’t respond within two days, he or she is a flight risk.
“If you don’t hear from the references and nobody’s filling out the survey, that’s usually not a good sign,” Ross says.
Playing to Their Strengths
Do-it-yourself data matching is the exact opposite of Pymetrics’ approach. The New York-based start-up analyzes candidates’ neurobehavior in order to predict their odds of staying in the job for one year. And unlike SkillSurvey, Pymetrics was built for that purpose from the start. Over the last four years, CEO and co-founder Frida Polli says, the company has raised $16.6 million to get inside applicants’ heads and quantify their chances of staying at least one year.
To do this, applicants play 12 online games: Two are decision-making exercises where they choose how much money to give a stranger, one is a round of cards that looks like solitaire, and in another, candidates rapidly click right or left in response to images displayed on screen. But, “they’re not really just games,” Polli says. “They are scientific exercises that have been developed by the cognitive neuroscience community globally.”
When the exercises are complete, Pymetrics provides a report detailing how reward-driven applicants are, whether they’re more critical than others, and their scores on 90 other cognitive and emotional characteristics.
Before testing an applicant, though, Pymetrics conducts a local job-validation study, a technique Polli says is “used in industrial organizational psychology to indicate that you have made the link between these traits and performance at a company.”
Pymetrics asks hiring companies to select their current top employees. They too play the games and, if they test as being critical or reward-driven, for example, those attributes become the ones required.
“[T]hat’s how we establish the traits that are important for good performance in that role at that company,” Polli says. “That’s how we make the links between what we’re measuring and job-related outcomes.”
Of course, for this to work, a company has to quantitatively–and not qualitatively–define its high performers. By nature, Pymetrics’ pattern matching will point recruiters toward applicants who mirror those selected. For results to be accurate, the sample pool that candidates are judged against must truly be the business’ best, and not just those management thinks are doing well. Polli says her company doesn’t help determine who top employees are, but does recommend clients look for hard metrics–such as sales numbers for a business development job–“instead of just focusing on evaluations from managers.” This comparison group must also include at least 50 employees working in the same role for accurate results.
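The validation step Polli describes–averaging the traits of at least 50 top performers into a baseline, then comparing candidates against it–might be sketched like this. The function names, trait names and use of Euclidean distance are all assumptions for illustration, not Pymetrics’ actual method:

```python
import math

def build_baseline(top_employee_traits):
    """Average each trait score across the employer's top performers.
    The source notes at least 50 employees in the same role are needed
    for accurate results, so we enforce that minimum here."""
    n = len(top_employee_traits)
    assert n >= 50, "need at least 50 employees for a stable baseline"
    keys = top_employee_traits[0].keys()
    return {k: sum(emp[k] for emp in top_employee_traits) / n for k in keys}

def candidate_fit(candidate, baseline):
    """Euclidean distance between a candidate's trait scores and the
    baseline; a smaller distance means a closer match to top performers."""
    return math.sqrt(sum((candidate[k] - baseline[k]) ** 2 for k in baseline))
```

The design choice worth noting is the one the article flags: the baseline is only as good as the sample. If the 50 employees fed into `build_baseline` were picked on managers’ impressions rather than hard metrics, every candidate comparison inherits that bias.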
Results at Unilever were so accurate, Pymetrics spokesperson Rose Dawydiak-Rapagnani says, that the corporation has stopped using candidate resumes as a screening tool, relying instead on Pymetrics’ tests. Unilever did not respond to requests for comment, but Accenture spokesperson Sam Hyland confirms his company does use the tool. After implementation, Polli says, the percentage of employees staying a year or more with Accenture and other clients increased by as much as 60 percent, depending on initial attrition rates.
At large corporations where hundreds work the same job, finding 50 successful employees to test might be easy. But for jobs on a small team, where individual personalities impact overall project outcome, it’s not. An applicant’s neurobehavior may be perfect for the actual job but not for getting along with that position’s boss. Nelms says true predictions of employee viability should ask, “How is this person going to interact with the workplace conditions?”
Consider the Context
The downside to both SkillSurvey and Pymetrics is that, in predicting the likelihood of a candidate making it past the first year, both assume employees exit because they’re not a good fit for the role. After all, that’s generally why someone gets fired and is the outcome DocuSign says it avoided by using SkillSurvey results to opt out of hiring 11 people. But Work Institute’s 2017 Retention Report found only 7 percent of employees quit due to job characteristics. Eleven percent leave over management behavior and another 6 percent because of work environment, which can be largely supervisor-driven.
Those 50 successful employees from Pymetrics’ sample pool may be successful not just because of their neuro wiring, but because they have great supervisors able to coach the best out of them. Nelms gives the example of a company with 40 different managers: “Well, all 40 of those managers manage slightly different, right? Am I going to get my AI to be so specific to be able to understand exactly how this person wants to be managed, and that’s exactly the person they’re going to be managed by?”
“Workplace conditions are … in a constant state of flux,” Nelms says. Test 50 employees and develop a baseline, then the role shifts and a company has to start all over again. “Companies are constantly changing,” he says. These changes make it hard to predict how someone will respond to the work environment based on personality traits alone. “I think we’re definitely years away from that, if [it’s possible] at all,” he says.
Perhaps that’s why so many personality-matching tools focus on post-hire, where full team dynamics are better known. In his 2018 HR Technology Disruptions Report, Bersin by Deloitte Principal Josh Bersin mentions Crunchr, an Amsterdam-based start-up selling six HR apps. One of the apps, Talent, analyzes individuals’ personality traits against job requirements. But, as with satisfaction-polling tools and other turnover tech, Talent’s focus is on existing employees: Are they in the right job? Are they currently at risk for leaving?
At DocuSign, the pre-hire personality test of choice is Westwood, Mass.-based Predictive Index. The software does not predict which candidates will make it to the one-year mark, but it resembles Pymetrics in that it tests for traits considered indicators of on-the-job success–and it likewise tests current employees to create its control group.
“We’ve analyzed our top-performing enterprise and commercial [account executives] and we have a baseline pattern on four kinds of core competencies, which are dominance, extroversion, patience and formality,” Ross says. “We assess [applicants] based on the core competencies using that Predictive Index.”
PI has two tests: one for cognitive traits and another for behavioral. Combined results outline what Ross calls a candidate’s “core DNA.” DocuSign then uses SkillSurvey data with PI results to predict how uncovered traits may manifest on the job. Together, Ross says, the two tools create a “front-end and back-end quality check so we can make sure this is a good hire.”
And a good hire–no matter how many years they stay–is every recruiter’s goal.