
Cappelli: Testing’s complicated role in the return to workplaces

By: Peter Cappelli | July 27, 2020 • 3 min read
Peter Cappelli is HRE’s Talent Management columnist and a fellow of the National Academy of Human Resources. He is the George W. Taylor Professor of Management and director of the Center for Human Resources at The Wharton School of the University of Pennsylvania in Philadelphia. He can be emailed at hreletters@lrp.com.

Surveys of employers indicate that an important part of their plans for bringing employees back into their workplaces is to test employees to see if they are infected with the COVID-19 illness. That certainly sounds like a good idea. It would go a long way toward making returning employees feel safe if they knew that their co-workers were not sick.

Boy, that is harder to do than it sounds, though.

I discovered this because one of my family members in our little “pod” came down with all the symptoms of COVID-19—five of the most common ones, and no symptoms that were not consistent with the condition.  So they got the viral test, which shows whether you have a current infection (the antibody test indicates whether you had the infection and developed antibodies to fight it, which might make you immune to further infection). The test came back negative. What should we conclude?


This is where things get sticky because, under the right circumstances, even good tests can be wrong more often than right. It has nothing to do with the test per se; it has to do with the context.

If you took Bayesian statistics in school and actually remember how it works, you can skip this section. Otherwise, here is an example to illustrate the problem. Let’s say we have a software test to see whether photos are of a man or a woman. We know the test is right 90% of the time, which means it is wrong 10% of the time. We might rightly expect that in 5% of the cases it will say that someone who is a man is a woman, and in another 5%, it will say that someone who is a woman is a man. The other 90% of the time it will be right.

The reason that is true is that we assume half the population from which the photos came will be women and half men. But what if we know that is not true? Suppose we have a sample of photos from an Army base, where we believe 90% of the people are men. In that case, if the test says someone is a man, it is only wrong about 1% of the time, but if it says someone is a woman, it will be wrong half the time. One way to think about this is that there are so many more men in the sample that the errors are bound to fall on them. When you hear analysts talk about “the base rate” or “updating their priors” in making predictions, they are talking about understanding what we expect in those samples.
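The arithmetic behind those two claims is just Bayes’ rule. As a quick sketch (the function name here is my own, not from the column), holding the test’s accuracy at 90% and varying only the share of men in the sample:

```python
# Bayes' rule for the photo-classifier example: the test's accuracy is
# fixed, but the base rate of men in the sample changes how often each
# answer turns out to be wrong.

def error_given_answer(p_man, accuracy):
    """Return (P(wrong | test says "man"), P(wrong | test says "woman"))."""
    p_woman = 1 - p_man
    says_man_right = p_man * accuracy            # men labeled "man"
    says_man_wrong = p_woman * (1 - accuracy)    # women mislabeled "man"
    says_woman_right = p_woman * accuracy        # women labeled "woman"
    says_woman_wrong = p_man * (1 - accuracy)    # men mislabeled "woman"
    return (says_man_wrong / (says_man_right + says_man_wrong),
            says_woman_wrong / (says_woman_right + says_woman_wrong))

# Even split: each answer is wrong 10% of the time, matching the accuracy.
print(error_given_answer(0.5, 0.9))

# Army base, 90% men: "man" is wrong about 1% of the time (0.01/0.82),
# while "woman" is wrong half the time.
print(error_given_answer(0.9, 0.9))
```

The same 90%-accurate test gives answers of very different reliability once the sample is lopsided, which is the whole point of the base rate.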

Back to the COVID-19 test … the reason this gets complicated is that, depending on where you are and who the patient is, the base rate for the sample may be wildly different. In a rural population where only 2%-3% of people have the virus, a large share of the positive results will be false alarms, so the test will tell you that many more people have it than actually do. In other contexts, where infection rates are high and someone has all the symptoms, the tests will tell a lot of people that they do not have it when, in fact, they do.
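The same calculation applies here. The sensitivity and specificity figures below are assumptions chosen for illustration, not the characteristics of any actual COVID-19 test, and the function name is mine:

```python
# Illustrative only: how the base rate (prevalence, or the prior for a
# symptomatic patient) changes what a test result means. The 90%/95%
# figures are assumed for the sketch, not real test characteristics.

def post_test_probability(prior, sensitivity, specificity):
    """Return (P(infected | positive result), P(infected | negative result))."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * (1 - specificity)
    false_neg = prior * (1 - sensitivity)
    true_neg = (1 - prior) * specificity
    return (true_pos / (true_pos + false_pos),
            false_neg / (false_neg + true_neg))

# Rural sample, 2% prevalence: most positive results are false positives,
# so a positive means well under a 50% chance of actual infection.
print(post_test_probability(0.02, 0.90, 0.95))

# Symptomatic patient with a high prior (say 60%): even a negative result
# leaves a meaningful chance of infection.
print(post_test_probability(0.60, 0.90, 0.95))
```

This is why the doctors’ advice at the end of the story makes sense: when the prior probability is high enough, a single negative result does not move it far.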


The other problem is that, because we can be infectious without having symptoms, we will need to run these tests often and on virtually everyone to make sure people are safe—perhaps as often as every five days to be reasonably sure. It is possible that these tests will at some point become cheap, easy to administer and fast enough to return results in time to be meaningful. But even then, someone has to screen the patients to estimate that base-rate probability of infection in order to interpret the test. It’s going to be a lot of work.

In our case? The doctors said, “Assume you have it.”
