Is it a good idea to regularly test the whole population for infection with SARS-CoV-2?
Superficially, you might well say the answer is ‘Yes, obviously’. But I want to look at it in more depth.
First of all, obviously, there are the practical considerations. The current system, fragmented as it is (some would say deliberately fragmented between the private and public sectors), is proving quite unable to cope with testing those who really need it. And the tracing aspect is even worse. Remember, when you hear the figures quoted – e.g., 70% of infections are detected, and 70% of contacts are traced – that is 70% of 70%, or 49%. So even on the Government’s own claims, only about half of potential contacts are traced. And it is far too slow: several days to get the test result, and several more days (often longer) to reach the contacts, who may thus have been spreading the disease for days before being told to isolate.
Secondly, it depends on technology that doesn’t yet exist. There is much talk about rapid tests that are being introduced. These are antibody tests, which only tell you whether someone has been infected at some time in the past, not whether they are currently infected. In most cases, antibody tests don’t give a positive result until the infection is over.
Leaving the practicality aside, let’s assume that somehow we do have a test that can be easily and rapidly done for vast numbers of people. Would it be a good idea, then, to screen the whole population?
To answer that question, we need to look at some concepts related to the test. I should clarify that what I mean by ‘test’ is what happens in the laboratory with the sample you have provided. The other parts of the process – taking the sample, transmitting it to the lab, and reporting the results – all have their problems, but I’m not considering those.
No laboratory diagnostic test is perfect. Biological systems are very complex, and all sorts of things can lead to a ‘wrong’ result. This can mean that someone who is infected gives a negative result (a ‘false negative’). For example, there might be something in the sample that interferes with the test process. The ability of the test to detect real positives is referred to as the sensitivity, which is defined as the number of infected people correctly identified as positive divided by the total number who are actually infected.
The other side of the coin is the specificity, which roughly means the ability of the test to correctly identify someone who is not infected. This is defined as the number correctly identified as negative divided by the number who are really not infected. This gives us a measure of how often we would get a false positive – i.e., a positive test result for someone who does not have the disease.
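Expressed in code (a minimal Python sketch; the counts below are invented purely to illustrate the two definitions), sensitivity and specificity are just these ratios:

    # Hypothetical counts from testing a group whose true infection status is known
    infected_testing_positive = 95      # true positives
    infected_testing_negative = 5       # false negatives
    uninfected_testing_negative = 880   # true negatives
    uninfected_testing_positive = 20    # false positives

    sensitivity = infected_testing_positive / (infected_testing_positive + infected_testing_negative)
    specificity = uninfected_testing_negative / (uninfected_testing_negative + uninfected_testing_positive)

    print(f"Sensitivity: {sensitivity:.0%}")   # 95%
    print(f"Specificity: {specificity:.0%}")   # about 98%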
Now we can look at what this means in reality. Let’s assume we have a test that is 99% sensitive and 99% specific. That would be regarded as a really good test. And let’s assume we have a prevalence of 1 in 1,000. So, in a population of 100,000, there would be 100 cases, and the test would detect 99 of them. That’s not bad. The crunch comes with the specificity. In this situation, the test would return a positive result for 999 people who do not have the disease (false positives). In other words, there would be about 10 false positives for every correctly identified positive. If we scale that up to 60 million people, there would be nearly 600,000 people who would be told they were infected when they weren’t. Would they all have to isolate themselves? And even thinking about contact tracing on that scale gives me a headache.
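For anyone who wants to check the arithmetic, here is a rough Python sketch of the same calculation, using the figures assumed above (99% sensitivity, 99% specificity, prevalence of 1 in 1,000):

    def screening_results(population, prevalence, sensitivity, specificity):
        """Expected outcomes of a single round of mass screening (a rough sketch)."""
        infected = population * prevalence
        uninfected = population - infected
        true_pos = infected * sensitivity            # real cases the test detects
        false_pos = uninfected * (1 - specificity)   # healthy people wrongly flagged
        return true_pos, false_pos

    # Figures assumed in the text: 99% sensitivity, 99% specificity, prevalence 1 in 1,000
    tp, fp = screening_results(100_000, 0.001, 0.99, 0.99)
    print(f"{tp:.0f} true positives, {fp:.0f} false positives")   # 99 and about 999
    print(f"{fp / tp:.0f} false positives per real case")         # about 10

    tp, fp = screening_results(60_000_000, 0.001, 0.99, 0.99)
    print(f"{fp:,.0f} false positives nationally")                # nearly 600,000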
These considerations apply to all occasions when you are screening random samples for some relatively rare event, whether it’s screening donated blood for an infectious agent such as HIV, or mass screening for cancer. The standard way of dealing with it is to use two independent tests. So, you take all the samples that gave a positive result in the first test and re-test them, with a different test. (It has to be a different test, as the reason for the ‘wrong’ result might be inherent in the sample.) If you did that, with the above example, then after re-testing all the positives from the first test, you would find that fewer than 10% of the positives would now be false positives.
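The re-testing arithmetic can be sketched the same way, assuming (as the text implies) that the second, independent test is also 99% sensitive and 99% specific; the starting figures simply carry over from the example above:

    # Second, independent test applied only to the positives from the first round
    first_true_pos, first_false_pos = 99, 999

    second_true_pos = first_true_pos * 0.99            # real cases still testing positive
    second_false_pos = first_false_pos * (1 - 0.99)    # false positives that survive re-testing

    total_pos = second_true_pos + second_false_pos
    print(f"False positives remaining: {second_false_pos:.0f}")                      # about 10
    print(f"Share of positives that are false: {second_false_pos / total_pos:.0%}")  # under 10%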
In effect, this is what happens when you use a test to confirm a clinical diagnosis, or when you apply a COVID-19 test to people who have symptoms. The clinical picture acts as the first test, selecting a group of people who are much more likely to have the disease than the general population.
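The same calculation shows why the starting group matters: the higher the prevalence among the people being tested, the smaller the proportion of positives that turn out to be false. A short Python sketch, with purely illustrative prevalence figures:

    def positive_predictive_value(prevalence, sensitivity=0.99, specificity=0.99):
        """Proportion of positive results that are genuine, for a given prevalence."""
        true_pos = prevalence * sensitivity
        false_pos = (1 - prevalence) * (1 - specificity)
        return true_pos / (true_pos + false_pos)

    # Illustrative prevalences: general-population screening vs. testing people with symptoms
    for prevalence in (0.001, 0.01, 0.1, 0.3):
        share_real = positive_predictive_value(prevalence)
        print(f"Prevalence {prevalence:.1%}: {share_real:.0%} of positive results are real")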
Is a two-step procedure feasible? It could be argued that the first step could use the hitherto unknown and entirely conjectural rapid test, with the positive samples then submitted to the existing PCR-based test – but only if the hypothetical new test were something entirely different. And it would still mean adding an extra 600,000 samples to the existing testing workload. Since that system is currently unable to meet present demands, this seems unlikely.
If you would like to see the calculations I have used, and play around with them, send me an email and I’ll send you the spreadsheet.
Jeremy Dale
20 Sept 2020