People often want alarms, medical tests, and assessments to be as “sensitive” as possible: that is, in looking for condition X, we’d like to catch 100% of the occurrences of condition X. But increases in sensitivity almost always involve decreases in specificity, and this trade-off can cause problems.
To understand the difference between sensitivity and specificity, consider the following test, which has been designed to always give a positive result. In a medical situation we can imagine a blood test that always tells you you have cancer, no matter what.
The test does a perfect job at detecting cancer, at least in one sense: it correctly identifies every instance of cancer in the sample.
But in reality the test is useless. It creates so many false positives that it does not supply information we can act on.
The problem is that the “sensitivity rate” only measures the rate of true positives. To have an effective test we need to correctly identify all the negatives as well. The ability of a test to identify negatives correctly is called its “specificity”.
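The two measures can be sketched as simple ratios over the four possible test outcomes. The function names and the counts below are illustrative (they model the hypothetical “always positive” cancer test above, with 10 sick people and 990 healthy ones), not a standard library API:

```python
# Sensitivity and specificity from the four outcome counts of a test.

def sensitivity(true_pos, false_neg):
    """True positive rate: share of actual positives the test catches."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """True negative rate: share of actual negatives the test clears."""
    return true_neg / (true_neg + false_pos)

# The "always positive" test: every person, sick or healthy, is flagged.
tp, fn = 10, 0     # all 10 real cases flagged (none missed)
tn, fp = 0, 990    # every healthy person flagged too

print(sensitivity(tp, fn))  # 1.0 -- "perfect" at catching the condition
print(specificity(tn, fp))  # 0.0 -- useless at clearing the healthy
```

The 1.0 sensitivity is exactly why the test looks perfect from one angle; the 0.0 specificity is why it is useless in practice.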
A perfect test correctly identifies all the positives (sensitivity) while also correctly identifying all the negatives (specificity). Here is what that looks like in practice:
Since specificity and sensitivity are often opposed in practice (a smoke alarm that is highly sensitive will produce many false alarms, but one with high specificity may miss a fire), designers of tests and processes have to think carefully about what they want out of a test.
In cases where you want to rule out a rare condition, a test with a high sensitivity and low specificity may be perfect.
In cases where a more common condition is being tested, and a positive result will result in drastic action, high specificity may be more desirable.
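A worked example shows why the base rate matters so much here. The numbers are hypothetical: a rare condition (1 in 1,000) screened with a test that is 99% sensitive but only 95% specific. Even this apparently good test produces far more false positives than true positives:

```python
# Hypothetical screening scenario: rare condition, sensitive test.
population = 100_000
prevalence = 0.001          # 1 in 1,000 actually have the condition
sens, spec = 0.99, 0.95     # assumed test characteristics

sick = population * prevalence          # 100 people
healthy = population - sick             # 99,900 people

true_positives = sick * sens            # ~99 real cases caught
false_positives = healthy * (1 - spec)  # ~4,995 healthy people flagged

# Of everyone who tests positive, what fraction is actually sick?
ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 3))  # 0.019 -- roughly 2% of positives are real
```

So a positive result from this test, taken alone, is wrong about 98% of the time, which is why drastic action on a single positive would be a mistake in this scenario.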
A related issue is found in nuclear weapons, where safety and reliability are at odds. We want a weapon to go off when it needs to (sensitivity) but not go off when it isn’t supposed to (specificity).
Alarm fatigue can be a result of low specificity: when most alerts turn out to be false positives, people learn to ignore the alarm.