Bayesian Thinking (based on Bayes' Theorem) can provide unintuitive insights in a wide variety of situations. As a medical example, consider some rare horrible disease (SRHD), which strikes one person in a hundred independent of ethnicity, geographical area, or socioeconomic status. Suppose that there is a 98% effective diagnostic test for SRHD. That is, 98% of people who are infected display a positive test result, while 98% of those not infected display a negative test result. Now imagine someone who, in routine screening, has just tested positive for SRHD. What is the chance they are actually infected?
Sam Savage on Bayesian Thinking, February 2005
A chance of around one in three, as explained in the linked article. The probability of a test showing a true-positive result is just under 1% (98% of 1%), while the probability of it showing a false-positive result is just under 2% (2% of 99%). Given that the test is positive, the result is therefore about twice as likely to be a false positive as a true positive.
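The arithmetic can be checked directly with Bayes' theorem. A minimal sketch in Python (the language is my choice for illustration; the numbers are taken from the quoted example):

```python
prevalence = 0.01   # 1 in 100 have SRHD
sensitivity = 0.98  # P(positive | infected)
specificity = 0.98  # P(negative | not infected)

# Probability a randomly screened person tests positive AND is infected
p_true_positive = sensitivity * prevalence               # 0.0098
# Probability a randomly screened person tests positive but is NOT infected
p_false_positive = (1 - specificity) * (1 - prevalence)  # 0.0198

# Bayes' theorem: P(infected | positive test)
posterior = p_true_positive / (p_true_positive + p_false_positive)
print(round(posterior, 3))  # 0.331 -- about 1 in 3
```

Despite the test being "98% accurate", the low prevalence means false positives from the large healthy population outnumber true positives from the small infected one, which is exactly why the posterior lands near 1/3.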
Despite the explanation, I can’t quite shake the gut feeling that since the test is almost always correct, the initial “positive” result should be considered highly reliable. More on this tomorrow: Abraham Wald’s WWII memo.