Bayes' Theorem

The probability that belief A is true, given new evidence B, equals the probability of B given A, times the probability of A (regardless of B), divided by the probability of B (regardless of A):

P(A|B) = P(B|A) x P(A) / P(B)

For example, suppose you are getting a medical test to find out if you have a disease. The disease appears in only 1 out of 100 people, and the test is 99% accurate. What is the probability you have the disease if you receive a positive test result?

Plugging it into the equation:

P(A|B) = ? The probability of having the disease given a positive test result (what we want to find out).

P(B|A) = 99% Probability of a positive test result if you actually have the disease.

P(A) = 1% Probability of having the disease regardless of the test.

P(B) = (99% x 1%) + (1% x 99%) The overall probability of a positive test result, summing true positives and false positives. True positives: of the 1% who have the disease, 99% test positive. False positives: of the 99% who do not have the disease, 1% still test positive.

Result: There's a 50% probability you have the disease given a positive test result.

(99% x 1%) / ((99% x 1%) + (1% x 99%)) = 50%
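The arithmetic above can be sketched in a few lines of Python (a minimal illustration; the variable names are my own):

```python
# Values from the example: 1% prevalence, 99% test accuracy.
p_disease = 0.01            # P(A): prior probability of having the disease
p_pos_given_disease = 0.99  # P(B|A): true positive rate
p_pos_given_healthy = 0.01  # P(B|not A): false positive rate (1 - accuracy)

# P(B): total probability of a positive test = true positives + false positives
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(p_disease_given_pos)  # 0.5
```

Even with a 99% accurate test, the low prevalence means half of all positive results are false positives.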

  • Bayes' Theorem Is a Form of Inductive Reasoning

    In Bayes' Theorem, the probability of something occurring is based on the probabilities of other parameters of the problem. Put simply, using the theorem builds on prior knowledge of the problem domain to update a prediction. This approach became very popular because the real world is full of uncertainty, and Bayes' Theorem provides a way of modeling that uncertainty through probability (e.g., in machine learning).
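This prior-updating idea can be sketched by feeding the posterior back in as the next prior (reusing the disease-test numbers from the example above; the `update` helper is illustrative, not from any library):

```python
def update(prior, p_evidence_given_true, p_evidence_given_false):
    """Return the posterior P(A|B) via Bayes' Theorem."""
    # P(B): total probability of observing the evidence
    p_evidence = (p_evidence_given_true * prior
                  + p_evidence_given_false * (1 - prior))
    return p_evidence_given_true * prior / p_evidence

belief = 0.01  # prior: 1% prevalence
for _ in range(2):  # two independent positive test results
    belief = update(belief, 0.99, 0.01)  # posterior becomes the new prior

print(belief)  # 0.99 after two positive tests
```

One positive test raises the belief from 1% to 50%; a second independent positive raises it to 99%.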

  • Ellsberg Paradox

    People prefer situations where they know the risk. In experiments run by Daniel Ellsberg, participants were asked to bet on either a known 10% chance to win or an unknown chance to win (which was actually 90%). Participants tended to choose the known 10% option.