An expression from Bayesian thinking: you reason in ranges of outcomes based on prior information. When new information arrives, you ‘update your priors’ to arrive at a more accurate range of outcomes.
See also:
- Rigorously updating priors can prevent path dependence and reduce priming and implicit bias
- Product work is a pursuit of facts about the user, the market, and their problems; as the facts, user, or market change, so should the product.
Links to this note
-
Extrapolating from past data points is not an explanation. Building your confidence that something will happen, as Bayes' Theorem does, is useful for discrete, observable problems, but it fails to reveal the underlying truth. It's the equivalent of saying "because it's always been that way," which is a flawed way of reasoning about the world.
-
Bayes Theorem Is a Form of Inductive Reasoning
In Bayes' Theorem, the probability of an event is computed from the probabilities of other parameters of the problem. Put simply, applying the theorem builds on prior knowledge of the problem domain to update a prediction. It became very popular because the real world is full of uncertainty, and Bayes' Theorem provides a way of modeling that uncertainty through probability (e.g., in machine learning).
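As a sketch, the "update" described above can be written as a direct application of Bayes' Theorem. The function name and the example numbers below are hypothetical, chosen only to illustrate the mechanics:

```python
# Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E)
# Hypothetical helper illustrating a single Bayesian update.

def update_prior(prior, likelihood, likelihood_given_not_h):
    """Return the posterior P(H|E) from the prior P(H), the likelihood
    P(E|H), and P(E|not H). The evidence term P(E) is expanded via the
    law of total probability."""
    evidence = likelihood * prior + likelihood_given_not_h * (1 - prior)
    return likelihood * prior / evidence

# Illustrative numbers: prior belief that a feature is valuable is 30%;
# a positive user test occurs 80% of the time if it is valuable and
# 20% of the time if it is not.
posterior = update_prior(prior=0.3, likelihood=0.8, likelihood_given_not_h=0.2)
print(round(posterior, 3))  # the positive test raises the belief above 30%
```

One positive observation shifts the belief noticeably, but not to certainty; each further observation would feed the new posterior back in as the next prior.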
-
The likelihood that something will continue to be done the same way it was done from the outset. This is readily observed in technology: for instance, the width of train rails derives from the width of a horse-drawn cart (or "two horses' asses"), which in turn led to the width of the Space Shuttle's rocket boosters being set not by what is optimal, but by what can be transported via train.