Everything I think I know about risk, uncertainty, and markets

A meander through the definitions and connections of certainty, uncertainty, randomness, and risk -

(and if any of this is wrong, you have a moral responsibility to let me know how and why)

[Image: the path traced by a double pendulum]

Certainty and uncertainty are properties that are relative to the actor. Certainty usually increases as an event approaches and more information is revealed. After an event has passed, certainty decreases again, as it becomes harder to ascertain what exactly happened.

Information, as defined by Claude Shannon, is a relative measure of what is known (certainty) vs. what is not (uncertainty). New information, then, is a reduction in uncertainty. Bear in mind that what is not known may still be bounded, and knowing those bounds (another certainty) means you can assign better value to the things that are known. For more, read up on information theory, but essentially, if you know the size of the knowable universe, you have a better sense of how much weight to give to what you do know.
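
To make that bookkeeping concrete, here is a minimal Python sketch - the eight-sided die and the "even roll" observation are purely illustrative assumptions, not anything from above:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the uncertainty remaining in a distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical setup: a fair eight-sided die. The knowable universe is bounded
# at 8 outcomes, which caps the uncertainty at 3 bits.
prior = [1 / 8] * 8
print(entropy(prior))                        # 3.0 bits of uncertainty

# A new observation ("the roll was even") rules out half the outcomes.
posterior = [1 / 4] * 4
print(entropy(posterior))                    # 2.0 bits remain

# Information received = reduction in uncertainty.
print(entropy(prior) - entropy(posterior))   # 1.0 bit of new information
```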

In cases where little is known but much can be known, we use computational models (historically, our intuition) to try to make better decisions in the face of uncertainty. In that sense, probability is the quantified sum of everything that is known about an unknown. It serves as a survival heuristic that, over time, determines the survivability of the actor and their genes. It is a way of modeling what we do know about the uncertainty, which can still be a lot.
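
As a toy illustration of "the quantified sum of everything that is known" - a hedged sketch using Bayes' rule, with numbers made up for the example:

```python
# Will it rain? Fold everything we know about the unknown into one probability.
p_rain = 0.20                  # base rate (prior knowledge)
p_clouds_given_rain = 0.90     # what we know about how rainy days tend to look
p_clouds_given_dry = 0.25      # dark clouds also happen on dry days

# We observe dark clouds; Bayes' rule combines all of the above.
p_clouds = p_clouds_given_rain * p_rain + p_clouds_given_dry * (1 - p_rain)
p_rain_given_clouds = p_clouds_given_rain * p_rain / p_clouds
print(round(p_rain_given_clouds, 3))   # ~0.474: a model of what we do know
```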

A key point is that a specific outcome does not determine whether a model was right or wrong. A model is usually built on many points of data, each of which can be right or wrong, and though multiple outcomes can prove or disprove a model, a single outcome is merely another data point. What confirms or denies a model (and thus a prediction) is how much crucial information turned out to be unknown (or known but weighted incorrectly) between the time of prediction and the time of outcome. Additionally, not all information is processed equally, so it would be wrong to say that more information always moves the needle in the right direction. How a model weights and directs incoming information determines whether new information gets you closer to or further from being right. This is roughly why big data can be misleading - it can inflate confidence in models that are wrong.
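
A rough simulation of that point, with hypothetical numbers: a single outcome is just another data point, while many outcomes start to separate a good model from a bad one.

```python
import math
import random

random.seed(0)
TRUE_P = 0.55    # how the world actually behaves (unknown to both models)
MODEL_A = 0.55   # a model that happens to be right
MODEL_B = 0.30   # a model that is badly wrong

def log_likelihood(p, outcomes):
    """How strongly a set of coin flips supports a model that predicts P(heads) = p."""
    return sum(math.log(p if heads else 1.0 - p) for heads in outcomes)

one_flip = [random.random() < TRUE_P]
print(log_likelihood(MODEL_A, one_flip), log_likelihood(MODEL_B, one_flip))
# One outcome barely separates the models - and can even favor the wrong one.

many_flips = [random.random() < TRUE_P for _ in range(10_000)]
print(log_likelihood(MODEL_A, many_flips), log_likelihood(MODEL_B, many_flips))
# Thousands of outcomes leave the wrong model's score far behind.
```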

One subset of uncertainty is randomness - those elements that truly cannot be predicted. Since what can be predicted depends on what can be measured, more accurate measurements mean less randomness. Randomness is therefore also a property that is relative to the actor. Fortunately, even randomness (the unknowable) can sit within a knowable space. A coin flip may have a random outcome, but we know it will always yield heads or tails.

The more randomness in a system, the harder it is to make useful predictions, because there is less uncertainty that can be reduced. On the other hand, if something is completely predictable, it is because there is no randomness, and much of what could be known about it is known (low uncertainty).

A side note, based on the double pendulum article by Paul Graham -

Chaos is not randomness. Chaos is when a tiny, tiny variance in initial conditions leads to huge disparities in outcome. In theory, two double pendulums with precisely the same starting conditions will follow the same path. The problem is that it is almost impossible for us to measure to that degree of precision. Until our measurements catch up, we treat chaos as randomness.
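
A sketch of that sensitivity, using the logistic map rather than a double pendulum only because it fits in a few lines of Python - the two trajectories below are "identical" up to a one-in-a-billion measurement error:

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a simple chaotic system at r = 4."""
    return r * x * (1.0 - x)

a, b = 0.400000000, 0.400000001    # initial conditions differing by 1e-9
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step}: {a:.6f} vs {b:.6f}  (gap {abs(a - b):.2e})")
# The gap roughly doubles each step: invisible for a while,
# then the two paths have nothing to do with each other.
```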

Probability is not a fundamental concept. It is a derivative of knowledge: specifically, of what can be known (the upper bound of information) and what cannot (randomness), and, within what can be known, what is known (information) and what is not (uncertainty).

Risk is the downside associated with outcomes linked to uncertainty. In a fully certain world, there is no risk. Because uncertainty can be aggregated and disaggregated, risk can be aggregated and disaggregated by a market trying to find pricing arbitrage. This is the foundation of insurance - the probability of a car accident happening to a given individual is difficult to compute, but there is far more certainty in computing the number of car accidents across a population of a million drivers.
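
A quick sketch of why aggregation works, with a made-up 3% annual accident probability:

```python
import math
import random

random.seed(1)
P = 0.03   # hypothetical annual accident probability per driver

def simulate(n_drivers):
    """Total accidents in one simulated year."""
    return sum(random.random() < P for _ in range(n_drivers))

# One driver: the outcome is all-or-nothing, so "3% risk" says little about
# what actually happens to that individual (mostly 0s, the occasional 1).
print([simulate(1) for _ in range(10)])

# A million drivers: the total clusters tightly around 30,000, which is the
# kind of certainty an insurer can actually price.
n = 1_000_000
print(simulate(n))
print(math.sqrt(n * P * (1 - P)) / (n * P))   # ~0.0057: well under 1% relative spread
```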

The market sometimes makes mistakes by treating all levels of uncertainty, and therefore risk, in the same way. However, this also creates arbitrage opportunities over time. For example, randomness may be priced without considering that it still falls within a knowable range. This is because the market is a reflection of many, many different predictive models. Since these models process information, and therefore uncertainty, differently, they react differently to new information. So new information is priced in, but there is no guarantee it is priced in correctly - this is why the efficient-market hypothesis, at least in its strong form, breaks down.
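
A deliberately toy sketch of that last point: every model reacts to the same piece of news, so the news does get priced in, but the aggregate move need not be the correct one. The sensitivities below are invented for illustration.

```python
# One unit of unexpected good news hits the market.
news_surprise = 1.0
correct_sensitivity = 0.50   # how much the price "should" move, if anyone knew

# Each participant's model weights the same news differently.
model_sensitivities = [0.1, 0.2, 0.9, 1.5, 0.0]
market_move = news_surprise * sum(model_sensitivities) / len(model_sensitivities)

print(market_move)           # 0.54: the news moved the price...
print(correct_sensitivity)   # ...but not by the "right" amount
```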