This post was inevitable. Over the last few months, communication for and about our company has been very much a mixed bag. Some parties understand exactly what we do and why it's needed, and wish us luck; others have a rough sense of the space we occupy, but don't fully understand what makes Invrea unique; and still others have no idea what needs Invrea is aiming to address or what kinds of problems we aim to solve.

This blog post is an attempt to communicate, as directly as possible, how the way Invrea solves problems is useful across many fields. Being a blog post, it's meant to communicate not so much a business case as a Weltanschauung (a German word literally meaning 'world-view'), an ideology. Ideally, it will be not so much read as consulted whenever we neglect to give sufficient background and context in future writings.

As a result, this will be somewhat longer than our previous blog posts. Given that disclaimer, let's start by talking about - what else? - graphs.

The Histogram

A histogram is a graph that plots events and the probabilities of those events. The horizontal axis of a histogram lists all possible events, while the vertical axis gives the probability of each event occurring. By the most basic rule of probability, the probabilities of all possible events must add up to 100%. Histograms distribute a fixed sum of probability across a set of possible outcomes; they are pictorial representations of probability distributions.

For example, imagine you were trying to guess which day of the week a package you ordered might arrive. You estimate that there is a 50% chance the package arrives Monday, a 25% chance the package arrives Tuesday, a 15% chance the package arrives Wednesday, and a 10% chance that the package arrives Thursday. This combination of events and probabilities is concisely represented in the following histogram:
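In code, a histogram like this is nothing more than a mapping from events to probabilities that sums to 100%. A minimal sketch (in Python rather than Scenarios, purely for illustration):

```python
import random

# A histogram is just a map from events to probabilities.
arrival = {"Monday": 0.50, "Tuesday": 0.25, "Wednesday": 0.15, "Thursday": 0.10}

# The most basic rule of probability: the mass must add up to 100%.
assert abs(sum(arrival.values()) - 1.0) < 1e-9

# Sampling from the histogram turns the belief into simulated outcomes.
days = random.choices(list(arrival), weights=list(arrival.values()), k=10_000)
print(days[:5])
```

Sampling like this is exactly how beliefs are turned into forecasts later in this post: draw many possible futures, then count how often each outcome occurs.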

Or perhaps you know that your delivery is arriving today, but you are interested in estimating the exact time that it comes. The delivery service specified "around noon", but gave no further information. This more detailed histogram might depict your forecast:

Note that the axis labels on this histogram have subtly different meanings than those on the previous graph. Before, the first bar labelled "Monday" had value 50%, indicating that the probability that the package arrives on Monday is 50%. Now, the bar between the tick mark labelled 11:53 AM and the tick mark labelled 12:11 PM has value 12.8%, meaning that the probability that the package arrives between 11:53 and 12:11 is 12.8%. Also, the above histogram will hopefully be familiar to you: it is the Gaussian distribution, also known as the normal distribution and the bell curve.
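The mass of any one bar under a Gaussian follows from its cumulative distribution function. A Python sketch, with an assumed mean of noon and an assumed standard deviation of one hour (since the true parameters are not specified, the resulting figure is illustrative and need not match the 12.8% above):

```python
import math

def normal_cdf(x, mu, sigma):
    # Gaussian cumulative distribution via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Assumed parameters: mean at noon (720 minutes past midnight), sd of 60 minutes.
mu, sigma = 720.0, 60.0

# Mass of the bar between 11:53 (713 minutes) and 12:11 (731 minutes).
p = normal_cdf(731.0, mu, sigma) - normal_cdf(713.0, mu, sigma)
print(f"P(11:53 < arrival < 12:11) = {p:.1%}")
```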

Why are we wasting time talking about such a basic graph? At Invrea, histograms are our bread and butter. In some sense, all of probabilistic programming, including of course our product Scenarios, can be seen as operations on histograms. We write software that takes as input a histogram or collection of histograms (your assumptions), crunches data, and outputs a histogram or collection of histograms (our conclusions). Formally, the problem of learning from data in this way is called induction, and it is the fundamental challenge in modelling, forecasting, and decision-making.

Probability Distributions Are Beliefs

The first part of Invrea's 'controversial' hypothesis is this:

All well-formed, consistent beliefs are representable as probability distributions (that is, as histograms). If your beliefs cannot be represented in this language, they are irrational, and your predictions and decisions are suspect.

And yes, we do mean all beliefs. To believe in something is simply to assign it subjective probability in one's head. Don't believe us? Here are some examples, produced using our product Scenarios, together with the Scenarios commands used to produce them. The examples start simple and get steadily more complicated, to demonstrate that all beliefs, no matter how complex, can be phrased in this language.

Belief: "I believe X is near 7"

Code: =GAUSSIAN(7, 1)


Belief: "There is a large chance X is somewhere between 600 dollars and 800 dollars, and a smaller chance X is somewhere between 900 dollars and 1,100 dollars"

Code: =IF(FLIP(0.75), BETWEEN(600, 800), BETWEEN(900, 1100))


Belief: "A, B, and C are three different products sold by a shop, with prices £5, £20, and £100 respectively. P is the number of products each customer buys on each visit, which is between 0 and 8. It is equally likely that a customer buys A and B, but the likelihood that they buy C is much lower. X is the total amount that a customer spends per visit"

Code: =SUM({5, 20, 100} * MULTINOM(RANDBETWEEN(0, 8), {1, 1, 0.25}))


Belief: "X is the present value of the price of a security in 20 years (that follows a Gaussian random walk each day, and starts at $50)"

Code: =-PV(3.5%, 20, 0, 50 + COMPOUND(20*365, "gaussian", 0.1, 2))
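Each of these formulas can be read as a sampling procedure. As an illustration, here is a rough Python equivalent of the second belief above, assuming (our assumption, for the sketch) that BETWEEN draws uniformly over its range:

```python
import random

def between(lo, hi):
    # Stand-in for Scenarios' BETWEEN, assumed uniform over [lo, hi].
    return random.uniform(lo, hi)

def price_belief():
    # 75% chance the price is in [600, 800]; otherwise it is in [900, 1100].
    if random.random() < 0.75:
        return between(600, 800)
    return between(900, 1100)

samples = [price_belief() for _ in range(100_000)]
low_band = sum(600 <= x <= 800 for x in samples) / len(samples)
print(f"share of samples in the 600-800 band: {low_band:.2f}")
```

Plotting those samples as a histogram recovers the two-humped picture the English sentence describes: a large mass between 600 and 800, and a smaller one between 900 and 1,100.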


This procedure of translating between English sentences, probabilistic models, and histograms is the fundamental process of building a model. Setting up a model requires an uncommon synthesis of statistical literacy and domain-specific knowledge, and validating it requires even more statistical literacy. Unfortunately, building models is essential and unavoidable when making decisions or forecasts.

Every human makes the decisions that bring them through their day according to some internal model of the world. This model of the world consists of a gigantic set of beliefs about the way the world operates, beliefs like those represented in the histograms above. All these beliefs, if they are rational – we will return to precisely what that means later – can be represented as probability distributions. When one weighs risks against rewards and makes a decision, one uses these beliefs to make a prediction, and then uses one's individual preferences – such as greed and risk aversion – to make a decision.


The purpose of a system of beliefs is to construct an accurate model of the world to help with decision-making. This gives us two immediate criteria for distinguishing 'justified' beliefs from 'unjustified' beliefs. First, a 'more justified' belief is one that assigns higher probability to the data actually observed. For example, if I believe both "X is near 7" and "X is near 8", and X turns out in truth to be 9, the second belief appears to have been 'more' justified, though neither belief was 'wrong'. While it might feel somewhat circular, a 'good' worldview is simply one that looks as much as possible like the world.
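This criterion can be checked numerically. Treating the two beliefs as the Gaussians from the earlier examples, we can compare the probability density each assigns to the observed value 9 (a Python sketch, with unit standard deviations assumed):

```python
import math

def normal_pdf(x, mu, sigma=1.0):
    # Density of a Gaussian with the given mean and standard deviation.
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

# How much probability does each belief assign to the observed value, 9?
belief_near_7 = normal_pdf(9.0, mu=7.0)
belief_near_8 = normal_pdf(9.0, mu=8.0)
print(belief_near_7, belief_near_8)  # the second density is larger
```

The "X is near 8" belief assigns several times more density to the observation, which is the quantitative sense in which it was 'more' justified.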

Second, a justified belief should be as simple as possible. This is the essence of the philosophical injunction known as Occam's razor. If two beliefs plot the same histogram over a measurable unknown X, but the second belief is more complicated and has more moving parts, the first should be preferred. For example, if the first belief asserts that X is approximately 3, and the second asserts that X is approximately twice Y, where Y is approximately 1.5, then the first should be preferred. Both are identical in practice, but the first is simpler in theory. Theories that postulate fewer distinct objects should be preferred; otherwise, making predictions becomes needlessly difficult.

In order to make decisions, we therefore want to construct beliefs that explain the observed data as well as possible, and that are as simple as possible. However, because in principle many separate beliefs may explain the same data, we do not want to limit ourselves to merely one belief; doing so makes learning impossible. It is crucial that less probable beliefs are not thrown out, because future data may yet prove them more likely. And it is crucial that more complex beliefs are kept around even when simpler ones suffice for now, because future data may yet show that the simpler beliefs fail us. All our beliefs are used when constructing predictions about the future - but the simple ones that explain the data are used the most.

Therefore, our model of the world must be a gigantic web of interlocking beliefs: some we are extremely confident in, some we find improbable, yet all essential. Some beliefs hinge on others being true, some are independent of the rest, and some form complex cyclical dependencies. The philosophical term for this position is coherentism, because beliefs are not justified on their own merits; rather, it is how one belief relates to all the others that determines one's confidence in it. 'Truth' is not an intrinsic property of a belief, but a matter of relations among beliefs.

To learn, then, is to continually update this network of beliefs using data. Learning is an operation that takes in a histogram and updates it into a narrower, more informed histogram, by using a new piece of data. This new data may propagate changes through a decision-maker's entire worldview, often forcing them to revise a large number of previous beliefs. At Invrea, we refer to the mathematical methods that automate this process as inference.
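For a concrete sketch of inference in this sense, consider updating the package-arrival histogram from earlier once Monday passes with no delivery (again in Python rather than Scenarios, for illustration):

```python
prior = {"Monday": 0.50, "Tuesday": 0.25, "Wednesday": 0.15, "Thursday": 0.10}

# The new data: Monday has passed and the package did not arrive.
# The likelihood of seeing that is 0 if the true day is Monday, 1 otherwise.
likelihood = {"Monday": 0.0, "Tuesday": 1.0, "Wednesday": 1.0, "Thursday": 1.0}

# Bayes' theorem: posterior is proportional to prior times likelihood.
unnormalised = {day: prior[day] * likelihood[day] for day in prior}
z = sum(unnormalised.values())
posterior = {day: mass / z for day, mass in unnormalised.items()}
print(posterior)  # Monday's mass is redistributed among the remaining days
```

The updated histogram is narrower and more informed: Tuesday's probability doubles to 50%, and the impossible outcome drops to zero. That single prior-times-likelihood step, repeated over an entire web of beliefs, is what we mean by inference.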

Operating in a world filled with uncertainty, therefore, works as follows. The ideal rational decision-maker keeps a mental ledger of beliefs that might be true, along with their confidence in each belief and the predictions those beliefs make. Every time this rational agent sees, hears, or otherwise senses something new, any beliefs whose predictions are relevant to the new data must be reassessed, their confidences and predictions recalculated in light of the new information. This changes the agent's forecasts of the future and therefore, possibly, its decisions. Its worldview is inherently probabilistic, its predictions cautious, and its decisions therefore sound.

Bayes and the Stock Market

The second part of Invrea's 'controversial' hypothesis is as follows:

Bayes' theorem is the rational way to adjudicate between conflicting beliefs and to update them using data. Any decision-maker that can be called 'rational' is using Bayes' theorem, or something equivalent to it, to choose between beliefs on the basis of data.

Bayes' theorem is a mathematical formula that tells us the correct way to learn from data. Put another way, it provides a way to judge which of a set of beliefs is the most justified. For an intuitive explanation of Bayes' theorem using Scenarios, along with a video, consult our earlier blog post on the subject. This post, however, does not hinge on mathematical detail.

Recall the completely rational decision maker we were discussing previously. This agent's 'worldview' was a detailed list of all its beliefs – a fantastically lengthy list in all likelihood – which was painstakingly updated each time the agent received any new data. Imagine that this entity is possessed of the following notion: it wishes to emigrate to Earth, and for whatever reason, it wishes to make money. Being an ideal forecaster of future events, it turns its attention to the stock market.

Invrea's two hypotheses, combined, therefore predict the following experimental result: the rational Bayesian agent would consistently outdo all humans, even groups of humans (unless perhaps those humans had inside knowledge, and therefore access to data that our Bayesian could not see). If something can be predicted, the rational Bayesian will predict it as well as is possible. In comparison, humans frequently hold vague and irrational beliefs, frequently fail to update their beliefs in light of new data (or update them incorrectly), and, worst of all, frequently become attached to their beliefs and so refuse to relinquish them when the data turns against them.

Our rational Bayesian agent cannot help but win against humans. It can be shown that if its competitors make decisions in a way that is inconsistent with Bayes' theorem or the other laws of probability, our rational Bayesian can construct bets, perhaps in the form of complex financial derivatives, that exploit its opponents' irrationality to earn itself money.
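The construction behind this claim is the classic 'Dutch book': anyone whose stated probabilities violate the laws of probability can be sold a set of bets, each fair by their own lights, that loses them money no matter what happens. A toy illustration with made-up numbers:

```python
# An irrational bookmaker quotes these probabilities for a set of
# mutually exclusive, exhaustive outcomes; they sum to 1.10, not 1.00.
quoted = {"up": 0.50, "flat": 0.35, "down": 0.25}

payout = 100.0  # what each bet pays if its outcome occurs

# Sell the bookmaker one bet on every outcome, priced at their own quotes.
premiums = sum(p * payout for p in quoted.values())

# Exactly one outcome occurs, so exactly one bet ever pays out.
for outcome in quoted:
    profit = premiums - payout
    print(f"if '{outcome}' occurs, guaranteed profit = {profit:.2f}")
```

Because the quoted probabilities sum to 110%, the premiums collected always exceed the single payout owed, so the Bayesian profits in every possible future. Coherent probabilities, which sum to exactly 100%, are the only quotes that cannot be exploited this way.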

This is what is meant when we say this agent is rational – if allowed to make bets on a prediction market, the rational Bayesian agent will make only bets that are consistent with one another. That is, all its bets taken together necessarily build up a coherent description of the possible futures that our rational agent believes are likely to happen. In short, a rational agent is one that operates according to a worldview. While this might sound like a niche and improbably practical justification for a philosophical concept, one that would please only the most ardent of libertarians, remember that this situation describes not only financial markets but, by proxy, any decision made under uncertainty in the presence of competitors – which is any decision at all.

Invrea's Third Hypothesis

Every decision made by a human informs, and is informed by, the decisions made by every other human. Every purchase you make tells everyone around you how much you are willing to pay for a certain item. Every risk you take tells all who wish to hear how much risk you will bear. Your Weltanschauung is always obvious to those who care to look.

Invrea's third hypothesis is simply:

You cannot escape making predictions; so, you should make the best ones possible.

When you move to take a new job, contribute to a pension fund, or even donate to charity, you implicitly put values on these things. You don't just value them now – you necessarily express a guess about how their values will change in the future. Given a set of preferences, premises, and relevant data, an optimal decision can be rigorously derived. There is always a right answer.

While inconvenient, this is a reality that cannot be denied. In 2017, when it rains in Iowa, traders in Singapore check the values of their futures contracts. The very best forecasters train their predictions using data, and all business decisions are based on forecasts. Deny this reality, and your decisions are likely unjustified.

Our Vision

The three hypotheses laid out in this post are our Weltanschauung, our worldview. Together they prescribe a complete way of making decisions. Wittgenstein writes, "[I]t is not our aim to refine or complete the system of rules for the use of our words in unheard-of ways. For the clarity we are aiming at is indeed complete clarity. But this simply means that the philosophical problems should completely disappear." Using Bayes' theorem and applying machine learning makes persistent problems disappear.

In our previous blog post, we laid out a division of machine learning approaches into four groups. These distinctions were made using the dual axes of model complexity and data size. Group 1 problems are those that require only simple models and small data, Group 2 problems those that require simple models but big data, Group 3 problems those that require complex models but only involve small data, and Group 4 problems those that require both complex models and big data.

We further asserted that too much industry focus is placed on applying simple models to large amounts of data, rather than on applying more complex models. In essence, Group 3 problems are routinely misdiagnosed as Group 1 or Group 2 problems, sometimes on the mistaken assumption that access to more data will automatically solve the problem of interest. This limits the accuracy that can be achieved, and encourages the misguided notion that understanding data comes down to thumbing through models and manually tuning them until reasonable predictions emerge.

This model of learning about data routinely leads to miscommunication and bias. Those who apply models sequentially are sometimes unaware of the assumptions these models implicitly make, and are more often unaware of the biases introduced by tuning models manually until their predictions are reasonable. Those who use these predictions are as a rule unaware of these assumptions and of the model-building and model-validation processes. Their only recourse, then, is either blind faith in those who have built the model, or else blind skepticism of all predictions about the future.

Neither of these is ideal in an executive, and there is a clearly better way, described above: keep a clear ledger of one's beliefs, and update them automatically and objectively using data. Scenarios can help you with that. Join the alpha program here.