This section addresses some common misunderstandings and issues related to the *Scenarios* application. If you have a problem that isn't addressed below, or the below solutions haven't worked for you, contact Invrea here.

As always, our first piece of advice if you have run into a bug is to restart - not your computer, in this case, but simply Excel. If *Scenarios* has hit a bug, it's a good idea to exit Excel without saving the spreadsheet, provided you can afford to lose any unsaved work. Reopen Excel and try running *Scenarios* again. If the bug persists, try restarting the inference engine (detailed here). If *Scenarios*' bug has caused you to lose a large amount of work, see here for information on how to retrieve the backup files automatically saved by *Scenarios*.

(↩)

When a user generates scenarios, the mathematical heavy lifting is done in a process that sits alongside Excel, which we somewhat anachronistically call the **inference engine**. If the inference engine runs into a bug, it may shut down and stop responding to requests to generate scenarios. This should only happen if the user has attempted a task too large for the memory of the host computer to handle (i.e. one that features too many cells and too many scenarios). If the engine has crashed, pressing the 'Run' button will produce a message saying "Invrea's external inference process is not responding. Try restarting it using the Task Manager."

In this case, the fix is simple. Bring up the Windows Task Manager with CTRL-SHIFT-ESC (or CTRL-ALT-DELETE, then select Task Manager). Press "More details", and go to the "Services" tab. You should see Invrea's inference engine in this list, under the name "InvreaExtEngineWrapperV1". To restart the inference engine, simply right-click this service and press "Restart", or right-click it and press "Stop" and then "Start". Restarting the service will take a little while - probably just a few seconds, but depending on your computer, perhaps as much as 30 seconds. After that, you should be able to generate scenarios again.
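If you prefer the command line, the same restart can be performed with Windows' built-in `sc` utility from an elevated (administrator) Command Prompt. This is a sketch equivalent to the Task Manager route, assuming only the service name "InvreaExtEngineWrapperV1" given above:

```shell
rem Check the current state of the inference engine service
sc query InvreaExtEngineWrapperV1

rem Stop it, then start it again (equivalent to "Restart" in Task Manager)
sc stop InvreaExtEngineWrapperV1
sc start InvreaExtEngineWrapperV1
```

As with the Task Manager route, allow the service a few seconds to come back up before pressing 'Run' again.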

If you find a bug that is not fixed either by restarting Excel or by restarting the inference engine, please use the bug-reporting software (see here) to make us aware of it.

(↩)

Every time the user generates scenarios or highlights assumptions, *Scenarios* automatically saves a backup copy of the Excel file. (On old computers, this process may cause up to a one-second delay when generating scenarios or highlighting assumptions.) This means that, if the worst happens and *Scenarios* (or Excel) crashes while running, you'll still be able to retrieve your file. *Scenarios* backs up files to the user's temporary directory, which is generally the folder `C:\Users\YourUserName\AppData\Local\Temp`, where `YourUserName` is your username on the host computer. In this directory, the latest backup file will be called `InvreaSavedTmpWorkbook`.

As for locating this backup folder: if there is a bug, *Scenarios* will tell you the location of your temporary directory. Otherwise, the most direct way to determine the location is to open the Command Prompt and type the command `echo %TEMP%`; this will print the location of your temporary directory. If you go to that location, the most recent file backed up by *Scenarios* will be there.
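The lookup can also be scripted. The following Python sketch finds the temporary directory and lists any backups in it, newest first; it assumes only the `InvreaSavedTmpWorkbook` file name given above.

```python
import os
import tempfile

# tempfile.gettempdir() resolves to the same directory that %TEMP%
# points to on Windows (e.g. C:\Users\YourUserName\AppData\Local\Temp).
tmp = tempfile.gettempdir()

# List any Scenarios backups, most recently modified first.
backups = sorted(
    (f for f in os.listdir(tmp) if f.startswith("InvreaSavedTmpWorkbook")),
    key=lambda f: os.path.getmtime(os.path.join(tmp, f)),
    reverse=True,
)

print(tmp)
print(backups[:1])  # the most recent backup, if any exist
```

If the list is empty, *Scenarios* has not yet written a backup, or it was cleared when the temporary directory was last cleaned.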

(↩)

If your computer doesn't have enough memory or power to generate the number of scenarios you have requested, *Scenarios* will throw an 'Out of Memory' error after you press the 'Run' button. The most direct solution to this problem is to generate fewer scenarios, or to run *Scenarios* on a smaller spreadsheet.

This error may occur because your computer physically doesn't have enough RAM to run the requested model, or because the JVM installed on your machine can only access a restricted portion of the computer's memory. For acceptable performance, Invrea recommends using *Scenarios* on a computer with a JVM that has access to at least 200 MB of memory.

(↩)

Sometimes, after being given a model, *Scenarios* will stop scenario generation and return what it calls a **zero-probability error**. The dialog box that pops up recommends examining your model - both the assumptions made within it and the data used to train those assumptions - for correctness. That dialog box will forward you to this location for more information. This documentation entry gives a detailed explanation of what a zero-probability error is, why it happens, and what you can do about it.

In short, a zero-probability error occurs when *Scenarios* is given a model that it deems to be impossible, or so close to impossible that it makes no practical difference. For a very basic example, consider the following spreadsheet:

In this model, an assumption is made that the cell `A1` is between 0 and 2. However, after that, it is provided as a data point to learn from that the cell `A1` is actually between 4 and 6. Obviously, both of these statements cannot be true. As a result, when the user attempts to generate scenarios using this model, a **zero-probability** error will be shown. The initial assumption about cell `A1` assigns zero probability to any value not between 0 and 2, and the actual involving `A1` assigns zero probability to any value not between 4 and 6. Because no value lies both between 0 and 2 and between 4 and 6 at the same time, this model assigns every possible scenario zero probability, and so *Scenarios* cannot report any results.
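To make this concrete, here is a minimal Python sketch of the situation (an illustration only, not how the inference engine itself works): scenarios for `A1` are sampled under the assumption that it lies between 0 and 2, and each is then weighted by the actual, which gives weight zero to anything outside 4 to 6.

```python
import random

random.seed(0)

def weight(a1):
    # The actual says A1 lies between 4 and 6: any scenario outside
    # that range is assigned probability (weight) zero.
    return 1.0 if 4.0 <= a1 <= 6.0 else 0.0

# The assumption says A1 lies between 0 and 2, so every sampled
# scenario falls outside the range the actual allows.
weights = [weight(random.uniform(0.0, 2.0)) for _ in range(10_000)]

print(sum(weights))  # 0.0 - every generated scenario has zero probability
```

With a total weight of zero there is nothing to normalise, which is exactly why no results can be reported.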

However, zero-probability errors can be more insidious than this: they also occur when a model considers every possible scenario to be so unlikely that its probability is effectively rounded off to zero. Consider the following model:

In this model, cell `A1` is assumed to be a normal random variable with mean 0 and standard deviation 1. Then, an actual to learn from specifies that the value of the cell `A1` is actually between 7 and 8.

To see exactly how unlikely this result is, it's necessary to go through a little mathematics. The commonly used **68-95-99.7** rule says that data points more than three standard deviations away from the mean of a normal distribution account for only about 0.3% of its probability mass, so we would assign them a likelihood of about **one in 300**. Unlikely, but not *that* unlikely. The observed data point in this model, however, is between 7 and 8 standard deviations away from the mean - going through the mathematics, the prior likelihood of this result is approximately **one in 800 billion**. In rough terms, unless you run hundreds of billions of scenarios, this result will essentially never be seen. Therefore, *Scenarios* will consider this model to have zero probability.
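That figure can be checked with a few lines of Python using only the standard library; `math.erfc` gives the tail of the normal distribution, so no external packages are needed.

```python
import math

def normal_cdf(x):
    # CDF of the standard normal, via the complementary error function.
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

# Prior probability that a standard normal draw lands between 7 and 8.
p = normal_cdf(8.0) - normal_cdf(7.0)

print(f"P(7 < X < 8) = {p:.3e}")  # roughly 1.3e-12
print(f"about one in {1.0 / p:,.0f}")
```

The reciprocal comes out at roughly 800 billion, matching the figure quoted above.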

Finally, before moving on to what you can do about zero-probability errors, we'll cover one more kind. This kind occurs when the user does not fully appreciate the difference between discrete and continuous probability distributions. Consider the following spreadsheet:

In this model, cell `A1` is again assumed to be a standard normal random variable (with mean 0 and standard deviation 1). However, this time, it is included as an `ACTUAL` that the cell `A1` is a draw from a `CHOICE` distribution whose parameters are 0 and 1. The problem here is that `GAUSSIAN` is a continuous distribution, which returns floating-point numbers, while the `CHOICE` distribution used in the `ACTUAL` is discrete, and can only produce 0 or 1. If `A1` is not *exactly* zero or *exactly* one, then all generated scenarios will have zero probability. In exceedingly non-rigorous terms: because the `GAUSSIAN` distribution is continuous, and therefore ranges over an infinity of possible numbers, the probability that it is *exactly* zero or one is two divided by infinity, which is zero. So if *Scenarios* is confronted with a model like this, where discrete and continuous random variables are mixed up, it will throw a zero-probability error.
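You can see this effect with a quick Python experiment (again just an illustration, not the engine itself): draw a million values from a standard normal and count how many land *exactly* on 0 or 1, the only values the discrete distribution can produce.

```python
import random

random.seed(0)

# Count draws from GAUSSIAN(0, 1) that are *exactly* 0 or *exactly* 1 -
# the only values a CHOICE distribution over 0 and 1 can produce.
hits = sum(1 for _ in range(1_000_000)
           if random.gauss(0.0, 1.0) in (0.0, 1.0))

print(hits)  # 0 - no continuous draw lands exactly on 0 or 1
```

Every scenario therefore receives weight zero under the `ACTUAL`, which is precisely the zero-probability condition described above.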

What you, the user, should do about zero-probability errors depends on the nature of your model. First of all, check for any obvious errors along the lines of the three examples above: making incompatible assumptions, including essentially impossible data points, or confusing discrete and continuous distributions or types. If none of these apply, we recommend inspecting your model by highlighting assumptions, to see whether any `ACTUAL` statements are particularly unlikely, or by removing individual `ACTUAL` statements and then generating scenarios to see which `ACTUAL` statement is responsible for the zero-probability error. It may nevertheless be the case that a zero-probability error is occurring because your model is simply extremely complex (i.e. features an extremely large number of assumptions or actuals) or is poorly matched to the existing data. Depending on how much computing power you have access to, these problems can be fixed by generating more scenarios. If you think you've done all you can, contact Invrea here; we'd be glad to help simplify or otherwise improve your model.

(↩)

The current, alpha version of *Scenarios* comes with the following limits: it can run no more than 10 million scenarios per inference run, and it can be run on spreadsheets with no more than 10,000 cells. If either of those limits prevents you from adapting *Scenarios* to your use case, please email us here and let us know! We'd be happy to meet your challenge, whether by enhancing *Scenarios* or by providing a custom solution.

(↩)

The question of how many scenarios you should run has a short answer and a long answer.

**Short answer:** Run as many as you are willing to wait for. The more scenarios, the clearer and more accurate your results will be, but the longer they will take to produce; so run as many as you can. At Invrea, when creating and testing our models, we run on the order of 10,000 scenarios; when presenting results, we generate and report at least 100,000. We do not recommend ever running fewer than 1,000 scenarios except for the most basic of tests, and *Scenarios* currently cannot run more than 10 million.

**Long answer:** Some problems are 'easy' - they take only a small number of scenarios to solve - and some are 'hard', in that they take a large number of scenarios to solve. Telling the difference between easy and hard problems is not simple, even for experts, so you're best served running as many scenarios as possible until the result is clear enough that it cannot be misinterpreted. Judging the plausibility and accuracy of your results is an art rather than a science - it is the core difficulty in building a model, and it cannot be fully automated. Generally, your intuitions about plausibility and accuracy will take the form of expectations about the range of possible results, and about the smoothness of the distribution of possible results, depending on the type of problem. If your results are not in the range you expect, you may have built the model incorrectly, or you may need to run more scenarios. If the distribution of your results is not as smooth as you expect, then you probably need to run more scenarios. For example, a spreadsheet model whose only distribution is `GAUSSIAN` will most likely have a smooth output - if it does not, run more scenarios. The same does not hold for spreadsheet models that use discrete distributions, such as `BINOMIAL` or `POISSONIAN`; for these problems, however, it will usually be easier to identify bounds on the results you expect to see.
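The effect of running more scenarios can be illustrated with a small Monte Carlo experiment in Python (a sketch outside *Scenarios* itself): estimating the mean of a `GAUSSIAN` with mean 0 and standard deviation 1 from increasing numbers of scenarios. The error of such an estimate shrinks roughly like one over the square root of the number of scenarios.

```python
import random

random.seed(42)

def estimate_mean(n):
    # Monte Carlo estimate of the mean of GAUSSIAN(0, 1) from n scenarios.
    return sum(random.gauss(0.0, 1.0) for _ in range(n)) / n

# More scenarios -> estimates cluster ever more tightly around the true mean, 0.
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} scenarios: estimated mean = {estimate_mean(n):+.4f}")
```

Quadrupling the number of scenarios roughly halves the typical error, which is one reason 'hard' problems become expensive quickly.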

(↩)