Intermediate Financial Modelling
October 2020
Master Financial Modelling Specialist
Course overview
This course will focus on the tools and techniques available in Microsoft Excel which will enable you to develop, run
and review financial models across multiple scenarios.
You will gain a sound and practical insight into financial model building, comparison of financial
values and evaluation of financial risks.
At the end of the course you will be able to present your own project.
Course Objectives
Methodology:
• Practical and interactive by using questions, exercises and case studies so the participants can put
the learning objectives into practice.
• Participants will work through examples and case studies, building their own models and
putting the learning objectives into practice
• A significant aspect of the course is that delegates will design, construct and test their own
practical financial model that they will use at work. This task might include the correction and
enhancement of an existing model.
Please note, this is a significant task as several parts of the course will be devoted to it.
Income Statement
Balance Sheet
Statement of Cash Flow
• Analysing
o Profitability
o Return
o Liquidity
o Asset Usage
o Cash Flow
o Investor
• Common size statements
• Growth ratios
Business Intelligence 1
• Introduction to
o Power Query
o Power Pivot
o Power BI
Budget Model
• Budget variances
Original Materials © Duncan Williamson October 2020
• Cash flow budgets
• Monthly cash model
• Breakeven analysis in theory and in practice
Business Intelligence 2
• Enhancing your Power Pivot reports by using
o DAX: Data Analytic eXpressions
o Cube Functions
Investment Decisions
• Calculating the NPV, XNPV, IRR, MIRR, XIRR, Profitability Indices and payback period
• Sensitivity Analysis for a Business Case
• Developing and Managing Scenarios
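Ahead of that session, the appraisal measures listed above can be sketched outside Excel too. Here is a minimal Python sketch, using a hypothetical project's cash flows: NPV and payback directly, IRR by bisection (XNPV, XIRR and MIRR are left out for brevity).

```python
# Sketch of basic investment appraisal measures. Cash flows are hypothetical.
# Note: unlike Excel's NPV(), this npv() treats the first cash flow as time 0.

def npv(rate, cashflows):
    """Net present value of cashflows [t=0, t=1, ...] at a periodic rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection (assumes one sign change in NPV)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback(cashflows):
    """Number of whole periods until cumulative cash flow turns non-negative."""
    total = 0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None  # never pays back

flows = [-1000, 400, 400, 400, 400]   # hypothetical project
print(round(npv(0.10, flows), 2))     # NPV at 10%
print(round(irr(flows), 4))           # IRR
print(payback(flows))                 # payback period in years
```

Excel's built-in functions do all of this for us, of course; the point of the sketch is only to make the arithmetic behind each measure explicit.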
Risk Modelling
Simulation Techniques
Business Intelligence 3
In order to enhance the interactivity of the course, you are being given the chance to work through
several of the exercises in advance of the start of the course. Here is a list of the Excel files you are
being given in advance and your task is to work through them and be ready to discuss and defend
your results during the live sessions. You will also see here some of the questions that you should
consider before the course begins.
Hours 1 to 3
Every file contains an intro tab that explains what it is that you should do
What are the steps we need to take to create that Data Table?
What is the profit figure when the Selling Price is $1.25 and the Unit Cost is $0.55?
What is the profit figure when the Selling Price is $2.75 and the Unit Cost is $0.45?
Are there any combinations of Demand and Selling Prices that never make a profit? If so, what are
they?
Are there any combinations of Demand and Selling Prices that never make a loss? If so, what are
they?
Do we see the break even combination of Demand and Selling Prices in this Data Table?
How many break even points could there be in a Data Table based on Demand and Selling Prices?
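For anyone who wants to check their Data Table reasoning before the session, the same kind of profit grid can be sketched in Python. All figures here (demand, fixed costs) are hypothetical, not the ones in the course file.

```python
# Mimic a two-variable Excel Data Table: profit for every combination of
# selling price (rows) and unit cost (columns). Demand and fixed costs are
# hypothetical figures, not those in the course file.

def profit(price, unit_cost, demand=10000, fixed_costs=8000):
    # contribution per unit times demand, less fixed costs
    return (price - unit_cost) * demand - fixed_costs

prices = [1.25, 1.75, 2.25, 2.75]
unit_costs = [0.45, 0.55, 0.65]

table = {(p, c): profit(p, c) for p in prices for c in unit_costs}

print(table[(1.25, 0.55)])   # one cell of the grid
print(table[(2.75, 0.45)])   # another cell

# Combinations that make a loss; break-even sits where profit is exactly zero,
# i.e. where contribution (price - unit cost) * demand equals fixed costs
loss_making = sorted(k for k, v in table.items() if v < -1e-9)
print(loss_making)
```

In Excel the Data Table fills the grid for us; the dictionary comprehension above plays the same role.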
The following correlation matrix comes from a completely different data set to the Fortune data
set(s): how would you explain what it is telling us?
Creating the Connection Only Queries should have been simple for you … was it?
Did you face any problems?
You will need your file up to date so that we can work on the in-class exercises during the course
Hours 4 to 6
Did you add any new columns to the data provided? If so, what were they?
How many graphs did you draw? What did you put on it/them?
What are the totals for each quarter of the 2021 budget?
Q1
Q2
Q3
Q4
Total
Master Budget: feel free to attempt this exercise but all I want you to do, really, at this stage, is to
make yourself familiar with the data in the file shovel_company_delegates.xlsx
Is the Sales Budget always the first budget an organisation should prepare? Say why it is or it isn’t.
The amount of steel required for production of shovels and rakes in January is 15,120 Kg: True or False?
The amount of steel required for production of shovels in February is 9,840 Kg: True or False?
The amount of steel required for production of rakes in April is 14,520 Kg: True or False?
Forecasting and Projections models: again, you are only required to familiarise yourself with the
data in this file since we will work through it in full during the live sessions:
board_games_delegates.xlsx
Why is using the Forecast Sheet in Excel not appropriate for the total board games data set? Explain
your answer.
Does the following graph help to explain why the Forecast Sheet method is not appropriate for the
entire data set? Say why you think that and explain what you would do after seeing this graph.
Explain what you see on the following graph from this exercise:
Hours 7 to 9
• Payback
• NPV
• IRR
• PI
• MIRR
• XNPV
• XIRR
Does it help to use the Scenario Manager in an exercise such as this one? Explain why you think that.
The optimal profit value for the sheet 1 exercise is 1,066, yes or no?
The optimal cost value for the sheet 1 (2) exercise is 113, yes or no?
The optimal profit value for the sheet 1 (3) exercise is 146.6667, yes or no?
Be ready to discuss the map you prepared and the other visualisations you have prepared.
For the other visualisations, you should have categorised them as, at least:
• Download
• Upload
• Latency
Google share prices simulation: make yourself familiar with the contents of the file at this stage
only: goog_sh_prices_2004_08072020_simulation.xlsx
Be familiar with the contents of this file and then we will work on it from scratch in this session.
Excel Functions
There are more than 470 functions listed as being available in Excel 2019 in a wide variety of
categories:
• Compatibility functions
• Cube functions
• Database functions
• Date and time functions
• Engineering functions
• Financial functions
• Information functions
• Logical functions
• Lookup and reference functions
• Maths and trigonometry functions
• Statistical functions
• Text functions
• User defined functions that are installed with add ins
• Web functions
We simply won’t have time to look at all of the functions or indeed all categories. We will look at
some functions in the following categories during this session and at some of them in other sessions.
In addition, we will have looked at some of these functions already!
Let’s begin with the latest functions, the Dynamic Array Functions that are now widely released:
SORT
SORTBY
RANDARRAY
UNIQUE
mathematics
ABS function
AGGREGATE function
INT function
LN function
MINVERSE function
MMULT function
MOD function
PRODUCT function
statistical
AVERAGE function
AVERAGEA function
AVERAGEIF function
AVERAGEIFS function
CONFIDENCE.NORM function
CONFIDENCE.T function
financial
FV function
FVSCHEDULE function
INTRATE function
IPMT function
IRR function
ISPMT function
MDURATION function
MIRR function
logical
AND function
FALSE function
IF function
IFERROR function
IFNA function
NOT function
OR function
TRUE function
XOR function
lookup and reference
CHOOSE function
GETPIVOTDATA function
HLOOKUP function
INDEX function
INDIRECT function
LOOKUP function
MATCH function
OFFSET function
ROW function
ROWS function
TRANSPOSE function
VLOOKUP function
XLOOKUP
XMATCH
text
CHAR function
CLEAN function
CODE function
CONCATENATE function
FIND, FINDB functions
LEFT, LEFTB functions
LEN, LENB functions
MID, MIDB functions
REPT function
RIGHT, RIGHTB functions
TEXT function
TRIM function
database
DAVERAGE function
DCOUNT function
DCOUNTA function
DMAX function
DMIN function
DGET Function
DPRODUCT function
DSTDEV function
DSTDEVP function
DSUM function
Excel Tables
An Excel Table turns a range of cells into a searchable, sortable, programmable table. These tables
link to Pivot Tables as never before, they link to graphs as never before and they carry out functions
as never before.
As an example of the power of Excel Tables, for want of a better solution, I set up and managed the
student records for a college with 1,700 students each studying for four semesters plus a language
course. In each semester there were six subjects and each subject had six marks or grades to be
awarded to it.
I used Excel Tables for this job because there was no budget for any other software and the team I
was working with could not manage a database system on their own, with or without training.
1,700 * 6 * 6 * 4
= 244,800 bits of information
In addition the table was programmed to distinguish between pass, fail and referred. It had to take
account of updates and to issue a final pass or fail award.
I ran this system on an ordinary Windows laptop with no special features and it coped very well. I
could search for any student at any time, for any subject, for any teacher for any mark or grade: all
using the features of an Excel Table.
Open the file excel_tables_demo_delegates.xlsx to see the above examples and then do the
following:
1 Add this information to each table, in row 31: Plastics, 16 April 2014, Shah, $225,437
2 Insert an additional column between the two tables so that there are two empty columns, E and F.
3 Do the following:
4 Do the following:
5 Use the two tables to show ONLY the Plastics sales information. Then show all of the information
again.
6 Use the two tables to create a total for the Value column
Let’s apply this information to other examples: find a much larger table and repeat the exercise we
have just done and then move forward.
Pivot Tables
In this section on Pivot Tables, we will begin our work on, among other things:
• Sales report
• Profit report
• Calculated Fields
• Value As
• Grouping
I am assuming that you are already at least familiar with Pivot Tables so I will not include
introductory materials here. We know that a Pivot Table (PT) summarises data in a flash: it takes a
list or, preferably, an Excel Table and organises it and reorganises it in almost any way you care to
name.
Having worked through the previous examples, we will now work on the communications aspects of
PTs: among other things:
• Slicers
• Timelines
• Conditional Formatting
• Pivot Charts
I will provide you with examples and data as we enhance our Pivot Table knowledge and skills.
Open the file basic_lookup_examples_delegates.xlsx to consider, but I also want to introduce you to
the OFFSET function, which has many uses. For some reason, OFFSET does not seem to be widely
used so these examples should help you to overcome that for yourself.
• VLOOKUP() is good but it is designed to look for information by going down and looking to
the right.
• OFFSET() is designed so that it can go from looking down, looking right, as well as looking up
and to the left
That’s not to say that we can’t do more with VLOOKUP(), we can; but on average with VLOOKUP()
we go from one green to one blue to one orange.
We have also included further OFFSET() examples for anyone to practise with at the end of the
section.
Although it’s not important for the examples you are about to see, one important feature of the
OFFSET function is that it can read left and right (using positive and negative references) whereas
functions such as VLOOKUP can only read from left to right: this can be a massive bonus.
Imagine you are following a company using a database such as the one found on www.ft.com. In this
database you can find the:
• Income statement
• Balance sheet
• Statement of cash flows
for five continuous years for any listed company and can copy and paste these statements into an
Excel worksheet. Now, though, imagine that the new year comes along and you want to update your
worksheet just for the new year. You can keep the old data, of course, so that you then have six
years’ worth of data then seven years and so on. Take a look at the screenshot below for a potential
problem, Figure 2, which comes from the ratio_analysis.xlsx file that we used when we discussed
ratio analysis:
Typically, financial reports read from right to left, as in the case above: the earliest year on the right
and the latest year on the left. That means that we have to insert 2015 in between the current
columns A and B. Well, that part’s easy, of course and can be achieved in a few ways.
Before we use the OFFSET function let’s see what happens when we use traditional ratio calculations.
Let's say that you want to analyse some of amazon.com’s recent financial results (Figure 3) and
please note, none of the changes we are discussing here are in the file we created for this section;
so please work through everything here by yourself!
In addition to the profit margin calculations that we will discuss below, we have added average
revenue and standard deviation of revenue calculations:
• G9=Average Revenues
• G10=Standard Deviation Revenues
• H9=AVERAGE(A8:F8)
• H10=STDEV(A8:F8)
Now, when you update your worksheet next year to include 2015 in a new column B, you should see
revised formulas like this:
Can you see: in column I, all of the formulas were updated by shifting them right, from B to C: what
was, for example, =(B8-B10)/B10*100 became =(C8-C10)/C10*100.
So, you say, add some absolute addressing and change them to, for example,
($B8-$B10)/$B10*100
but all you get is =($C8-$C10)/$C10*100
The OFFSET function can do all of the work for you and let’s start with the easier of the two sets of
calculations first: average and standard deviation of the five years’ revenues.
Average Revenues
Instead of this formula =AVERAGE(B8:F8)
Let’s use this formula =AVERAGE(OFFSET(A8,0,1,1,5))
What we are saying here with OFFSET is start with the reference, cell A8, stay in that row
(value=0), move one column right (1), the height of the range you are selecting is one row (1) and
you have chosen a width of five columns (5). Like the first example in the graphic below:
In the second example above we have changed the columns to two so that OFFSET now stays in the
target row, row 8 and then moves to column two of the range which is C8 and we have left the
height alone at [1] and changed the [width] to 4 columns so now OFFSET is setting up the range
C8:F8 for the average or standard deviation functions to use and not B8:F8.
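If it helps to see the mechanics outside Excel, OFFSET's behaviour can be imitated in Python over a list-of-lists "worksheet". The revenue figures below are invented purely for illustration.

```python
# A toy model of Excel's OFFSET(reference, rows, cols, [height], [width]).
# grid[r][c] stands for the worksheet; row 0 here plays the part of row 8,
# starting at column A. Revenue figures are invented for illustration.

def offset(grid, ref_row, ref_col, rows, cols, height=1, width=1):
    """Return the height x width block starting rows/cols away from the reference."""
    r0, c0 = ref_row + rows, ref_col + cols
    return [row[c0:c0 + width] for row in grid[r0:r0 + height]]

sheet = [[None, 100, 120, 150, 170, 200]]  # "A8" is None; B8:F8 hold revenues

# =AVERAGE(OFFSET(A8,0,1,1,5)) -> stay in the row, step one column right,
# take a 1-row, 5-column range, i.e. B8:F8
values = offset(sheet, 0, 0, 0, 1, height=1, width=5)[0]
print(sum(values) / len(values))

# Step two columns right with width 4 instead: the range becomes C8:F8
print(offset(sheet, 0, 0, 0, 2, height=1, width=4)[0])
```

The point of the sketch is that nothing in the call names B8:F8 directly: the range is always built relative to the reference cell, which is exactly why OFFSET survives column insertions.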
There is no doubt that it could be a challenge to understand the difference between the AVERAGE
and STDEV formulas that include the OFFSET function so here is something to help you … N! By
adding +N() to the end of the OFFSET functions we have added a comment INSIDE the formula to
explain what it does so that it can help the user to see what’s going on if it’s not immediately obvious.
In the case of the profit margin calculations, we have set them up to find the margin year by year
and not just for one period or a range of periods. That means filling right, as we saw above:
So the OFFSET function tells Excel where to begin, the reference cell. Then Excel starts its
calculations without directly naming any range or setting out cell addresses and in that way, when a
new column is inserted, there are no references to get updated or overwritten.
Play with the OFFSET function now and don’t forget, as with any function, the options in square
brackets […] are optional. Reminder: there are more examples to work on at the end of this article.
Let's work on the gross and operating income margins of the last five years for amazon.com. We start
in cell H7, with the ratio titles being in cells G7:G8
In this case OFFSET starts in $A8, stays in the same row (0), moves 1 column to the right into B8 (we
have used a cell reference to make that happen, not just by typing 1), then it chooses one row and
one column to work with. Fill that formula right and you see this in the next column:
OFFSET starts in cell $A8, stays in that row, moves I5, i.e. 2, columns to the right to get to cell C8
and then picks up the value in that cell …
Now that you have come this far, go back and insert a new column for another year and enter or
create some new data for the cells we are using here and see if the OFFSET function has done what
we want … working on the last five years’ worth of data only!
I have to say that if you have a lot of OFFSET functions your worksheet might slow down. This is
because the OFFSET function is volatile: so every formula containing OFFSET will be recalculated
after each worksheet change. But using OFFSET could well be significantly easier than manually
adjusting each formula.
More examples if you think you need the practice: we have colour coded the cells in the range A3 to
C7 and your task is to derive the OFFSET function to do the following:
Whilst we have not provided an Excel file here for you, the solutions to these examples are:
1 =OFFSET(H12,0,0)
2 =OFFSET(H12,-1,1) … or =OFFSET(H12,0,0)/10 … … or =OFFSET(H12,0,0)/(10+5)
3 =OFFSET(H12,1,-2)
4 =SUM(OFFSET(H12,2,3,2)) … or =AVERAGE(OFFSET(H12,2,3,2)) … or something else!
5 =SUM(OFFSET(H12,-3,-4,3,3)) … or AVERAGE(OFFSET(H12,-3,-4,3,3)) … or something else
It is not possible to work through too many array functions in the time available to us so I will provide
you with a few examples here to help you. The essence of an array function is that it is often
something that ordinary Excel cannot do and it does it in memory rather than in the sheet! The bad
thing about array functions is that they are not intuitive and they are often undocumented.
Matrix Calculations
By a matrix we mean a 1 x 2 or 2 x 1 or 3 x 3 or 100 x 500 range or vector. You might have learned
algebra and then matrix algebra at school and either used them over and over or never have used
them again.
I use matrix algebra to solve simultaneous equations and that can help me with Overhead Allocations
and Apportionments in Cost Accounting.
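In Excel that means MINVERSE and MMULT, both of which appear in the function list earlier. The same idea can be sketched in Python, for a hypothetical pair of simultaneous equations of the kind that turn up in overhead apportionment.

```python
# Solve A x = b by inverting A, as MINVERSE/MMULT do in Excel.
# Hypothetical system:  2x + 1y = 35
#                       1x + 3y = 55
# (e.g. two cost equations in an overhead apportionment exercise)

def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the determinant formula."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

def mmult(m, v):
    """Matrix (2x2) times vector (length 2), like Excel's MMULT."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

A_inv = inverse_2x2(2, 1, 1, 3)
x, y = mmult(A_inv, [35, 55])
print(x, y)  # the solution of the two equations
```

In Excel the equivalent is an array formula along the lines of =MMULT(MINVERSE(A), b), entered over the output range.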
From my blog: excelmaster.co: this page from my Excel Blog also uses the SOLVER routine in Excel
which is a skill that financial modellers need to have too.
Goal Seek
Goal seek is a very handy little routine that has been built into Excel and you can find it here:
The previous example, using matrices in Excel, also included an example in which SOLVER is useful.
If you worked through that example, you already know a little bit about SOLVER. I am including other
SOLVER based questions in these materials for you.
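The idea behind Goal Seek — search for the input value that drives a formula to a chosen target — can be sketched as a simple bisection search. The profit function below is purely illustrative, not from any of the course files.

```python
# A minimal Goal Seek: find the input that makes f(x) hit a target value.
# Assumes f crosses the target exactly once between lo and hi, much as
# Goal Seek does when it converges. The profit function is illustrative.

def goal_seek(f, target, lo, hi, tol=1e-9):
    f_lo = f(lo) - target
    for _ in range(200):                 # bisection: halve the bracket each pass
        mid = (lo + hi) / 2
        f_mid = f(mid) - target
        if abs(f_mid) < tol:
            return mid
        if (f_lo < 0) == (f_mid < 0):    # same sign: root is in the other half
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return (lo + hi) / 2

profit = lambda units: 12 * units - 0.002 * units ** 2 - 10000  # illustrative
breakeven_units = goal_seek(profit, 0, 0, 3000)
print(round(breakeven_units, 1))
```

Goal Seek uses a more sophisticated search than plain bisection, but the contract is the same: one changing cell, one formula cell, one target value.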
I have summarised the main points of this list by considering extracts from various sources. We will
fill in any gaps in our discussion of the kinds of models that you use and develop at work.
The fundamental issue in any evaluation model is how to assess the risk of future cash flow
projections. Think about an example with a stock that trades in the market.
Let's assume that stock ABC is trading at $60 per share. We might say, based on financial theory, we
want to own stock ABC. It helps us to diversify our portfolio overall and looks like a solid
company. That's great but we still don't know whether we should actually buy the stock based on its
current price of $60 per share.
We can turn to a mathematical model: some sort of a computation to evaluate whether or not this
stock is undervalued or overvalued and maybe our mathematical model suggests that stock ABC is
worth $30 per share. The problem is that the mathematical model is not taking into account what's
actually going on in the markets themselves, the current practical market information. It's not taking
into account things like, what is the prevailing P/E ratio in the market?
Instead, we need to take into account what the mathematical model says and combine that with
practical market information to build a comprehensive, overall financial model. Using that financial
model then, we can take into account both the valuation from a mathematical perspective and the
existing market trends and make a decision about that investment.
There are five different areas where we're most likely to need a financial model.
1 DCF, or discounted cash flow analysis. We're typically going to use a DCF model to value a
business or another asset of some sort. And so, what we're really doing is taking our
theoretical model and we're trying to put actual practical values, based on what's going on in
the market, into that DCF model to make a decision whether we should buy or sell that
particular investment.
2 M&A analysis lets us go through and evaluate the attractiveness of a potential merger or
acquisition target, or a divestiture of one of our business segments.
3 Capital raising. For example, we might be interested in analysing the pro forma impact of
raising debt or equity or other kinds of capital events. We might ask ourselves okay, given
what's going on in markets as a whole, does it make sense to raise capital in order to expand
the firm in order to pursue new growth opportunities or new investments? A financial model
can help us make this decision.
4 LBO analysis. If we're working for a private equity firm we might ask, how much borrowing
can we use to acquire a company? How much debt can we use to buy out the equity of that
company, to try to improve its operations, and earn a return?
5 Perhaps the most common area where you might want to use a financial model is in sensitivity
analysis. We're going to be interested in not only a go … no go decision around the financial
model but looking at how confident we are in that go … no go decision. In other words, if our
assumptions prove to be incorrect, we need to be able to evaluate whether the decision still
makes sense in the context of weaker or stronger assumptions. If our growth projections
prove too optimistic, and we instead loosen those growth projections, so we're now growing
at a slower pace, does it still make sense to pursue this particular investment?
In diagram form:
More and more areas of life involve long term financial models. The results of these models can affect
thousands or even millions of people and billions of pounds. So it is more important than ever that
these models are up to the job. For example the UK Government was left with red faces and a very
large bill when the model used for granting rail franchises was found to be not good enough.
So what makes a good model? Fundamentally you need an experienced competent person or team
running the model. Beyond that there are some basic features that a good model should have. I have
listed 9 key features but there are plenty of other features that might be desirable for a good model.
Equally there may be times when you have to disregard one of these principles. But if you do you
should know you are breaking the rules and you should have a very good reason.
1 The model must be able to reflect the risk profile and possible outcomes of whatever is
being modelled
Remember that if you don't build a feature into a model, then it's no surprise if it doesn't turn up in
the model results. If it's important, model it. If it's not important ignore it, and know that you have
ignored it. All models are incomplete. Good modellers know where the gaps are and why they don't
matter.
In other words: make sure the model does what you say it will. Make sure it will do it in any
conceivable circumstances. And make sure you clearly record what you have modelled, how you have
modelled it and why.
If you can't adequately test the model, how can you be sure that it is correct? Ideally test it against a
manual calculation or a trusted independent model. This is of course over and above ensuring that
your model is theoretically correct (i.e. doing what you think it should).
4 Key features should be parameterised and parameters should be set carefully taking
into account the purpose of the calculation
It's good practice to make any key features of the model that might change, such as an interest rate,
parameters so that they can be changed without changing the model itself. This is to make life easier
for everyone and reduce the need to test the model in depth, which you would need to do if you
changed the model itself.
These parameters will of course affect the result of the modelling so you should make sure that they
are appropriate for whatever task you are currently working on. You may need to gather data, or ask
key decision makers to justify the parameters you are using.
Not all models will be simple. It will depend on what you need to model (see point 1). But adding
bells and whistles to a model when it won't change the results in any meaningful way only increases
the time and costs spent to no gain. It also makes it harder to spot and/or fix errors. And it also
means that it will be harder to explain exactly what the results mean.
6 The results should be easy to find, clearly defined and in a format that can be explained
easily to the end users
It is no use having a really clever model if you have to trawl through pages of irrelevant information
to find what you want. That increases the dangers that users end up picking out results that are not
what they think they are and using them inappropriately. For the same reason, all results should be
clearly and unambiguously labelled.
Unless your model is the size of the universe, it will have limitations. It's better for you and end users
if you know what these limitations are and are up front about them.
This makes it easier to test and parameterise the model and gives focus to the results obtained (see
point 6).
9 You should be able to develop and update the model without too much difficulty
It's unlikely that any model will achieve its final form immediately. Users may come up with new
requirements and circumstances may change meaning that the model needs to be updated. (And this
is yet another reason why documentation is so important.)
Finally, a warning. Don't treat that complicated black box with too much reverence. It is just a tool,
made by people for people. Nothing more. Models are great tools, but really bad masters. Master
your model and make sure it serves your purpose, not the other way around.
http://hubpages.com/money/12-features-that-make-a-good-financial-model
We will be using Excel’s Data Tables functionality as well as the new Dynamic Array functionality, the
SEQUENCE(), PMT() and IPMT() functions to create data tables
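As a preview of what those functions will build, here is a hypothetical loan payment table sketched in Python. The pmt() function below is the standard annuity payment formula that Excel's PMT() implements; the loan figures are invented.

```python
# Sketch of a loan data table like one built with SEQUENCE() and PMT() in Excel.
# Loan amount and rates are hypothetical.

def pmt(rate, nper, pv):
    """Periodic payment for a loan of pv over nper periods (negative, as in Excel)."""
    if rate == 0:
        return -pv / nper
    return -pv * rate / (1 - (1 + rate) ** -nper)

loan = 100000
years = 20
rates = [i / 100 for i in range(1, 7)]  # SEQUENCE-style list: 1%, 2%, ..., 6%

for r in rates:
    print(f"{r:.0%}: {pmt(r, years, loan):,.2f}")
```

In Excel the same table is one spilled formula: rates generated by SEQUENCE() down a column, PMT() referring to that spill range.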
Income Statements
Balance Sheet
• Analysing
o Profitability
o Return
o Liquidity
o Asset Usage
o Cash Flow
o Investor
This chapter contains what I have called basic analysis and we will enhance this chapter as
appropriate throughout the day.
One way to calculate the volatility of a financial asset is to use historical prices. Because the volatility
is constantly changing, any calculation based on historical data is only an estimate of the current
volatility. Calculating a standard deviation of the returns of all the available data normally isn't adequate
because it averages over periods of high and low market activity. In the following sections, I offer a
variety of ways of calculating volatility.
The simplest calculation of volatility to use is the standard deviation over M days of returns:

\sigma_n^2 = \frac{1}{M} \sum_{i=1}^{M} r_{n-i}^2

In words, this formula says that the square of the volatility, \sigma_n^2, on day n is the average of the
squared returns on the M previous days. You can choose to let M be much smaller than the total
number of data points so that you can make a running chart of the volatility, such as the one shown
in Figure 7-2.
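The M-day calculation can be sketched in code. The prices below are invented, and returns are taken as simple day-on-day percentage changes (the text's precise return definition may differ).

```python
# M-day volatility: sigma_n^2 is the average of the squared returns over the
# previous M days. Prices are invented; returns are simple day-on-day changes.

def returns(prices):
    return [(p1 - p0) / p0 for p0, p1 in zip(prices, prices[1:])]

def m_day_vol(rets, n, m):
    """Volatility on day n from the m returns up to and including day n."""
    window = rets[n - m + 1:n + 1]
    return (sum(r * r for r in window) / m) ** 0.5

prices = [100, 102, 101, 103, 107, 104, 106, 105]
rets = returns(prices)

# Running 4-day volatility: one value per day once the window is full
running = [m_day_vol(rets, n, 4) for n in range(3, len(rets))]
print([round(v, 4) for v in running])
```

This is the same running chart idea as Figure 7-2, just over a toy series.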
Weighting returns
The volatility of a financial asset is constantly changing, and if you use the latest returns, M, to calculate
it, you'll be using some stale information. If you treat each of the M historic data points as equally
important, data from M days ago have the same importance as data from today. You can avoid this by
using the method of exponential smoothing which is also called an exponentially weighted moving
average (EWMA).
ln a weighted average, you don't treat the numbers being averaged equally.
Some are considered more important than others and get a higher weight. In the EWMA, recent returns
get higher weight than the returns from a longer time ago. It intuitively makes sense that market
participants pay more attention to recent history.
In this method, you calculate the square of the volatility on day n using a weighted average of the
squared volatility on the previous day and the squared return on day n. You can choose the value of
the weighting, \lambda, to be anything between 0 and 1. Sometimes \lambda is called a smoothing
parameter because by averaging over previous values of the squared returns, the EWMA smooths out
jumps in the squared returns. If you choose a value close to 0, there is very little smoothing of the
squared returns. On the other hand, a value close to 1 leads to lots of smoothing and the EWMA can
be used to see trends in volatility. Many market practitioners use a value of 0.95. You can see a chart
in which various values of \lambda have been used in Figure 7-3. Finally, the formula for volatility
using exponential weighting is:

\sigma_n^2 = \lambda \sigma_{n-1}^2 + (1 - \lambda) r_n^2
Now you can see that the squared volatility on day n is a weighted average of all the historical squared
returns. As you go further and further back in time, however (higher value of the index, i), the weighting
gets smaller and smaller. That's because \lambda is between 0 and 1 and, if raised to the power i, it
becomes smaller still. Your data set will never go back all the way to infinity, but don't worry: because
the terms get smaller and smaller, you can just set the return at the beginning of your data set to zero.
That means you can always calculate the formula. But to calculate the EWMA, always prefer the simple
iterative calculation because the computation time is much shorter.
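That iterative calculation is one line per day: the new squared volatility is λ times yesterday's squared volatility plus (1 − λ) times today's squared return. A sketch with invented returns, seeding the warm-up with the first squared return (an assumption; other seeds are possible):

```python
# Iterative EWMA volatility: sigma_n^2 = lam * sigma_{n-1}^2 + (1 - lam) * r_n^2.
# Returns are invented; the initial squared volatility is seeded with the first
# squared return (the "warm-up" the text mentions).

def ewma_vol(rets, lam=0.95):
    var = rets[0] ** 2                       # warm-up seed
    vols = [var ** 0.5]
    for r in rets[1:]:
        var = lam * var + (1 - lam) * r * r  # one-line daily update
        vols.append(var ** 0.5)
    return vols

rets = [0.02, -0.01, 0.015, -0.03, 0.005, 0.01]
vols = ewma_vol(rets, lam=0.95)
print([round(v, 4) for v in vols])
```

Because each day's value is built from the previous day's, the whole series costs one multiply-and-add per day, which is why the iterative form beats re-summing the full history.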
Figure 7-2 shows a plot of this volatility indicated as EWMA volatility. You can clearly see that after the
financial crisis of 2008 the EWMA volatility drops quicker than the equally weighted 64-day volatility. In
the EWMA calculation, I use the value \lambda = 0.98.
Exponential weighting is used to make a running or day by day, calculation of the volatility (but you can
use any time increment). A running calculation is, in effect, an update formula from the previous day's
value. The first few values you calculate will be inaccurate because at the beginning of your data you're
forced to use a return of zero. Sometimes this initial period with inaccurate values is called a warm-up
phase, like a steam engine that takes time to work up to full power.
Figure 7-3 shows the effect of different values of \lambda on the calculation of the EWMA volatility.
For the lowest value of \lambda, the EWMA volatility is more reactive to big market events, but the
high volatility then dies away quickly.
Power Query
Power Pivot is excellent without a doubt but Power Query is magic. There is so much to say about
Power Query so we will see just a few of its features here.
If you want an overview of what we can do with Power Query, here is part of the contents page from
Chris Webb’s 2014 book Power Query for Power BI and Excel Apress
There are other things that we can do with Power Query too, although we won’t have time to do
them here.
In this session I will give you a series of files that we will put through Power Query to demonstrate
some of the features Webb discusses in his book. We will look at examples such as these:
• Getting data from various sources such as Excel files and CSV files
• We will see how to combine several CSV files into one file
• We will merge Excel files
Unpivoting Data
The Unpivoting of data is something that we all need to do from time to time and Power Query has
made this process so simple. In essence, what Unpivoting does is to take a table like the table on the
top and turn it into the table on the bottom:
Note: Power Query calls the first column it creates the Attribute column and the second column the
Value column: you can change them as you wish.
What is so magical about unpivoting? Well, now that everything is in a list, it is much easier to
analyse. When we had the top table, we could only analyse things row by row. With the long list
version, we can create a pivot table and pivot the data in myriad ways.
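The same unpivot transformation can be sketched in plain Python. The fruit-by-month table below is invented for the illustration; the output column names mirror Power Query's default Attribute and Value columns:

```python
# Unpivot: turn a wide table (one column per month) into a long list of
# (key, Attribute, Value) rows, as Power Query's Unpivot Columns does.
# The data below is invented for the illustration.
wide = [
    {"Product": "Apples",  "Jan": 100, "Feb": 120, "Mar": 90},
    {"Product": "Oranges", "Jan": 80,  "Feb": 95,  "Mar": 110},
]

def unpivot(rows, key):
    long_rows = []
    for row in rows:
        for column, value in row.items():
            if column != key:
                long_rows.append({key: row[key], "Attribute": column, "Value": value})
    return long_rows

for r in unpivot(wide, "Product"):
    print(r)
```

Each wide row becomes one long row per month, which is exactly the shape a pivot table wants as its input.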
Power Query can combine two or more worksheets for us with almost no effort at all: we will work
through an example of that. I will demonstrate combining CSV files first of all and then other
examples if time allows.
Depending on the source page, getting data from the internet can be exceptionally simple now and,
provided we have a good connection, I can demonstrate that to you.
Power Query does things in the background that we used to spend a lot of time thinking about, such
as formatting cells as values or text: Power Query can often work that out for you. The same
applies to columns of data: Power Query anticipates what you want.
We can force Power Query to find and delete extraneous symbols and text from our various sources.
Again, as simple as ABC now.
We will be continuing our financial statement analysis section at the same time as we start to use
Power Query together with the data in the following files:
• A_AYYLF_IS_A.csv
• B_AYYLF_BS_A.csv
• C_AYYLF_CF_A.csv
We will use those files to import the data into an Excel file and then carry out the following:
Power Pivot and Power Query are advanced data analysis tools that are evolving in Excel and
elsewhere. You will have heard of the phrase Business Intelligence, or BI. Because you must have
Excel 2010 as a minimum and have Power Pivot etc installed, we will probably limit our discussion of
BI to a review of Power BI: another add-in that we can use to explore what is available to us.
The Excel Data Model was introduced in Excel 2010 and here is a discussion of what the data model
is and what it does. It is another step on the road to making at least parts of Excel behave like a
relational database. I have to say it is clever and I am sure you will agree with that!
Excel’s data modelling feature can help you to combine data and tables of data in the easiest way
and in this case study I demonstrate what we used to do versus what we can now do.
The Data
What we used to do: open the file combine_tables_power_pivot_delegates.xlsx to work along with
this example. Look at the combine tab first of all: the combine tab shows us what we need to do to
begin our analysis if we rely on
• VLOOKUP
• SUMIFS
• And switching columns in an input table/range to ensure VLOOKUP works
We will work through this together so your delegates file will transform itself into the solutions file.
Having created our analysis we will then work on the Data Model, which leads into a multi-source Pivot
Table that helps us to do away with the need for VLOOKUP and SUMIFS in this case.
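What VLOOKUP and SUMIFS are doing here is, in effect, a lookup join and a conditional sum. A rough Python illustration, with both tables invented for the example:

```python
# What VLOOKUP and SUMIFS do, seen as a join and a conditional sum.
# Both tables below are invented for the illustration.
customers = {"C001": "Alpha Ltd", "C002": "Beta plc"}   # the lookup table
orders = [
    {"CustomerID": "C001", "Amount": 250.0},
    {"CustomerID": "C002", "Amount": 400.0},
    {"CustomerID": "C001", "Amount": 150.0},
]

# VLOOKUP: fetch the customer name for each order
for order in orders:
    order["Customer"] = customers[order["CustomerID"]]

# SUMIFS: total the Amount column where CustomerID matches a criterion
total_c001 = sum(o["Amount"] for o in orders if o["CustomerID"] == "C001")
print(total_c001)
```

The Data Model's relationships do the equivalent of the lookup step for us, once, so every pivot table built on the model can use it without a helper column.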
It’s good spreadsheet practice to convert lists and ranges to Excel Tables and Excel's data model
demands that we do that. So, let’s do that. As you do so, give each table a realistic and easy to
remember name: click on your table then the Design tab and over on the LHS of that tab you will see
where you can change the name of your table:
We can see that the two tables contain all of the information we need: we have already seen how to
use VLOOKUP and SUMIFS to combine the tables to get them to do what we want.
Now, though, we are using the data model to do the work. Firstly, we create a relationship between
the tables:
• Start by adding each of the three tables in the file to the data model
• In Power Pivot
• Click Data View
• Create relationships like this:
• Click the Pivot Table icon on the Home tab in Power Pivot and select Pivot Table
• Create a new Pivot Table and tell Excel where to put it
• Click All to make sure you can see all three tables
• Add Customer ID to the Filters area and filter out a few of them to see what could be a vital report for you now:
Budget Model
Pro forma financial statements are financial reports issued by an entity, using assumptions or
hypothetical conditions about events that may have occurred in the past or which may occur in the
future. These statements are used to present a view of corporate results to outsiders, perhaps as part
of an investment or lending proposal. A budget may also be considered a variation on pro forma
financial statements, since it presents the projected results of an organization during a future period,
based on certain assumptions.
Full-year pro forma projection. This is a projection of a company's year-to-date results, to which are added
expected results for the remainder of the year, to arrive at a set of full-year pro forma financial
statements. This approach is useful for projecting expected results both internally to management, and
externally to investors and creditors.
Investment pro forma projection. A company may be seeking funding, and wants to show investors how
the company's results will change if they invest a certain amount of money in the business. This
approach may result in several different sets of pro forma financial statements, each designed for a
different investment amount.
Historical with acquisition. This is a backward-looking projection of a company's results in one or more
prior years that includes the results of another business that the company wants to purchase, net of
acquisition costs and synergies. This approach is useful for seeing how a prospective acquisition could
have altered the financial results of the acquiring entity. You can also use this method for a shorter
look-back period, just to the beginning of the current fiscal year; doing so gives investors a view of how
the company would have performed if a recent acquisition had been made as of the beginning of the
year. This can be a useful extrapolation of the results that may occur in the next fiscal year.
Risk analysis. It may be useful to create a different set of pro forma financial statements that reflect best-
case and worst-case scenarios for a business, so that managers can see the financial impact of different
decisions and the extent to which they can mitigate those risks.
Adjustments to GAAP or IFRS. Management may believe that the financial results it has reported under
either the GAAP or IFRS accounting frameworks are inaccurate, or do not reveal a complete picture of
the results of their business (usually because of the enforced reporting of a one-time event). If so, they
may issue pro forma financial statements that include the corrections they believe are necessary to
provide a better view of the business. The Securities and Exchange Commission takes a dim view of
this kind of adjusted reporting, and has issued regulations about it in their Regulation G.
There can be a significant problem with issuing pro forma financial statements to the public, since
they contain management's assumptions about business conditions that may vary substantially from
actual events, and which may, in retrospect, prove to be extremely inaccurate. Generally, pro forma
financial statements tend to portray a business as being more successful than it really is, and having
more financial resources available than may actually be the case. Consequently, investors should be
extremely cautious when evaluating these types of financial statements, and spend time
understanding how they differ from the issuing firm's normal financial statements.
Source: https://www.accountingtools.com/articles/what-are-pro-forma-financial-statements.html
Variance analysis
• Budget variances
• Cash flow budgets
• Monthly cash model
As far as reporting is concerned, let’s consider that in the light of Dashboarding and BI.
Budget Model
I will provide for you a series of examples that vary in complexity that you can use to explore how to
create and maintain such budgets.
We will work through a detailed example that helps us to appreciate some of the ways in which
forecasts are made from historical data. By considering the drivers of financial forecasting, we will
conclude this section having gained great insights into what can be a tricky area.
Variance analysis
• Budget variances
• Cash flow budgets
• Monthly cash model
All of these headings will be dealt with by referring to the Excel based examples that I will provide for
you.
Business Intelligence 2
• Enhancing your Power Pivot reports by using
o CUBE Functions
o DAX: Data Analytic eXpressions
CUBE functions are a different kettle of fish altogether and I will introduce them to some extent.
Again, there is a great deal to learn vis-à-vis CUBE functions, so an introduction is the best that we
can expect in a course such as this one.
DAX programming is relatively involved and takes time to master. We will scratch the surface here
but you will find it illuminating.
CUBE Functions
Cube formulas get values from the data cube and they are the
link between the Data Model and the Excel Worksheet. In this
chapter we will be using the pivot_consolidate_cubes.xlsx
file.
Notice that all of the pivot table formatting has gone and as
we click on the cells of what is left, we see the CUBE()
functions that have been created for us:
C3 and B4 contain the same column and row headers that they
had in the pivot table
B5 =CUBEMEMBER("ThisWorkbookDataModel","[ord].[Product].&[Apples]")
B6 =CUBEMEMBER("ThisWorkbookDataModel","[ord].[Product].&[Avocados]")
C4 =CUBEMEMBER("ThisWorkbookDataModel","[ord].[Regular/Organic].&[Organic]")
D4 =CUBEMEMBER("ThisWorkbookDataModel","[ord].[Regular/Organic].&[Regular]")
This tells us that column headers and row headers contain cube
members, and those cells all use the CUBEMEMBER() function.
The other cells, which contain values, use the CUBEVALUE()
function.
One question you might just have asked is: can I insert
subtotals in rows 9 and 12 of my new table, above, using Cube
Formulas? The answer is yes! Let’s take a look. To make this
work, we need to use the CUBESET() function together with the
CUBEMEMBER() function; but it is probably easier than you
think!
B9 =CUBESET("ThisWorkbookDataModel",B5:B8,"Non Citrus")
C9 =CUBEVALUE("ThisWorkbookDataModel",$B$3,C4,$B9)
D9 =CUBEVALUE("ThisWorkbookDataModel",$B$3,D4,$B9)
B12 =CUBESET("ThisWorkbookDataModel",B10:B11,"Citrus")
C12 =CUBEVALUE("ThisWorkbookDataModel",$B$3,C4,$B12)
D12 =CUBEVALUE("ThisWorkbookDataModel",$B$3,D4,$B12)
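What the CUBESET/CUBEVALUE pair computes here is simply an aggregation over a named set of members. A rough Python analogue, with the product and sales figures invented for the illustration:

```python
# Mimic CUBESET ("define a named set of members") and CUBEVALUE
# ("aggregate over that set"). The figures are invented for the example.
sales = {  # (product, type) -> $ sales
    ("Apples", "Organic"): 100, ("Apples", "Regular"): 150,
    ("Avocados", "Organic"): 80, ("Avocados", "Regular"): 60,
    ("Lemons", "Organic"): 40,  ("Lemons", "Regular"): 70,
}

def cube_set(*members):            # like CUBESET: name a group of members
    return set(members)

def cube_value(data, member_set, column):  # like CUBEVALUE: aggregate the set
    return sum(v for (product, kind), v in data.items()
               if product in member_set and kind == column)

non_citrus = cube_set("Apples", "Avocados")
print(cube_value(sales, non_citrus, "Organic"))   # subtotal for the set
```

The "Non Citrus" label in B9 plays the same role as the name given to the set here: it is a caption for a group of members over which CUBEVALUE then aggregates.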
Notice, I did not add a new row for the Pears and
Pomegranates; but you could do that and call them Other … but
you will see my answer to that in the
pivot_consolidate_cubes.xlsx file on the appropriate tab.
The Grand Total row is already in the original pivot table so
there is no need to create that row, but do notice that
B15:D15 all use the CUBEMEMBER() function.
Trace Precedents
Put your cursor in any other cell and click Trace Precedents
again to see what happens.
=CUBEVALUE("ThisWorkbookDataModel",$B$3,$B8,D$4)
Excel put the names of the measures back to what they were!
There is a solution to change the headings to what we want: we
add a Caption. Here is the CUBEMEMBER syntax and I will show
you the before and after we change the heading from Sum of $
Sale to Total Sales ($):
= CUBEMEMBER(Connection,member_Expression,[Caption])
=CUBEMEMBER("ThisWorkbookDataModel",{"[ord].[Regular/Organic].&[Organic]","
[Measures].[Sum of $ Sale]"})
We need this:
=CUBEMEMBER("ThisWorkbookDataModel",{"[ord].[Regular/Organic].&[Organic]","
[Measures].[Sum of $ Sale]"},"Total Sales ($)")
Let’s use our new table to create something new. In our new
table, I want to see, by category
You can try that for yourself before you read my answer:
No need for a CUBE function at all and the same is true of the
entries in columns H, I and J.
C16 =AVERAGE(C6:C13)
C17 =STDEV.S(C6:C13)
=CUBEVALUE("ThisWorkbookDataModel",$C$1,$B10,E$3)
Hints:
Adding Slicers
Now that we have added Slicers and CUBE functions, let’s have
a look at some of the CUBE functions in our new table:
Firstly recreate the same pivot table and table that we have
just been working on but this time, do not include Calendar
Year in the Filters area. You should see this:
Now, let’s find the name of the Occupation Slicer and add that
to the CUBEVALUE functions
Conclusion
• CUBEKPIMEMBER function: Returns a key performance indicator (KPI) property and displays the KPI name in the cell. A KPI is a quantifiable measurement, such as monthly gross profit or quarterly employee turnover, that is used to monitor an organization's performance.
• CUBEMEMBER function: Returns a member or tuple from the cube. Use to validate that the member or tuple exists in the cube.
• CUBEMEMBERPROPERTY function: Returns the value of a member property from the cube. Use to validate that a member name exists within the cube and to return the specified property for this member.
• CUBERANKEDMEMBER function: Returns the nth, or ranked, member in a set. Use to return one or more elements in a set, such as the top sales performer or the top 10 students.
• CUBESET function: Defines a calculated set of members or tuples by sending a set expression to the cube on the server, which creates the set, and then returns that set to Microsoft Excel.
• CUBESETCOUNT function: Returns the number of items in a set.
• CUBEVALUE function: Returns an aggregated value from the cube.
https://support.office.com/en-us/article/cube-functions-reference-2378132b-d3f2-4af1-896d-
• Customers
• InvoiceHeader
• InvoiceDetails
• Year
• Month
• Month Name
If you copy and paste the formula for FORMAT from the previous
example, you may well find that Power Pivot says there is an
error in it. There is an error but it is not your fault: this
is an age-old Office problem in which some characters in Word
and Power Pivot are different from the equivalent character in
Excel. To correct the error, just retype the offending
character in the formula in Power Pivot and that should solve
the problem. Note: sometimes it is not obvious which character
is at fault: trial and error works best here ☹.
Sorting
Open the Power Pivot window and with the cursor anywhere in
the data area of the InvoiceHeader table, click the Sort by
Column icon on the Home tab and complete it as follows:
The relationships for this model are shown here; notice that
the links are:
• Customers[Customer ID] to InvoiceHeader[Customer ID]
• InvoiceHeader[Invoice Number] to InvoiceDetails[Invoice Number]
Pivot Table
Keeping it very simple, we can see how the month names behave
in this screenshot of our Pivot Table:
• Double click the Customers[Discount Amount] field, close the parentheses and
then press Enter.
• Power Pivot automatically renames the column to Calculated Column 1.
• Double click on the column label and rename the column Discounts %.
• Starting in the next column, click on the first blank cell in the empty column
labelled Add Column. In the Formula bar, type =[UnitPrice]*[Quantity]*(1-
[Discounts %]) and then press Enter.
• Again, Power Pivot automatically renames the column to Calculated Column 1.
• Double click on the column label and rename the column Discounted Revenue.
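A calculated column evaluates its formula once per row. In Python terms, the Discounted Revenue column above amounts to this (the rows are invented for the illustration):

```python
# Row-by-row calculated column: =[UnitPrice]*[Quantity]*(1-[Discounts %])
# The rows below are invented for the illustration.
rows = [
    {"UnitPrice": 10.0, "Quantity": 3, "Discounts %": 0.10},
    {"UnitPrice": 25.0, "Quantity": 2, "Discounts %": 0.00},
]
for row in rows:
    row["Discounted Revenue"] = (row["UnitPrice"] * row["Quantity"]
                                 * (1 - row["Discounts %"]))

print([row["Discounted Revenue"] for row in rows])
```

This row-by-row behaviour is what distinguishes a calculated column from a measure, which is evaluated over whatever set of rows the pivot table cell is filtering.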
Imagine that you want to show the difference in unit costs between the years 2007 and 2006
for each of your customers. To do this, you have to find the sum of unit costs for 2007,
determine the sum of unit costs for 2006 and then subtract the sum of 2006 from the sum of
2007. Using calculated measures is the only way to calculate the cost difference between
2007 and 2006.
We will be using the CALCULATE function here and its syntax is:
CALCULATE(<expression>,<filter1>,<filter2>…)
where <expression> is the expression to evaluate and <filter1>, <filter2>, … are the filters that modify the context in which the expression is evaluated.
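Conceptually, CALCULATE applies its filters to the underlying rows and then evaluates the expression on what is left. A rough Python analogue, with an invented row set:

```python
# Rough analogue of DAX CALCULATE(<expression>, <filter1>, <filter2> ...):
# apply the filters to the rows, then evaluate the expression on the rest.
# The rows and column names below are invented for the illustration.
rows = [
    {"Year": 2006, "Customer": "A", "UnitCost": 100},
    {"Year": 2006, "Customer": "B", "UnitCost": 120},
    {"Year": 2007, "Customer": "A", "UnitCost": 110},
    {"Year": 2007, "Customer": "B", "UnitCost": 150},
]

def calculate(expression, rows, *filters):
    for f in filters:
        rows = [r for r in rows if f(r)]
    return expression(rows)

sum_cost = lambda rs: sum(r["UnitCost"] for r in rs)
cost_2007 = calculate(sum_cost, rows, lambda r: r["Year"] == 2007)
cost_2006 = calculate(sum_cost, rows, lambda r: r["Year"] == 2006)
print(cost_2007 - cost_2006)  # difference in unit costs between the years
```

In real DAX the filters modify the pivot table's existing filter context rather than a plain list, but the idea (filter first, then evaluate) is the same.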
Reference: Michael Alexander (2016) Power Pivot for Dummies John Wiley & Sons
Investment Decisions
• Calculating the NPV, XNPV, IRR, MIRR, XIRR, Profitability Indices and payback period
• Sensitivity Analysis for a Business Case
• Developing and Managing Scenarios
Risk Modeling
There are two examples in the file equiv_loan_method.xlsx for us to explore: I will work through one
of the two methods and you will evaluate the second example. Essentially we are dealing with Lease
v Buy situations here.
• The lessee has to decide whether, given a rental rate on the leased asset, it is preferable
to buy the asset or lease it.
• The lessor has to decide what minimum rental rate justifies the purchase of the asset in
order to lease it out.
One way of solving the lessor’s problem is to turn the preceding analysis around. We use Excel's
Goal Seek (Data > What-If Analysis > Goal Seek) to get $121,047 as the lessor’s minimum acceptable rental:
To do this:
To find the highest acceptable lease payment we use Goal Seek to do the following
To test the possibility of an asset having a residual value, we can amend the model to allow for a
residual value and see the effect it has. It could, for example, increase the IRR of the lease.
Investment Decisions
Open the file cia_example_delegates.xlsx and follow the instructions you will see there. You will
program a spreadsheet using both tables and formulas. You will also be encouraged to use some of
the inbuilt functions that are provided by Excel.
The following table contains the three mutually exclusive projects you are required to evaluate using
these methods:
• Payback
• Net Present Value (NPV)
• Profitability Index (PI)
• Internal Rate of Return (IRR)
You are required to present your results both with and without inflation, as appropriate.
r = 8.00%
Financial Functions
Open the file fin_func_delegates.xlsx to evaluate the functions you see there:
• FV(RATE,NPER,PMT,PV,TYPE)
• RATE(NPER,PMT,PV,FV,TYPE,GUESS)
• NPER(RATE,PMT,PV,FV,TYPE)
• PV(RATE,NPER,PMT,FV,TYPE)
• IPMT(rate,period,nper,pv,fv,type)
• PMT(rate,nper,pv,fv,type)
• PPMT(rate,period,nper,pv,fv,type)
Payment Functions
• IPMT: Returns the interest payment for a given period for an investment based on
periodic, constant payments and a constant interest rate.
• PMT: Calculates the payment for a loan based on constant payments and a constant
interest rate.
• PPMT: Returns the payment on the principal for a given period for an investment based
on periodic, constant payments and a constant interest rate.
• NPER: Returns the number of periods for an investment based on periodic, constant
payments and a constant interest rate.
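To see the arithmetic behind the payment functions, here is a minimal Python sketch of PMT, IPMT and PPMT for the common case of fv = 0 and end-of-period payments (the loan figures below are invented for the illustration):

```python
# Minimal versions of Excel's PMT, IPMT and PPMT for an ordinary annuity
# (fv = 0, type = 0, rate > 0). Results follow Excel's sign convention:
# payments on a borrowed amount come out negative. The loan is invented.
def pmt(rate, nper, pv):
    return -pv * rate / (1 - (1 + rate) ** -nper)

def ipmt(rate, per, nper, pv):
    # interest share of the payment in period `per` (1-based): the balance
    # after per-1 payments, times the periodic rate
    k = per - 1
    balance = pv * (1 + rate) ** k + pmt(rate, nper, pv) * (((1 + rate) ** k - 1) / rate)
    return -balance * rate

def ppmt(rate, per, nper, pv):
    return pmt(rate, nper, pv) - ipmt(rate, per, nper, pv)

# A 30-year $100,000 loan at 10% a year, paid monthly:
p = pmt(0.10 / 12, 360, 100_000)
print(round(p, 2))                                  # monthly payment
print(round(ipmt(0.10 / 12, 1, 360, 100_000), 2))   # interest, month 1
print(round(ppmt(0.10 / 12, 1, 360, 100_000), 2))   # principal, month 1
```

Note that IPMT plus PPMT for any period equals PMT, which is a handy check to build into a spreadsheet too.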
Learn why the Weighted Average Cost of Capital (WACC) is used in Capital Budgeting Models
Using the Built in functions for NPV, XNPV, IRR, XIRR and MIRR
We will use the three assumed projects from the previous section to explore these built
in functions. As we do this we will be using some or all of the following functions:
• FV(RATE,NPER,PMT,PV,TYPE)
• RATE(NPER,PMT,PV,FV,TYPE,GUESS)
• NPER(RATE,PMT,PV,FV,TYPE)
• PV(RATE,NPER,PMT,FV,TYPE)
• IPMT(rate,period,nper,pv,fv,type)
• PMT(rate,nper,pv,fv,type)
• PPMT(rate,period,nper,pv,fv,type)
• IRR(VALUES, (GUESS))
• MIRR(VALUES,FINANCE_RATE,REINVEST_RATE)
• NPV()
• XNPV()
• XIRR()
Using Net Present Value (NPV) and Internal Rate of Return (IRR) Models in Making
Capital Budgeting Decisions
Applying the functions: let’s explore the above functions by applying them to the
following mutually exclusive situations:
Cash flows
Project 1 2 3
Year 0 -10,000 -14,000 -15,000
Year 1 6,000 6,000 9,000
Year 2 6,000 6,500 7,000
Year 3 6,000 8,500 6,000
Year 4 6,000 9,000 4,500
Residuals* 0 4,000 5,300
Cost of capital 12%
Inflation 3%
Reinvestment rate 9%
* Note the residuals are to be received at the end of year 4.
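Before reaching for the built-in functions, the NPV and IRR calculations can be sketched from first principles in Python. The cash flows are the ones in the table above, with each residual value added to the Year 4 cash flow; the bisection search for IRR is a simple illustrative method, not what Excel uses internally:

```python
# NPV and IRR from first principles for the three projects above.
# Residual values are added to the Year 4 cash flow.
def npv(rate, cashflows):          # cashflows[0] is the Year 0 outlay
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    # simple bisection: assumes NPV falls from positive to negative on [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

projects = {
    1: [-10_000, 6_000, 6_000, 6_000, 6_000 + 0],
    2: [-14_000, 6_000, 6_500, 8_500, 9_000 + 4_000],
    3: [-15_000, 9_000, 7_000, 6_000, 4_500 + 5_300],
}
for name, cf in projects.items():
    print(name, round(npv(0.12, cf), 2), round(irr(cf), 4))
```

Because each project has a single initial outlay followed by inflows, the NPV curve crosses zero once and bisection finds the IRR reliably.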
Sensitivity analysis means that we take a figure and change it, or watch it change. We then take a
related or other figure and monitor any change to it. We might then see that, for example, sales
increased by 3% over the year but profits fell by 1%; or that total assets increased by 12% and cost
of goods sold rose by 8%.
We might be able to draw conclusions from these changes: we might conclude that they are highly
correlated and linked. Alternatively, any patterns we see may be spurious and not to be trusted or
repeated.
1. independent effects of a 1% increase in Gross Margin, a 1% decline in the tax rate, and a 5%
increase in Sales.
2. independent effects of a 2% increase in Gross Margin, a 2% decline in the tax rate, and a 5%
decrease in Sales.
3. composite effects of a 5% increase in Sales, a 2% decline in Gross Margin, a 5% increase in SG&A
as % of Sales, and a 2% decline in the tax rate.
4. independent effects of a 1% increase in Gross Margin, a 1% increase in the tax rate, and a 5%
increase in Sales.
5. composite effects of a 5% increase in Sales, a 2% decline in Gross Margin, a 3% increase in SG&A
as % of Sales, and a 2% decline in the tax rate.
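A tiny Python model makes what-ifs like these easy to run. The base-case figures below are invented purely to illustrate the mechanics (they are not from the course files), and the percentage changes are treated as changes in percentage points for the ratio drivers:

```python
# Sensitivity analysis on a simple income-statement model.
# Base-case figures are invented for the illustration.
def net_profit(sales, gross_margin, sga_pct, tax_rate):
    gross_profit = sales * gross_margin
    operating_profit = gross_profit - sales * sga_pct
    return operating_profit * (1 - tax_rate)

base = dict(sales=1_000_000, gross_margin=0.40, sga_pct=0.25, tax_rate=0.30)
base_profit = net_profit(**base)

# Independent effect: change one driver at a time, e.g. +5% Sales
flexed = dict(base, sales=base["sales"] * 1.05)
print(net_profit(**flexed) - base_profit)

# Composite effect: change several drivers together
composite = dict(base, sales=base["sales"] * 1.05,
                 gross_margin=base["gross_margin"] - 0.02,
                 tax_rate=base["tax_rate"] - 0.02)
print(net_profit(**composite) - base_profit)
```

The same pattern (flex one input, or several at once, and compare against the base case) is what a one- or two-input Data Table automates in Excel.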
Managing Scenarios
Developing Alternative Scenarios
Let’s consider a problem that you can use SOLVER to sort out, set out in formal linear
programming style:
What this means is, in summary, there are two products, ‘a’ and ‘b’, for us to consider. Product ‘a’
earns us £17.1667 per unit contribution (or profit) and product ‘b’ earns us £25.8667 per unit
contribution.
There are three constraints, such as raw materials, direct labour and cash; or constraints such as
skilled labour, unskilled labour and machine time.
That’s the problem: SOLVER will tell us how many units of ‘a’ to make and how many units of ‘b’ to
make in order to maximise profits: it will also tell us if we have used all of the resources available to
us.
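Under the hood, SOLVER searches the feasible region of the linear programme for the best mix. As a sketch only, the brute force below maximises the stated contributions of £17.1667 and £25.8667 per unit over whole-unit mixes; the three constraint coefficients and limits are hypothetical, invented for the illustration because the real ones are in the course spreadsheet:

```python
# Brute-force sketch of the product-mix problem: maximise contribution
# 17.1667a + 25.8667b subject to three resource constraints. The
# coefficients and limits below are hypothetical; the real ones are in
# the course spreadsheet. SOLVER does this far more efficiently.
def best_mix(max_units=200):
    best = (0.0, 0, 0)
    for a in range(max_units + 1):
        for b in range(max_units + 1):
            if (3 * a + 5 * b <= 500          # raw materials (hypothetical)
                    and 2 * a + 1 * b <= 250  # direct labour (hypothetical)
                    and 1 * a + 2 * b <= 300):  # machine time (hypothetical)
                profit = 17.1667 * a + 25.8667 * b
                if profit > best[0]:
                    best = (profit, a, b)
    return best

profit, a, b = best_mix()
print(round(profit, 2), a, b)
```

Restricting a and b to whole units here corresponds to the integer setting you will apply in SOLVER in part c) of the exercise.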
a) solve this problem using SOLVER and make a note of the profit earned; AND when Excel offers to
save your model as a scenario, say yes and type a name like scenario 1.
b) change the contribution per unit of b and ask SOLVER to rework the problem and make a note of
the profit earned; AND save this attempt as a new scenario … scenario 2.
c) with your solution to part b) you will find an answer such as x.xxxx units of a and y.yyyy units of b.
Let’s assume that we cannot make fractions of a product: use the integer setting in SOLVER to force it
to give us our answers for a and b as integers or whole numbers, and make a note of the profit
earned. Save this as scenario 3.
d) this is a profit maximisation problem: change it to a problem in which the profit MUST be, say
£xxx … rework the problem and see what happens …
The problem is not complex but it is rich in the ideas and feedback it provides. We hope you like the
scenario manager addition, by the way. By selecting
• Data
• Data Tools
• What-If Analysis
• Scenario Manager
you can compare the three scenarios you will be able to save during this exercise.
We will have covered these ideas under the heading of capital investment appraisal and so on.
We have already used Goal Seek in this course but I will introduce further examples that will be
useful to you.
Investment Decisions
• Developing the model by identifying the costs and developing the inputs
This will allow us to take capital investment appraisal forward and I will provide you with examples
and spreadsheet templates to help you here.
Risk Modelling
What is Risk?
Uncertainty can impact our decisions and actions in desirable as well as undesirable ways. If we
own shares of stock, the future price is uncertain: it may go higher, which is desirable, or it may go
lower, which is undesirable. When contemplating large payoffs or penalties, most people are risk
averse. For example, suppose we can choose between a coin toss that gains $50 or breaks even,
and a coin toss that gains $150 or loses $100. Most people would choose the first coin toss, even
though the average or 'expected' outcome of both tosses is $25. Hence, in risk analysis we usually
focus on what can go wrong (the outcomes that represent loss or damage), although a good
analysis will also help us understand what can go right as well.
• If the quantity we'd like to know is a competing firm's planned product price, uncertainty arises
from our lack of knowledge: The price may be well-known to that firm's employees, but it's
unknown to us.
• If the quantity is market demand for products like ours, uncertainty arises from the complexity
of the process: demand depends on economic factors, fashions and preferences, and our and
other firms' actions; and even if we knew all of these, we couldn't precisely calculate their net
impact on final demand.
• If the quantity is a material thickness in nanometers, uncertainty may arise from limits on
our ability to measure this physical quantity. We may also have limits on our ability to
control fabrication of the material.
• Many processes that we want to model, from the failure rate of an electronic component to the
behaviour of a macromolecule, have inherent randomness for all intents and purposes.
Uncertainty that is inherent in nature is sometimes called irreducible uncertainty. You may be
able to reduce the effect of the random variation on your model, or reduce your model's sensitivity to
this variation, but it will always be there. In other situations you may be dealing with reducible
uncertainty: through market research, physical tests, better calibration, or other means you may be
able to reduce the uncertainty itself.
Uncertainty is normally an intrinsic feature of some part of nature: it is the same for all
observers. But risk is specific to a person or company: it is not the same for all observers. The
possibility of rain tomorrow is uncertain for everyone; but the risk of getting wet is specific to me, if
(i) I intend to go outdoors and (ii) I view getting wet as undesirable. The possibility that stock A will
decline in price tomorrow is an uncertainty for both you and me; but if you own the stock long and I
do not, it is a risk only for you. If I have sold the stock short, a decline in price is a desirable
outcome for me.
Many, but not all, risks involve choices. By taking some action, we may deliberately expose
ourselves to risk, normally because we expect a gain that more than compensates us for bearing
the risk. If you and I come to a bridge across a canyon that we want to cross, and we notice signs of
weakness in its structure, there is uncertainty about whether the bridge can hold our weight,
independent of our actions. If I choose to walk across the bridge to reach the other side, and you
choose to stay where you are, I will bear the risk that the bridge will not hold my weight, but you will
not. Most business and investment decisions are choices that involve "taking a calculated risk",
and risk analysis can give us better ways to make the calculation.
If the stakes are high enough, we can and should deal with risk explicitly, with the aid of
a quantitative model. As humans, we have heuristics or "rules of thumb" for dealing with risk, but
these don't serve us very well in many business and public policy situations. In fact, much research
shows that we have cognitive biases, such as over-weighting the most recent adverse event and
projecting current good or bad outcomes too far into the future, that work against our desire to make
the best decisions. Quantitative risk analysis can help us escape these biases, and make better
decisions.
It helps to recognize up front that when uncertainty is a large factor, the best decision does not
always lead to the best outcome. The "luck of the draw" may still go against us. Risk analysis can
help us analyse, document, and communicate to senior decision makers and stakeholders the
extent of uncertainty, the limits of our knowledge, and the reasons for taking a course of action.
Risk analysis is the systematic study of uncertainties and risks we encounter in business,
engineering, public policy, and many other areas. Risk analysts seek to identify the risks faced by
an institution or business unit, understand how and when they arise, and estimate the impact
(financial or otherwise) of adverse outcomes. Risk managers start with risk analysis, then seek to
take actions that will mitigate or hedge these risks.
Some institutions, such as banks and investment management firms, are in the business of
taking risks every day. Risk analysis and management is clearly crucial for these institutions. One of
the roles of risk management in these firms is to quantify the financial risks involved in each
investment, trading, or other business activity, and allocate a risk budget across these
activities. Banks in particular are required by their regulators to identify and quantify their risks, often
computing measures such as Value at Risk (VaR), and ensure that they have adequate capital to
maintain solvency should the worst (or near-worst) outcomes occur.
One way to learn how to deal with uncertainty is to perform an experiment. But often, it is too
dangerous or expensive to perform an experiment in the "real world", so we resort to using models,
such as a scale model of an airplane in a wind tunnel. With a model, we can perform
many experiments to simulate what could happen in the real world. For example, subjecting our
model airplane to various air currents and forces, we might learn how an actual aircraft design is
likely to behave in the real world. We can introduce uncertainty into our experiments by
allowing some conditions to vary randomly. A single experiment that involves a randomly
generated condition might not tell us very much, but if we perform a simulation that consists of
many such experiments (or random trials), and collect statistics about the results, we can learn
quite a lot.
If we have the skills and software tools needed to create a mathematical model of a project or
process on a computer, we can perform a simulation with many trials in a very short time, and
at very low cost. With such advantages over experiments in the real world, it's no wonder that
computer-based simulation has become so popular. For business models, Microsoft Excel is an ideal
tool for creating such a model: and simulation software such as Frontline Systems' Risk Solver can be
used to get maximum insight from the model.
Monte Carlo simulation, named after the city in Monaco famed for its casinos and games of
chance, is a powerful mathematical method for conducting quantitative risk analysis. Monte Carlo
methods rely on random sampling, the computer-based equivalent of a coin toss, dice roll, or
roulette wheel. The numbers from random sampling are "plugged into" a mathematical model and
used to calculate outcomes of interest. This process is repeated many hundreds or thousands
of times. With the aid of software, we can obtain statistics and view charts and graphs of the
results.
Monte Carlo simulation is especially helpful when there are several different sources of
uncertainty that interact to produce an outcome. For example, if we're dealing with uncertain
market demand, competitors' pricing, and variable production and raw materials costs at the same
time, it can be very difficult to estimate the impacts of these factors, in combination, on Net
Profit. Monte Carlo simulation can quickly analyse thousands of 'what-if' scenarios, often yielding
surprising insights into what can go right, what can go wrong, and what we can do about it.
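As a minimal sketch of the idea (all distributions and figures below are invented for the illustration), a Monte Carlo simulation of Net Profit with uncertain demand, price and unit cost might look like this:

```python
# Monte Carlo simulation of Net Profit with three interacting uncertainties.
# All distributions and parameters below are invented for the illustration.
import random
import statistics

random.seed(42)  # reproducible trials

def one_trial():
    demand = random.gauss(10_000, 1_500)     # uncertain market demand
    price = random.uniform(9.0, 11.0)        # uncertain competitive price
    unit_cost = random.gauss(6.0, 0.5)       # variable production cost
    fixed_costs = 25_000
    return demand * (price - unit_cost) - fixed_costs

trials = [one_trial() for _ in range(10_000)]
print(round(statistics.mean(trials)))            # average Net Profit
print(round(statistics.stdev(trials)))           # spread of outcomes
print(sum(t < 0 for t in trials) / len(trials))  # estimated chance of a loss
```

Note that a single number (the mean) hides most of the story: the spread and the probability of a loss are exactly the kind of insight the trials are run to reveal.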
The process of risk analysis includes identifying and quantifying uncertainties, estimating their impact
on outcomes that we care about, building a risk analysis model that expresses these elements in
quantitative form, exploring the model through simulation and sensitivity analysis, and making risk
management decisions that can help us avoid, mitigate, or otherwise deal with risk.
In risk analysis, our goal is to identify each important source of uncertainty, and quantify its
magnitude as well as we can. For example, we may not know our competitor's exact price, but we
can place bounds on it, based on known production and marketing costs. While we can't predict the
exact number of people shopping at a store each day, we can examine past data for
the frequency of days when (say) 10, 20, 30, ..., 100 people shopped, and use this to estimate
a distribution of shoppers on future days. This process of identifying and quantifying uncertainties
is a key step in risk analysis.
Our next step is to accurately estimate the impact of the uncertainties on the outcomes we care
about. For example, we may not be able to predict demand for our product exactly; but given a
number for demand, since we know our costs and margins, we can often calculate the impact on our
Net Profit. We may not know the exact number of shoppers on any future day; but given a number
of shoppers, we can calculate how many store salespeople we need to service them, and estimate the
sales we're likely to generate. In doing this, we build a model that allows us to compute "outputs" (outcomes such as Net Profit) for any given "inputs".
If we can complete these steps, we'll have a risk analysis model (or simply risk model). The
model has inputs which are uncertain: these may be called uncertain variables, random variables,
assumptions, or simply inputs. For any given set of input values, the model calculates outputs:
outcomes such as Net Profit. Unlike other kinds of models, a risk analysis model requires us to think in ranges: because the inputs are uncertain and may take on many different values, the outputs are also uncertain and may take on a range of values. If management asks, "Give me a number for next year's sales", a risk analyst must respond that a single number is not going to be meaningful; it would defeat the purpose of risk analysis.
We can use our risk model in several ways, but one effective way is to explore the possible outcomes using simulation. For a model in Excel, we can perform a Monte Carlo simulation on our model. Simulation performs many (thousands of) experiments or trials: each one samples possible values for the uncertain inputs and calculates the corresponding output values for that trial. The first run of a simulation model can often yield results that are surprising to the modellers or to management, especially when there are several different sources of uncertainty that interact to produce an outcome. Even before an in-depth analysis of the results, simply seeing the range of outcomes (for example, how low and how high Net Profit can be, given our model and sources of uncertainty) can encourage a re-thinking of the risks we face, and the actions we can take.
Because a simulation yields many possible values for the outcomes we care about, from Net Profit to environmental impact, some work is needed to analyse the results. For instance, we can summarize the range of outcomes using various kinds of statistics, such as the mean or median, the standard deviation and variance, or the 5th and 95th percentile or Value at Risk. It is also very useful to create charts to help us visualize the results, such as frequency charts and cumulative frequency charts.
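As a concrete sketch (Python, with normally distributed stand-in data in place of real simulation output), these summary statistics are straightforward to compute from the list of trial values:

```python
import random
import statistics

random.seed(1)
# Stand-in for simulation output: 10,000 hypothetical Net Profit trial values
profits = [random.gauss(93_000, 60_000) for _ in range(10_000)]

mean = statistics.mean(profits)
median = statistics.median(profits)
stdev = statistics.stdev(profits)

q = statistics.quantiles(profits, n=20)   # 19 cut points at 5% steps
p5, p95 = q[0], q[-1]                     # 5th and 95th percentiles

print(f"mean {mean:,.0f}  median {median:,.0f}  sd {stdev:,.0f}")
print(f"5th pct {p5:,.0f}  95th pct {p95:,.0f}")
```

A frequency chart is then just a histogram of `profits`; a cumulative frequency chart is a histogram of the sorted running counts.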
Another powerful tool for assessing model results is sensitivity analysis, which can help us identify
the uncertain inputs with the biggest impact on our key outcomes. Using software, we can also
run multiple simulations, with an input we choose taking a different value on each simulation, and
assess the results. Analysing the model can give us more information, but also insight about our
real-world problem. For example, a tornado chart can give us a quick visual summary of the
uncertainties with the greatest positive and negative impact on Net Profit.
Tornado Chart
The payoff comes when we use our risk analysis model and simulation results to make choices
or decisions that may help us avoid or mitigate risk, or perhaps earn greater returns that help
compensate us for taking these risks. We can also compare the risk and return of different projects
or investments, and we can seek to diversify our position so that no single risk can do too much
harm. By doing this, we can practice risk management.
While we can't avoid uncertainty and risk altogether, there are often many steps we can take to
better cope with risk. Risk analysis helps us determine the right steps to take. Our next step in this
Tutorial is to take a closer look at a risk analysis model.
Intermediate Financial Modelling
Original Materials © Duncan Williamson October 2020
Page 74 of 95
Using Models for Risk Analysis
A risk analysis model could be a physical scale model, but it is most often a mathematical
model. The model can be created by writing code in a programming language, statements in a
simulation modelling language, or formulas in a Microsoft Excel spreadsheet. Regardless of how it is
expressed, a risk analysis model will include:
• Model inputs that are uncertain numbers: we'll call these uncertain variables
• Intermediate calculations as required
• Model outputs that depend on the inputs: we'll call these uncertain functions
It is essential to realize that model outputs that depend on uncertain inputs are uncertain themselves; hence we talk about uncertain variables and uncertain functions. To make use of a risk analysis model, we will test many different numeric values for the uncertain input variables, and we'll obtain many different numeric values for the uncertain output functions. We'll use statistics to analyse and summarize all the values for the uncertain functions (and, if we wish, the uncertain variables).
Since a risk analysis model will be subject to intensive computations, you'll generally want to create
the model using available risk analysis tools. An Excel spreadsheet can be a simple, yet
powerful tool for creating your model.
Model Simplification
Like all models, a risk analysis model is a simplification and approximation of reality. The art of
modelling involves choices of what essential factors must be included, and what factors may be
ignored or safely excluded from the model. As Albert Einstein suggested, a model should be "as
simple as possible, but no simpler."
We must also choose what sample values to test for the uncertain variables. While we should do
our best to choose the right sample values, we derive a great benefit simply by moving from fixed
values to almost any reasonable sample of values for an uncertain quantity.
Dr Sam Savage likes to use the analogy of shaking a ladder before you use it to climb up on a
roof. When you do this, you subject the ladder to a random set of forces, to see how it
behaves. Even though the forces when you are shaking are not distributed in the same way as the
forces when you are climbing, shaking a ladder is still a good “stress test” in advance.
Risk analysis tools are used to create a risk model, perform simulations using the model, and analyse the results.
Once we have a complete model in a form appropriate for our chosen risk analysis tool, we can
execute or "run" it, performing one or more simulations, to get results. We want the software to do
the work, since it would take many hours to run the model manually, thousands of times.
1. Choose sample values for the uncertain variables, and "plug them into" the model. Evaluate the
model: run the program, or recalculate the spreadsheet.
2. Observe and record the values of the uncertain functions.
For models with few uncertain variables, where the possible values of these variables cover a limited
range, it may be possible to run a series of trials, where we systematically "step through" the range
of each variable, in all combinations. For example, if we had two variables, X ranging from 1 to
100 and Y ranging from 10 to 50 in steps of 5, we'd perform a trial with X = 1 and Y = 10, then X =
1 and Y = 15, and so on for a total of 900 trials. But for most models of any size, this approach is
impractical: We would need millions or even billions of trials, and running them all might not actually
tell us very much.
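The 900-trial count in the example above can be verified by enumerating the grid directly (a quick Python check):

```python
# Exhaustively "step through" X = 1..100 and Y = 10, 15, ..., 50
trial_count = 0
for x in range(1, 101):            # 100 values of X
    for y in range(10, 55, 5):     # 9 values of Y
        trial_count += 1           # the model would be evaluated at (x, y) here
print(trial_count)                 # 100 * 9 = 900 trials
```

With ten such variables instead of two, the same grid would already need 100 × 9⁹ ≈ 39 billion evaluations, which is why exhaustive stepping does not scale.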
Hence, simulation normally relies on random sampling of values for the uncertain variables: in step 1 above, we draw one or more random numbers (analogous to flipping a coin or rolling dice on the computer) and we use these numbers to randomly select sample values from the range of possible values (the distribution) of each uncertain variable. If we do this effectively (using high-quality random number generation and sampling methods), we can obtain excellent "coverage" of the possible values and model outcomes, even if the model has many uncertain variables.
A simulation can be run for as many trials as you specify. To obtain more accurate results, you must run more trials, so there is a trade-off between the accuracy of the results and the time taken to run the simulation. But there are several ways you can improve this trade-off, and obtain good accuracy in a limited amount of time:
• Ensure that each trial runs as fast as possible. If you create your model in a compiled programming language, it can execute trials at the fastest possible speed, but this usually requires more development time. Running a model in Excel is slower, since your Excel formulas must be interpreted on each simulation run.
• Use a sampling method that provides better coverage of the possible values of variables, and
lower variance (higher accuracy) for the outcomes, than standard Monte Carlo sampling for the
same number of trials.
• For demanding applications, such as those found in quantitative finance, use multiple streams of random numbers in combination with a high-quality, long-period random number generator to minimize any dependencies between samples for your uncertain variables.
Simulation Techniques
Business Intelligence 3
A discrete random variable can be specified by a list of values and the probability of occurrence for
each value of the random variable. Because a continuous random variable can assume an infinite
number of values, you can't list the probability of occurrence for each value of a continuous random
variable. A continuous random variable is completely described by its probability density function. For
example, the probability density function for a randomly chosen person's IQ is shown in Figure 62-2.
A set of random variables is independent if knowledge of the values of any subset of them tells you nothing about the values of the other random variables. For example, the number of games won by
the Indiana University football team during a year is independent of the percentage return on
Microsoft during the same year. Knowing that Indiana did very well would not change your view of
how Microsoft stock did during the year.
On the other hand, the returns on Microsoft stock and Intel stock are not independent. If you are told
that Microsoft stock had a high return in one year, in all likelihood, computer sales were high, which
tells you that Intel probably had a good year as well.
Problems
2 Compute the mean, variance, and standard deviation of the number of dots showing when a die is
tossed.
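As a check on Problem 2 (outside Excel), the mean and variance of a fair die follow directly from the definition of expected value over six equally likely faces:

```python
from fractions import Fraction

faces = range(1, 7)
p = Fraction(1, 6)                               # fair die: each face equally likely

mean = sum(v * p for v in faces)                 # E[X] = 7/2 = 3.5
var = sum((v - mean) ** 2 * p for v in faces)    # Var[X] = 35/12
std = float(var) ** 0.5                          # about 1.708

print(mean, var, round(std, 3))
```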
4 The current price of a company’s stock is $20. The company is a takeover target. If the takeover is
successful, the company’s stock price will increase to $30. If the takeover is unsuccessful, the stock
price will drop to $12. Determine the range of values for the probability of a successful takeover that
would make it worthwhile to purchase the stock today. Assume your goal is to maximize your
expected profit. Hint: use the Goal Seek command.
5 When a roulette wheel is spun, the possible outcomes are 0, 00, 1, 2, ... ,36. If you bet on a
number coming up you win $35 if your number comes up and you lose $1 otherwise. What is the
mean and standard deviation of your winnings on a single play of the game?
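Problem 5 can be checked the same way: an American wheel has 38 equally likely slots (0, 00, 1–36), so a $1 single-number bet wins $35 with probability 1/38 and loses $1 otherwise:

```python
from fractions import Fraction

# (winnings, probability) for a $1 single-number roulette bet
outcomes = [(35, Fraction(1, 38)), (-1, Fraction(37, 38))]

mean = sum(x * p for x, p in outcomes)                  # -1/19, about -$0.053
var = sum((x - mean) ** 2 * p for x, p in outcomes)
std = float(var) ** 0.5                                 # about $5.76

print(float(mean), round(std, 2))
```

Note the shape of the result: a small negative mean (the house edge) with a standard deviation roughly a hundred times larger, which is exactly the kind of situation where looking only at averages misleads.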
Reference: Wayne Winston (2011) Microsoft Excel 2010: Data Analysis and Business Modeling, Microsoft Press
Imagine you are the marketing manager of a firm that is planning to introduce a new product. You
need to estimate the first year net profit from this product, which will depend on:
• Sales volume in units
• Price per unit
• Unit cost
• Fixed costs
Net profit will be calculated as Net Profit = Sales Volume * (Selling Price - Unit Cost) - Fixed Costs. Fixed costs (for overhead, advertising, etc.) are known to be $120,000. But the other factors
all involve some uncertainty. Sales volume (in units) can cover quite a range, and the selling price
per unit will depend on competitor actions. Unit costs will also vary depending on vendor prices and
production experience.
Uncertain Variables
To build a risk analysis model, we must first identify the uncertain variables -- also called random
variables. While there's some uncertainty in almost all variables in a business model, we want to
focus on variables where the range of values is significant.
Based on your market research, you believe that there are equal chances that the market will
be Slow, OK, or Hot.
• In the Slow market scenario, you expect to sell 50,000 units at an average selling price of
$11.00 per unit.
• In the OK market scenario, you expect to sell 75,000 units, but you'll likely realize a lower
average selling price of $10.00 per unit.
• In the Hot market scenario, you expect to sell 100,000 units, but this will bring in competitors
who will drive down the average selling price to $8.00 per unit.
As a result, you expect to sell 75,000 units on average (i.e. (50,000 + 75,000 + 100,000)/3 = 75,000) at an average selling price of $9.67 per unit (i.e. ($11 + $10 + $8)/3 = $9.67).
Unit Cost
Another uncertain variable is Unit Cost. Your firm’s production manager advises you that unit costs
may be anywhere from $5.50 to $7.50, with a most likely cost of $6.50. In this case, the most
likely cost is also the average cost.
Uncertain Functions
Net Profit
Our next step is to identify uncertain functions -- also called functions of a random variable. Recall
that Net Profit is calculated as Net Profit = Sales Volume * (Selling Price - Unit cost) - Fixed
costs. However, Sales Volume, Selling Price and Unit Cost are all uncertain variables, so Net Profit
is an uncertain function.
Intuition might suggest that plugging the average value of our uncertain inputs (Sales Volume, Selling
Price, and Unit Cost) into our model should produce the average value of the output (Net
Profit). However, as we’ll see in a moment, the Net Profit figure of $117,750 calculated by this
model, based on average values for the uncertain factors, is quite misleading. The true average Net
Profit is closer to $93,000! As Dr. Sam Savage warns, "Plans based on average assumptions will
be wrong on average."
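The gap between the two figures is easy to reproduce. The sketch below re-implements the handout's model in Python (plugging in the mean unit cost of $6.50 for simplicity): the "flawed average" evaluates the model once at the average inputs, while the true average averages the model's output over the three scenarios:

```python
# Three equally likely market states: (sales volume, selling price)
scenarios = [(50_000, 11.00), (75_000, 10.00), (100_000, 8.00)]
UNIT_COST, FIXED = 6.50, 120_000   # mean unit cost; known fixed costs

def net_profit(volume, price):
    return volume * (price - UNIT_COST) - FIXED

# Flawed average: evaluate the model once at the average inputs
avg_volume = sum(v for v, _ in scenarios) / 3   # 75,000 units
avg_price = sum(p for _, p in scenarios) / 3    # $9.67 per unit
flawed = net_profit(avg_volume, avg_price)      # $117,500 ($117,750 with the price rounded to $9.67)

# True average: evaluate the model in each scenario, then average the outputs
true_avg = sum(net_profit(v, p) for v, p in scenarios) / 3   # $92,500

print(round(flawed), round(true_avg))
```

The $25,000 gap arises because Sales Volume and Selling Price move in opposite directions across the scenarios, so the average of the products is not the product of the averages.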
To turn the spreadsheet model on the previous page into a risk analysis model, we need to replace
the fixed average Sales Volume, Selling Price, and Unit Cost amounts with variable amounts that
reflect their uncertainty.
Since there are equal chances that the market will be Slow, OK, or Hot, we want to create
an uncertain variable that selects among these three possibilities by generating a random number -
- say 1, 2 or 3 -- with equal probability. We could then associate the number 1 with the "Hot Market"
state, the number 2 with the "OK Market" state, and the number 3 with the "Slow Market" state. We
can generate these numbers easily in Risk Solver using an integer uniform probability distribution.
We'll then base the Sales Volume and Selling Price on the value of this uncertain variable.
With cell F4 selected, we click the Distributions button on the Risk Solver Ribbon and then
select IntUniform from the Discrete distributions gallery (as shown below).
When we click the Save icon in the dialog toolbar, the formula =PsiIntUniform(1,3) is written to
cell F4. F4 is now an uncertain variable. (Note that you could also type the
formula =PsiIntUniform(1,3) in cell F4.) If we now repeatedly press function key [F9] to
recalculate the spreadsheet several times, a different value -- either 1, 2 or 3 -- appears in cell F4
with each recalculation.
Next, we will use the value of 1, 2 or 3 in cell F4 to display the appropriate Sales Volume and Selling Price values in cells F5 and F6, respectively, associated with each market state. With cell F5 selected, we enter the formula =CHOOSE(F4,B4,B5,B6). This will cause F5 to return 100,000, 75,000, or 50,000, depending on the value in F4 (which will be 1, 2 or 3). Next, with cell F6 selected, we enter a similar CHOOSE formula over the Selling Price values. This will cause F6 to return $8, $10 or $11, depending on the value in F4 (again, 1, 2, or 3). Notice
that the values returned by F5 and F6 are related or correlated: Higher sales volume is
accompanied by lower selling prices, and vice versa. If we allowed scenarios with 100,000 units sold
at $11 each, our model would be unrealistic. Risk Solver supports more versatile ways to specify
correlation between uncertain variables, but this approach is easy to understand in this example.
With cell F7 (representing Unit Cost) selected, from the Risk Solver Ribbon we click
the Distributions button and then select Triangular from the Common distributions
gallery. (Unlike a discrete distribution, a sample drawn from a continuous distribution can
be any numeric value, such as 5.8 or 6.01, in a range.)
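Python's standard library happens to include the same triangular distribution, which makes it easy to get a feel for its behaviour outside Risk Solver (a sketch, not the Risk Solver function itself):

```python
import random

random.seed(7)
# Unit Cost: triangular with low $5.50, most likely (mode) $6.50, high $7.50
samples = [random.triangular(5.50, 7.50, 6.50) for _ in range(10_000)]

# Every sample is a continuous value somewhere in the range, e.g. 5.8 or 6.01
assert all(5.50 <= s <= 7.50 for s in samples)

# This triangle is symmetric, so its mean equals its mode: $6.50
approx_mean = sum(samples) / len(samples)
print(round(approx_mean, 2))
```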
We’ve now defined the uncertain variables in our risk analysis model. Anything calculated from these
uncertain variables is an uncertain function, but usually we’re interested only in specific results
such as Net Profit in cell F10. When we “turn on” Interactive Simulation, F10 will effectively hold
an array of values, each one calculated from different values sampled for F4, F5, F6 and F7.
What would we like to know about the array of values for Net Profit at cell F10? The simplest summary result is the average (or mean) Net Profit. Note that this will be the average Net Profit across 1,000 or more scenarios or trials -- not a single calculation from average values of the inputs. With cell F10 selected, we click the Results button on the Risk Solver Ribbon and then click the Statistics option, which shows us the available statistics functions; we can "drag and drop" these into a worksheet cell.
With an old-fashioned simulation software package, you'd press a button to start a simulation, then
perhaps get a cup of coffee. Because simulations ran slowly, software packages were designed for
"batch" operation: You'd spend time getting everything set up just right, run a simulation and wait
(sometimes quite a while), then spend time analysing the results. But with Risk Solver, simulations
run so fast that fully Interactive Simulation is practical. To turn on Interactive Simulation, we
simply click the Simulate light bulb on the Ribbon. It will “light up,” as shown below. In the blink
of an eye, your first Monte Carlo simulation is complete!
From now on (until we click the light bulb again to turn it off), 1,000 Monte Carlo simulation trials
(the default number) will be run each time you change the spreadsheet, and cell F11 will display
the average Net Profit for these 1,000 trials:
We've turned our spreadsheet model into a risk analysis model, and we've turned on Interactive
Simulation. One immediate insight is that the mean or true average Net Profit, over 1,000
different simulated outcomes, is much less than we expected from our naïve “Flawed Average”
model. We’ve also seen that in some outcomes, our Net Profit is actually a loss.
A quick look at the dropdown galleries for Statistic, Risk Measure and Range on the Risk Solver
Ribbon suggests that we can easily compute and view many other statistics about Net Profit. But
we’d really like to see the full range of outcomes in this model. This is very easy to do in Risk
Solver. With Interactive Simulation turned on, simply move the mouse pointer to F10 and wait
about 1 second. A miniature, live frequency distribution chart of the simulation trial values for cell
F10 appears automatically:
From this graph, we immediately see that in some cases we can lose a lot of money -- more than
$50,000! But we also see that we make a profit most of the time. The vertical red line at $0 (which
we can click and drag to different positions on the chart) allows us to immediately determine the
percentage of outcomes in our simulation that fell below and above the chosen value. So at the top
of the chart, we see that 7.3% of the trials produced net profit values less than $0 while 92.7% of
the trials resulted in positive net profits.
So far in our business forecast risk model, we've looked at charts of the full range of Net Profit
outcomes, in the form of a frequency bar chart. As shown below, the Statistics pane in the
Uncertain Function dialog provides a variety of statistics for the current set of simulation trials. For
instance, we see that the minimum and maximum net profit values were -$66,869 and $211,052,
respectively. (These same statistics could be computed in the spreadsheet using PsiMin(F10) and PsiMax(F10), respectively.) The Value at Risk 95% statistic shows that we have a 5% chance of making $173,752 or more, and the Conditional Value at Risk 95% statistic shows that the average of all the observed Net Profit values below the VaR 95% level is $87,454. (These same statistics could be computed in the spreadsheet using PsiBVar(F10,0.95) and PsiCVar(F10,0.95), respectively.)
Also note that an Upper Cutoff value of 175 has been entered in the Statistics pane (above),
resulting in a second red vertical line in the chart at this value. The top of the chart now indicates
that 4.6% of the simulation trials produced net profit values in excess of $175,000 -- and 88.1% of
the net profit values fell between $0 and $175,000.
Another view of the full range of outcomes is shown on the Cumulative Frequency tab, shown below. Each bar on this tab shows the frequency with which Net Profit was less than or equal to the value on the horizontal axis.
We now have a pretty good idea of the full range of outcomes for Net Profit. In a conventional
what-if spreadsheet model, we'd now be asking "How can we increase Net Profit?" But with a risk
analysis model, we can also ask and answer questions like "How can we reduce the chance of a
loss?" and "How can we reduce the variability of Net Profit?" Sensitivity analysis can help us
answer all these questions, as discussed on the next page.
Sensitivity Analysis
We've converted our spreadsheet model to a risk analysis model, run a Monte Carlo simulation
with Risk Solver, and examined the full range of outcomes for Net Profit through statistics and
percentiles, charts and graphs. Now we can begin to take steps towards risk management: Using
the model to determine how we can reduce the chance of a loss -- and increase the chance of a
(larger) profit.
The Sensitivity tab in the Uncertain Function displays a Tornado chart that shows you how much
Net Profit (cell F10) changes with a change in the uncertain variables -- our integer uniform
distribution at cell F4, and the triangular distribution for Unit Cost at cell F7. In this model there are
only two uncertain variables, but in a large model with many such variables, it’s usually not obvious
which ones have the greatest impact on outcomes such as Net Profit. A Tornado chart highlights the
key variables, as shown below:
Since we used =CHOOSE(F4,B4,B5,B6) for Sales Volume, on each trial where F4 is 1 we use 100,000
units sold; when F4 is 2 we use 75,000 units sold; and when F4 is 3 we use 50,000 units sold. The
fact that F4 is positively correlated with Net Profit is telling us that we make higher profits when
the market is slow, not when it's hot. Our typical Selling Price is lower when the market is hot, and
the increased Sales Volume doesn't make up for this.
The state of the market is outside of our company and our direct control. But our production
costs, while variable and uncertain, are more subject to our control. Having seen that our Net Profit
suffers in a hot market because of the narrow margin we have between our typical Selling Price and
our Unit Cost, we're motivated to try to improve this situation. We want to ask "what if our
(uncertain) Unit Costs could be reduced?" -- in the presence of our uncertain Sales Volume and
Selling Price. Risk Solver empowers us to ask and answer exactly this question, as shown on the
next page.
Interactive Simulation with Charts and Graphs
Interactive Simulation makes Risk Solver fundamentally different from other Monte Carlo
simulation tools for Excel. The kinds of charts we’ve just seen can be produced by other tools, but
only at the end of a “simulation run.” In contrast, Risk Solver makes these charts live as you play
what-if with your model.
After seeing this model, your production manager might think of a way to reduce the maximum Unit
Cost to $7.00 instead of $7.50. What would be the impact of this change on Net Profit, over
the full range of outcomes? With Risk Solver, this is as easy as any other 'what-if' question in
Excel. We click the Frequency tab in the Uncertain Function to re-display the frequency chart of
outcomes for cell F10. Then simply change the number in cell B11 -- the high end of the Unit Cost
distribution -- from 7.50 to 7.00. (The other parameters of our triangular distribution -- low $5.50,
most likely $6.50 -- haven't changed.) Immediately, a thousand Monte Carlo trials are performed,
and the chart is updated. The effect is striking: We have a 98.4% chance of making a profit, and --
checking the Statistics pane -- we see that instead of a worst-case loss of -$66,869, we have a worst-
case loss of only -$15,632!
The original spreadsheet 'Flawed Average' model presented a limited and misleading picture of this
business situation. In contrast, the Risk Solver model has illuminated the situation
considerably. We can see what can go right, and what can go wrong. We can make an informed
decision about whether the reward is worth the risk. And -- most important -- we can interactively
explore ways to improve the reward and reduce the risk. This is risk analysis and risk
management at work.
We've now progressed all the way through the process of building a simulation model and
using Monte Carlo simulation for risk analysis and decision-making. But most often, completing
the analysis doesn't mean we are finished -- we must present our results to others. And this
example model was very small -- most realistic models will contain many more uncertain variables
and functions, statistics, correlations and other features. We'll explore how Risk Solver can help us
with these issues in two concluding pages.
Charts and Graphs for Presentations
Often, you may be called upon to present your results to others. With Risk Solver, one great way to do this is in Excel itself, live! But you can also quickly create high-quality charts and graphs of your
results, print them, or copy and paste them into PowerPoint or any Windows application. You can
control chart color, dimensionality and transparency, bin density, titles and legends, axis labels and
number formats, horizontal axis scaling, and more.
Click the Clipboard icon to copy the currently displayed chart to the Windows Clipboard. You can
then choose Edit Paste in Word, Excel, PowerPoint and many other applications to paste the chart
image into your document. Click the Print icon to immediately print the currently displayed chart on
your default printer, or click the down arrow next to this icon to display the menu choices shown
above. You can choose a printer and set printer options, set page margins, and preview your output
using these menu choices.
In this tutorial, we’ve used Risk Solver to define two uncertain variables (an integer uniform
distribution to model the Market State and a triangular distribution to model the Unit Cost), one
uncertain function (Net Profit), and one summary statistic (the Mean or True Average). In a larger
model, we might define a great many uncertain variables and functions, statistics, and correlation
matrices on a worksheet. They might even be spread across multiple worksheets in a
workbook. How do we find them? Since all of Risk Solver's functions start with 'Psi', we could use Excel's Find command to search for them, but there is an easier way.
Clicking the Model button on the Risk Solver Ribbon toggles on and off the Risk Solver task pane
(shown below) that provides a summary of your entire risk analysis model in outline form. You
can expand and collapse outline groups by clicking the + or - icons that appear in this display. You
can quickly locate cells containing uncertain variables, uncertain functions, statistics, or correlation
matrices.
A single-click on any of the cells displayed in the task pane will display the properties associated
with the selection at the bottom of the task pane. For instance, with cell F7 selected in the Uncertain
Variables section of the task pane, the Formula and Distribution settings associated with this
cell are displayed. This provides a very convenient way to inspect, verify and adjust settings for your
simulation model.
Conclusion
This concludes our Monte Carlo Simulation Tutorial -- we hope you've enjoyed it! If you haven't done
so already, consult our Risk Analysis Tutorial -- it's designed to sharpen your thinking about
uncertainty and risk, and how to identify and quantify the uncertainties you face. For background on
simulation analysis and simulation models, consult our Simulation Introduction.
http://www.solver.com/monte-carlo-simulation-example
For this section you will be given various files, worksheets and data sets on which to base your BI
analysis. You will also be asked to import data using both Excel and Power BI.
Importing data often brings with it dirty data: one of our key tasks will be to clean the dirty data so
that we can use it without let or hindrance!