By James Bell
We are going to go over eight different ways of measuring demand forecast error. We end with one of the most popular forecasting error metrics, but for better understanding we start simple and work up to it. Keep in mind that budgeting and forecasting are a bit different. Forecasting is part of budgeting, but the scope is different. Budgeting is used more as a target or tool of control, whereas forecasting has a much broader use.
You can use these measurements as error analysis for all different types of forecasting, including production, finance, inventory, and more. The accuracy and performance of your forecasts are measured by their errors. Using these metrics not only helps you scientifically improve your forecasting, it also makes forecasting less abstract through scenario modeling: only by creating multiple forecasts can we compare them against each other.
Measuring forecasting error is extremely important. If what actually happens doesn’t match what you expected to happen, you want to know why. Many analysts report the discrepancy and then seek its driver without being aware that there may be problems or deficiencies in their forecasting methodology. One of the greatest benefits of understanding the drivers of forecast errors is being able to forecast better and with more confidence.
You’ll notice that these formulas sit in one of two categories:
Bias errors are the ones where we do not square the error or take its absolute value. Many consider these the worst type of errors: when a high bias exists, it means that there is something systematically wrong with your forecasting. Positive and negative error values likely both exist and cancel each other when added together, so a sum well above zero is a sign that we systematically forecasted high. Inversely, a negative bias means that we systematically forecasted low.
Random errors are the ones where we do square the error or use absolute values. I like to think of random errors as the volatility or variability within our forecasts. Random errors are a bit trickier and can feel unpredictable. As an analyst, you’ll want to look at what is driving the errors to get a better idea of what is going on. Keep in mind that forecasting errors are normal; it is somewhat challenging to perfectly predict the future.
This is the basis of many of our further error measures. I’ve seen organizations use this measure and stop here.
This is essentially what you thought it would be minus what actually happened. You will also see this with a little subscript “t”, which denotes the error for each forecast period; we could easily add dimensions such as account, segment, or country. If you have two forecasts, then you have two errors, one for each forecast.
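As a minimal sketch, the per-period error can be computed from a forecast series and an actual series. The values below are hypothetical, chosen only for illustration:

```python
# Per-period forecast error: E_t = F_t - A_t (forecast minus actual).
# The forecast and actual values are hypothetical, for illustration only.
forecast = [100, 102, 98, 105]  # F_t: what we thought it would be
actual = [102, 100, 95, 107]    # A_t: what really happened

errors = [f - a for f, a in zip(forecast, actual)]
print(errors)  # [-2, 2, 3, -2]
```

Note the signs: a negative error means we forecasted low for that period, a positive error means we forecasted high.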
This is what we thought it would be, our forecast.
This is our “actual” amount that really happened.
This is what we call our bias. We want this number to be as close to zero as we can get it. Your actual data will likely show that some forecasts were too high and some were too low. CFE tells us whether we tend to forecast high (positive), low (negative), or just right (zero).
This is the sum of all the errors that we calculated in the prior section.
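A minimal sketch of CFE, using hypothetical forecast and actual values:

```python
# Cumulative Forecast Error (CFE): the sum of the per-period errors.
# The forecast and actual values are hypothetical, for illustration only.
forecast = [100, 102, 98, 105]
actual = [102, 100, 95, 107]

errors = [f - a for f, a in zip(forecast, actual)]
cfe = sum(errors)
print(cfe)  # 1 -> a slight tendency to forecast high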
This is also a mean bias error metric where we take the formula from above and divide it by the total number of forecasts to get the average error per forecast.
In statistics, sample means are denoted with a bar, so we call this metric E-bar.
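A minimal sketch of the mean bias error (E-bar), using hypothetical data:

```python
# Mean bias error (E-bar): the sum of errors divided by the number of forecasts.
# The forecast and actual values are hypothetical, for illustration only.
forecast = [100, 102, 98, 105]
actual = [102, 100, 95, 107]

errors = [f - a for f, a in zip(forecast, actual)]
e_bar = sum(errors) / len(errors)
print(e_bar)  # 0.25 -> on average each forecast is 0.25 units high
```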
This is the same standard deviation that you see in descriptive statistics; it shows the variability of the error. Assuming the distribution of errors follows a bell-shaped Gaussian curve, the 68-95-99.7 rule tells us that 68% of values fall within 1 standard deviation of the mean, 95% within 2 standard deviations, and so on.
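A minimal sketch using Python's standard library, again with hypothetical data:

```python
import statistics

# Standard deviation of the forecast errors.
# The forecast and actual values are hypothetical, for illustration only.
forecast = [100, 102, 98, 105]
actual = [102, 100, 95, 107]

errors = [f - a for f, a in zip(forecast, actual)]
# statistics.stdev computes the sample standard deviation (divides by n - 1).
sigma_e = statistics.stdev(errors)
print(round(sigma_e, 2))  # 2.63
```

Note that `statistics.stdev` is the sample standard deviation; use `statistics.pstdev` if you treat the errors as the full population.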
This is another statistic that describes the variability in forecast error; you may recognize it if you are familiar with linear regression. Squaring the error makes every term non-negative, so MSE is always positive (or zero). This removes the offsetting of positive and negative values that we saw with CFE. We want this number as close to zero as possible. Because squaring can create very large numbers, outliers will pull the mean higher.
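A minimal sketch of MSE, using hypothetical data:

```python
# Mean Squared Error: the average of the squared per-period errors.
# The forecast and actual values are hypothetical, for illustration only.
forecast = [100, 102, 98, 105]
actual = [102, 100, 95, 107]

squared_errors = [(f - a) ** 2 for f, a in zip(forecast, actual)]
mse = sum(squared_errors) / len(squared_errors)
print(mse)  # 5.25
```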
You can calculate the Root Mean Squared Error (RMSE) by taking the square root of the MSE. This returns the metric to the original units of the data and reduces some of the exaggeration of MSE caused by outliers.
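A minimal sketch of RMSE, with the same kind of hypothetical data:

```python
import math

# Root Mean Squared Error: the square root of the MSE.
# The forecast and actual values are hypothetical, for illustration only.
forecast = [100, 102, 98, 105]
actual = [102, 100, 95, 107]

mse = sum((f - a) ** 2 for f, a in zip(forecast, actual)) / len(forecast)
rmse = math.sqrt(mse)
print(round(rmse, 2))  # 2.29
```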
MAD is great when you want to look at a single forecast or a low volume of forecasts. It is a very common statistic for analyzing individual forecasting error. When aggregating MAD, make sure that your units are consistent. Exponential smoothing can be used with MAD, which we will cover in a future article.
Absolute Value of Error
The bars in the | | notation tell us that this is an absolute value. One way to get an absolute value is to take a number, square it, and then take the square root; you could also simply take the error and make it positive if it’s negative. We do not want the positives and negatives to negate one another with MAD.
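A minimal sketch of MAD, using hypothetical data:

```python
# Mean Absolute Deviation: the average of the absolute per-period errors.
# The forecast and actual values are hypothetical, for illustration only.
forecast = [100, 102, 98, 105]
actual = [102, 100, 95, 107]

abs_errors = [abs(f - a) for f, a in zip(forecast, actual)]
mad = sum(abs_errors) / len(abs_errors)
print(mad)  # 2.25
```

Compare this with the MSE above: MAD stays in the original units and weights all errors equally, rather than amplifying the large ones.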
This gives us the average forecast error as a percentage. If we get a value of 5%, the error is approximately 5% of the actual demand. Many organizations really like MAPE and may use it exclusively, since communicating percentages makes sense to people.
There are some things to consider when using MAPE. When aggregating multiple MAPEs, be consistent with your units, and consider breaking forecasts out into categories so the aggregation makes more sense. MAPE is very sensitive to scale: it is undefined when an actual value is zero and can explode when actuals are small. Seriously avoid using it if you have a low volume of data, where you can see very extreme and odd values.
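A minimal sketch of MAPE, using hypothetical data with non-zero actuals:

```python
# Mean Absolute Percentage Error: average of |error| / actual, as a percentage.
# The forecast and actual values are hypothetical; actuals must be non-zero,
# since each term divides by the actual value.
forecast = [100, 102, 98, 105]
actual = [102, 100, 95, 107]

pct_errors = [abs(f - a) / a for f, a in zip(forecast, actual)]
mape = 100 * sum(pct_errors) / len(pct_errors)
print(round(mape, 2))  # 2.25 -> errors average about 2.25% of actual demand
```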
There is a lot of information here to digest. There are many different ways that these forecast errors can be used, and we hope they help you forecast better. Hopefully you now understand the different methods and are able to apply them effectively. Don’t feel that you need to overcrowd your messaging by using all of these metrics at once. When communicating to higher levels of management, the clearer and simpler your message, the better it will be heard.