Time Series Forecasting Performance Measures With Python
Time series prediction performance measures provide a summary of the skill and capability of the
forecast model that made the predictions.
There are many different performance measures to choose from. It can be confusing to know which
measure to use and how to interpret the results.
In this tutorial, you will discover performance measures for evaluating time series forecasts with
Python.
Time series forecasting generally focuses on the prediction of real values, making it a regression problem. Therefore, the performance measures in this tutorial will focus on methods for evaluating real-valued predictions.
After completing this tutorial, you will know:
Basic measures of forecast performance, including residual forecast error and forecast bias.
Time series forecast error calculations that have the same units as the expected outcomes, such as mean absolute error.
Widely used error calculations that punish large errors, such as mean squared error and root mean squared error.
Discover how to prepare and visualize time series data and develop autoregressive forecasting models in my new book, with 28 step-by-step tutorials and full Python code.
Photo by Tom Hall, some rights reserved.
Forecast Error (or Residual Forecast Error)
The forecast error is calculated as the expected value minus the predicted value. This is called the residual error of the prediction.

forecast_error = expected_value - predicted_value

The forecast error can be calculated for each prediction, providing a time series of forecast errors.
The example below demonstrates how the forecast error can be calculated for a series of 5 predictions
compared to 5 expected values. The example was contrived for demonstration purposes.
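A minimal sketch of such an example, using the same contrived expected and predicted values that reappear in the bias example later in this tutorial:

expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
# residual forecast error: expected value minus predicted value
forecast_errors = [expected[i] - predictions[i] for i in range(len(expected))]
print('Forecast Errors: %s' % forecast_errors)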
Running the example calculates the forecast error for each of the 5 predictions. The list of forecast
errors is then printed.
The units of the forecast error are the same as the units of the prediction. A forecast error of zero
indicates no error, or perfect skill for that forecast.
Mean Forecast Error (or Forecast Bias)
The mean forecast error is calculated as the average of the forecast error values.

mean_forecast_error = mean(forecast_error)

Forecast errors can be positive and negative. This means that when the average of these values is calculated, an ideal mean forecast error would be zero.
A mean forecast error value other than zero suggests a tendency of the model to over forecast (negative error) or under forecast (positive error). As such, the mean forecast error is also called the forecast bias.
The forecast bias can be calculated directly as the mean of the forecast errors. The example below demonstrates how the mean of the forecast errors can be calculated manually.
expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
# forecast error for each prediction: expected minus predicted
forecast_errors = [expected[i] - predictions[i] for i in range(len(expected))]
# forecast bias is the mean of the forecast errors
bias = sum(forecast_errors) * 1.0 / len(expected)
print('Bias: %f' % bias)
Running the example prints the mean forecast error, also known as the forecast bias.
In this case the result is negative, meaning that we have over forecast.
Bias: -0.100000
The units of the forecast bias are the same as the units of the predictions. A forecast bias of zero, or a
very small number near zero, shows an unbiased model.
Mean Absolute Error
The mean absolute error, or MAE, is calculated as the average of the forecast error values, where all of the forecast values are forced to be positive.
Forcing values to be positive is called making them absolute. This is signified by the absolute function abs() or shown mathematically as two pipe characters around the value: |value|.

mean_absolute_error = mean(abs(forecast_error))

Where abs() makes values positive, forecast_error is one or a sequence of forecast errors, and mean() calculates the average value.
We can use the mean_absolute_error() function from the scikit-learn library to calculate the mean
absolute error for a list of predictions. The example below demonstrates this function.
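A minimal sketch of such an example, using the scikit-learn mean_absolute_error() function and the same contrived values as the bias example above:

from sklearn.metrics import mean_absolute_error

expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
# mean absolute error between expected and predicted values
mae = mean_absolute_error(expected, predictions)
print('MAE: %f' % mae)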
Running the example calculates and prints the mean absolute error for a list of 5 expected and
predicted values.
MAE: 0.140000
These error values are in the original units of the predicted values. A mean absolute error of zero
indicates no error.
Mean Squared Error
The mean squared error, or MSE, is calculated as the average of the squared forecast error values. Squaring the forecast error values forces them to be positive; it also has the effect of putting more weight on large errors.
mean_squared_error = mean(forecast_error^2)
We can use the mean_squared_error() function from scikit-learn to calculate the mean squared error for
a list of predictions. The example below demonstrates this function.
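A minimal sketch of such an example, using the scikit-learn mean_squared_error() function and the same contrived values as above:

from sklearn.metrics import mean_squared_error

expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
# mean squared error between expected and predicted values
mse = mean_squared_error(expected, predictions)
print('MSE: %f' % mse)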
Running the example calculates and prints the mean squared error for a list of expected and predicted
values.
MSE: 0.022000
The error values are in squared units of the predicted values. A mean squared error of zero indicates
perfect skill, or no error.
It can be transformed back into the original units of the predictions by taking the square root of the
mean squared error score. This is called the root mean squared error, or RMSE.
rmse = sqrt(mean_squared_error)
This can be calculated by using the sqrt() math function on the mean squared error calculated using the
mean_squared_error() scikit-learn function.
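A minimal sketch of such an example, taking the square root of the MSE returned by scikit-learn:

from math import sqrt
from sklearn.metrics import mean_squared_error

expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
# RMSE is the square root of the mean squared error
rmse = sqrt(mean_squared_error(expected, predictions))
print('RMSE: %f' % rmse)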
RMSE: 0.148324
The RMSE error values are in the same units as the predictions. As with the mean squared error, an RMSE of zero indicates no error.
Further Reading
Below are some references for further reading on time series forecast error measures.
Section 3.3 Measuring Predictive Accuracy, Practical Time Series Forecasting with R: A Hands-On Guide.
Section 2.5 Evaluating Forecast Accuracy, Forecasting: Principles and Practice.
scikit-learn Metrics API.
Section 3.3.4 Regression metrics, scikit-learn API Guide.
Summary
In this tutorial, you discovered a suite of 5 standard time series performance measures in Python.
Specifically, you learned:
How to calculate forecast residual error and how to estimate the bias in a list of forecasts.
How to calculate mean absolute forecast error to describe error in the same units as the
predictions.
How to calculate the widely used mean squared error and root mean squared error for forecasts.
Do you have any questions about time series forecast performance measures, or about this tutorial?
Ask your questions in the comments below and I will do my best to answer.
Peter Marelas February 1, 2017 at 2:24 pm #
I’ve seen MAPE used a few times to evaluate our forecasting models. Do you see this used
often and when would you use one over the other?
Jason Brownlee February 2, 2017 at 1:55 pm #
Ian February 3, 2017 at 3:21 am #
Jason Brownlee February 3, 2017 at 10:10 am #
Jason Brownlee February 7, 2017 at 10:15 am #
Sorry, I do not have the capacity to prepare this example for you.
Devakar Kumar Verma August 8, 2017 at 6:44 pm #
What should be the range of values for all the different measures of performance for an acceptable model?
Jason Brownlee August 9, 2017 at 6:25 am #
Good question, it really depends on your problem and the units of your variable.
Devakar Kumar Verma August 9, 2017 at 2:14 pm #
Jason Brownlee August 10, 2017 at 6:48 am #
If you have accuracy scores between 0 and 100, maybe 60% is good because the
problem is hard, maybe 98% is good because the problem is easy.
A good way to figure out if a model is skillful is to compare it to a lot of other models or
against a solid base line model (e.g. relative measure of good).
Hi Jason,
And what about if we perform multivariate time series forecasting? Imagine we forecast 3 time series with the same model, how would you provide the results? Per time series? The mean of the errors?
You can decide how to evaluate the skill of the model, perhaps RMSE across all forecasted
data points.
Carlos May 19, 2018 at 1:45 am #
Good evening, one question: if I want to get the max error, how could that be done?
Jason Brownlee May 19, 2018 at 7:43 am #
Kate August 15, 2018 at 11:33 pm #
Hi, thanks for the post. If I understand correctly, the method mentioned here is useful for
correcting predictions if the ground truths of the test examples are readily available and are included in
the correction process. I was wondering if there are similar approaches for situations where there is a
noticeable trend for residuals in your training/testing data, and I’d like to create a model utilizing these
trends in an environment where ground truths for new examples are not available?
Jason Brownlee August 16, 2018 at 6:06 am #
ARIMA and ETS models can handle the trend in your data.
According to my internet search, I found that Mean Absolute Scaled Error is a perfect measure for sales forecasting. But I didn't find any concrete explanations on how to use and calculate it. As I am working with multiple stores and multiple products, I have multiple time series in the dataset. I have all the predictions but don't know how to evaluate them.
Please give some details on how to do this and calculate MASE for multiple time series.
Jason Brownlee November 1, 2018 at 6:04 am #
Parth Gadoya November 2, 2018 at 5:49 pm #
Carla December 7, 2018 at 1:58 am #
Hi,
In "Statistical and Machine Learning Forecasting Methods: Concerns and ways forward" by Spyros Makridakis, they used this code for sMAPE. Add it as a def and use it in the same way as you use MSE. I assume it should work.
import numpy as np

def sMAPE(a, b):
    """
    Calculates sMAPE
    :param a: actual values
    :param b: predicted values
    :return: sMAPE
    """
    a = np.reshape(a, (-1,))
    b = np.reshape(b, (-1,))
    return np.mean(100 * 2.0 * np.abs(a - b) / (np.abs(a) + np.abs(b))).item()
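A quick usage sketch, with hypothetical actual and predicted arrays, assuming the sMAPE function above has been defined:

import numpy as np

actual = np.array([10.0, 12.0, 15.0, 11.0])
predicted = np.array([11.0, 12.5, 14.0, 10.0])
# symmetric MAPE expressed as a percentage
print('sMAPE: %.3f' % sMAPE(actual, predicted))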
bobby November 7, 2018 at 9:03 am #
Hi,
Do you know any error metrics that punish longer lasting errors in time series more than large magnitude errors?
Thanks,
bobby
Daniël Muysken December 4, 2018 at 12:06 am #
Hey, I was wondering if you know of an error measure that is not so sensitive to outliers? I have some high peaks in my time series that are difficult to predict and I want this error to not carry too much weight when evaluating my prediction.
Jason Brownlee December 4, 2018 at 6:03 am #
Atharva February 4, 2019 at 5:50 pm #
Hey, can you tell me how I can determine the accuracy of my model from the RMSE value?
Jason Brownlee February 5, 2019 at 8:13 am #
You cannot calculate accuracy for a regression problem, I explain this more here:
https://machinelearningmastery.com/classification-versus-regression-in-machine-learning/
Abid Mehmood February 20, 2019 at 8:53 pm #
How do we know which error (RMSE, MSE, MAE) we should use in our time series predictions?
Jason Brownlee February 21, 2019 at 7:55 am #
You can talk to project stakeholders and discover what they would like to know about the
performance of a model on the problem – then choose a metric accordingly.
If unsure, use RMSE as the units will be in the scale of the target variable and it's easy to understand.
Dav June 8, 2019 at 2:59 am #
Hi
Once again, great articles and sorry, I just asked you a question on another topic as well.
Tracking Error = Standard deviation of difference between Actual and Predicted values
I am thinking about using Tracking Error to measure Time Series Forecasting Performance. Any reason I
shouldn’t use it?
Thanks
Dav
Jason Brownlee June 8, 2019 at 7:03 am #
Francisco June 28, 2019 at 5:59 pm #
Hi Jason,
I’m confused with the Forecast bias: “A mean forecast error value other than zero suggests a tendency
of the model to over forecast (positive error) or under forecast (negative error)”
actual – prediction > 0 if the prediction is below, and I'd understand that's under forecast, but in your example the bias is negative and the prediction is above:
Jason Brownlee June 29, 2019 at 6:45 am #
Fixed.
duderino July 20, 2019 at 5:52 am #
I really enjoyed reading your post, thank you for this. One question if I may:
Let's say we are working with a dataset where you are forecasting population growth (number of people) and your dataset's most recent value shows roughly 37mil population.
Assuming we do all of the forecasting and calculations correctly, and I (we) are currently sitting at:
Mean Absolute Error: 52,386
Mean Squared Error: 3,650,276,091
Root Mean Squared Error: 60,417
(and just for fun) Mean Absolute Percentage Error: 0.038
How does one interpret these numbers when working with a dataset of this scale? I've read that "closer to zero is best" but I feel like the size of my dataset means that 60,417 is actually a pretty good number, but I'm not sure.
Jason Brownlee July 20, 2019 at 10:58 am #