
FORECASTING ERRORS

Several forecasting methods now have been presented. How does one choose the appropriate method for any particular application? Identifying the underlying model that best fits the time series (constant-level, linear trend, etc., perhaps in combination with seasonal effects) is an important first step. Assessing how stable the parameters of the model are, and so how much reliance can be placed on older data for forecasting, also helps to narrow down the selection of the method. However, the final choice between two or three methods may still not be clear. Some measure of performance is needed.

The goal is to generate forecasts that are as accurate as possible, so it is natural to base a measure of performance on the forecasting errors.

The forecasting error (also called the residual) for any period t is the absolute value of the deviation of the forecast for period t (Ft) from what then turns out to be the observed value of the time series for period t (xt). Thus, letting Et denote this error,

Et = |xt - Ft|.

For example, column J of the spreadsheet in Fig. 27.5 gives the forecasting errors when applying exponential smoothing with trend to the CCW example.
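As a small numerical illustration of this definition (the numbers below are made up for illustration and are not the CCW data from Fig. 27.5), the forecasting errors for a short series can be computed directly:

# Hypothetical observed values xt and the forecasts Ft made for those periods
# (illustrative numbers only, not the CCW example).
observations = [25, 28, 22, 27]
forecasts = [24, 25, 26, 25]

# Forecasting error (residual) for each period: Et = |xt - Ft|
errors = [abs(x - f) for x, f in zip(observations, forecasts)]
print(errors)  # [1, 3, 4, 2]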

Given the forecasting errors for n time periods (t = 1, 2, . . . , n), two popular measures of performance are available. One, called the mean absolute deviation (MAD), is simply the average of the errors, so

MAD = (E1 + E2 + . . . + En)/n.

The other, called the mean square error (MSE), is the average of the squares of the errors, so

MSE = (E1² + E2² + . . . + En²)/n.

This measure is provided by MSE (M33) in Fig. 27.5.
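Continuing the small made-up example above, both measures follow directly from their definitions:

# Forecasting errors Et = |xt - Ft| for n = 4 periods (same made-up numbers as above).
errors = [1, 3, 4, 2]
n = len(errors)

mad = sum(errors) / n                  # mean absolute deviation
mse = sum(e ** 2 for e in errors) / n  # mean square error

print(mad)  # 2.5
print(mse)  # 7.5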

The advantages of MAD are its ease of calculation and its straightforward interpretation. However, the advantage of MSE is that it imposes a relatively large penalty for a large forecasting error that can have serious consequences for the organization while almost ignoring inconsequentially small forecasting errors. In practice, managers often prefer to use MAD, whereas statisticians generally prefer MSE.

Either measure of performance might be used in two different ways. One is to compare alternative forecasting methods in order to choose one with which to begin forecasting. This is done by applying the methods retrospectively to the time series in the past (assuming such data exist). This is a very useful approach as long as the future behavior of the time series is expected to resemble its past behavior. Similarly, this retrospective testing can be used to help select the parameters for a particular forecasting method, e.g., the smoothing constant(s) for exponential smoothing. Second, after the real forecasting begins with some method, one of the measures of performance (or possibly both) normally would be calculated periodically to monitor how well the method is performing. If the performance is disappointing, the same measure of performance can be calculated for alternative forecasting methods to see if any of them would have performed better.
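As an illustration of the first use, the following sketch applies simple exponential smoothing retrospectively to a made-up time series for two candidate smoothing constants and reports MAD and MSE for each. The series, the initial estimate, and the candidate values of the smoothing constant are all assumptions chosen only for illustration.

def retrospective_mad_mse(series, alpha, initial_estimate):
    """Apply simple exponential smoothing retrospectively; return (MAD, MSE)."""
    estimate = initial_estimate
    errors = []
    for x in series:
        forecast = estimate               # forecast made for this period
        errors.append(abs(x - forecast))  # Et = |xt - Ft|
        estimate = alpha * x + (1 - alpha) * estimate  # update the estimate for the next period
    n = len(errors)
    return sum(errors) / n, sum(e ** 2 for e in errors) / n

series = [24, 26, 23, 27, 25, 29, 28]  # hypothetical past time series
for alpha in (0.1, 0.3):               # candidate smoothing constants
    mad, mse = retrospective_mad_mse(series, alpha, initial_estimate=24)
    print(f"alpha={alpha}: MAD={mad:.2f}, MSE={mse:.2f}")

Whichever candidate yields the smaller MAD (or MSE) on the past data would then be the natural choice for forecasting going forward.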
