FORECASTING METHODS FOR A CONSTANT-LEVEL MODEL

We now present four alternative forecasting methods for the constant-level model introduced in the preceding section. This model, like any other, is only intended to be an idealized representation of the actual situation. For the real time series, at least small shifts in the value of A may be occurring occasionally. Each of the following methods reflects a different assessment of how recently (if at all) a significant shift may have occurred.

Last-Value Forecasting Method

By interpreting t as the current time, the last-value forecasting procedure uses the value of the time series observed at time t (xt) as the forecast at time t + 1. Therefore,

Ft+1 = xt.

For example, if xt represents the sales of a particular product in the quarter just ended, this procedure uses these sales as the forecast of the sales for the next quarter.

This forecasting procedure has the disadvantage of being imprecise; i.e., its variance is large because it is based upon a sample of size 1. It is worth considering only if (1) the underlying assumption about the constant-level model is “shaky” and the process is changing so rapidly that anything before time t is almost irrelevant or misleading or (2) the assumption that the random error et has constant variance is unreasonable and the variance at time t actually is much smaller than at previous times.

The last-value forecasting method sometimes is called the naive method, because statisticians consider it naive to use just a sample size of one when additional relevant data are available. However, when conditions are changing rapidly, it may be that the last value is the only relevant data point for forecasting the next value under current conditions. Therefore, decision makers who are anything but naive do occasionally use this method under such circumstances.
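The last-value rule Ft+1 = xt can be sketched in a few lines of Python. This is an illustrative sketch, not code from the text; the function name and the sales figures are hypothetical.

```python
# Last-value (naive) forecast: the forecast for the next period is
# simply the most recent observation, F_{t+1} = x_t.
def last_value_forecast(series):
    """Return the last observed value as the next-period forecast."""
    return series[-1]

sales = [120, 135, 128, 141]  # hypothetical quarterly sales
print(last_value_forecast(sales))  # -> 141
```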

Averaging Forecasting Method

This method goes to the other extreme. Rather than using just a sample size of one, this method uses all the data points in the time series and simply averages these points. Thus, the forecast of what the next data point will turn out to be is

Ft+1 = (x1 + x2 + . . . + xt)/t.

This estimate is an excellent one if the process is entirely stable, i.e., if the assumptions about the underlying model are correct. However, frequently there exists skepticism about the persistence of the underlying model over an extended time. Conditions inevitably change eventually. Because of a natural reluctance to use very old data, this procedure generally is limited to young processes.
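The averaging method is equally short to express in code. Again, this is a minimal sketch with hypothetical names and data, not the text's own implementation.

```python
# Averaging forecast: use the mean of ALL observations to date,
# F_{t+1} = (x_1 + x_2 + ... + x_t)/t.
def averaging_forecast(series):
    """Return the mean of the entire time series as the next-period forecast."""
    return sum(series) / len(series)

sales = [120, 135, 128, 141]  # hypothetical quarterly sales
print(averaging_forecast(sales))  # -> 131.0
```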

Moving-Average Forecasting Method

Rather than using very old data that may no longer be relevant, this method averages the data for only the last n periods as the forecast for the next period, i.e.,

Ft+1 = (xt-n+1 + xt-n+2 + . . . + xt)/n.

Note that this forecast is easily updated from period to period. All that is needed each time is to drop the oldest observation from the average and add the newest one.

The moving-average estimator combines the advantages of the last value and averaging estimators in that it uses only recent history and it uses multiple observations. A disadvantage of this method is that it places as much weight on xt-n+1 as on xt. Intuitively, one would expect a good method to place more weight on the most recent observation than on older observations that may be less representative of current conditions. Our next method does just this.
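A moving average over the last n periods can be sketched as follows; the function name, the choice n = 3, and the data are hypothetical.

```python
# Moving-average forecast: mean of only the last n observations,
# F_{t+1} = (x_{t-n+1} + ... + x_t)/n.
def moving_average_forecast(series, n):
    """Return the mean of the most recent n observations."""
    if len(series) < n:
        raise ValueError("need at least n observations")
    return sum(series[-n:]) / n

sales = [120, 135, 128, 141, 150]  # hypothetical quarterly sales
print(round(moving_average_forecast(sales, 3), 2))  # -> 139.67
```

Note how each update drops the oldest of the n observations and adds the newest, as described above.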

Exponential Smoothing Forecasting Method

This method uses the formula

Ft+1 = axt + (1 - a)Ft,

where a (0 < a < 1) is called the smoothing constant. (The choice of a is discussed later.) Thus, the forecast is just a weighted sum of the last observation xt and the preceding forecast Ft for the period just ended. Because of this recursive relationship between Ft+1 and Ft, alternatively Ft+1 can be expressed as

Ft+1 = axt + a(1 - a)xt-1 + a(1 - a)^2 xt-2 + . . . .

In this form, it becomes evident that exponential smoothing gives the most weight to xt and decreasing weights to earlier observations. Furthermore, the first form reveals that the forecast is simple to calculate because the data prior to period t need not be retained; all that is required is xt and the previous forecast Ft.

Another alternative form for the exponential smoothing technique is given by

Ft+1 = Ft + a(xt - Ft),

which gives a heuristic justification for this method. In particular, the forecast of the time series at time t + 1 is just the preceding forecast at time t plus the product of the forecasting error at time t and a discount factor a. This alternative form is often simpler to use.

A measure of effectiveness of exponential smoothing can be obtained under the assumption that the process is completely stable, so that X1, X2, . . . are independent, identically distributed random variables with variance σ^2. It then follows that (for large t)

var(Ft+1) = [a/(2 - a)]σ^2,

so that the variance is statistically equivalent to a moving average with (2 - a)/a observations. For example, if a is chosen equal to 0.1, then (2 - a)/a = 19. Thus, in terms of its variance, the exponential smoothing method with this value of a is equivalent to the moving-average method that uses 19 observations. However, if a change in the process does occur (e.g., if the mean starts increasing), exponential smoothing will react more quickly with better tracking of the change than the moving-average method.
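The equivalence (2 - a)/a can be tabulated for a few values of a; this small sketch (with a hypothetical function name) reproduces the example in the text.

```python
# Number of observations to which exponential smoothing with smoothing
# constant a is variance-equivalent: n = (2 - a)/a.
def equivalent_observations(a):
    return (2 - a) / a

for a in (0.1, 0.2, 0.3):
    print(a, round(equivalent_observations(a), 1))
# a = 0.1 gives 19.0, matching the 19-observation moving average above.
```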

An important drawback of exponential smoothing is that it lags behind a continuing trend; i.e., if the constant-level model is incorrect and the mean is increasing steadily, then the forecast will be several periods behind. However, the procedure can be easily adjusted for trend (and even seasonally adjusted).

Another disadvantage of exponential smoothing is that it is difficult to choose an appropriate smoothing constant a. Exponential smoothing can be viewed as a statistical filter that inputs raw data from a stochastic process and outputs smoothed estimates of a mean that varies with time. If a is chosen to be small, response to change is slow, with resultant smooth estimators. On the other hand, if a is chosen to be large, response to change is fast, with resultant large variability in the output. Hence, there is a need to compromise, depending upon the degree of stability of the process. It has been suggested that a should not exceed 0.3 and that a reasonable choice for a is approximately 0.1. This value can be increased temporarily if a change in the process is expected or when one is just starting the forecasting. At the start, a reasonable approach is to choose the forecast for period 2 according to

F2 = ax1 + (1 - a)(initial estimate),

where some initial estimate of the constant level A must be obtained. If past data are available, such an estimate may be the average of these data.
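The start-up formula F2 = ax1 + (1 - a)(initial estimate) can be sketched directly; the function name and the numbers (a = 0.1, an initial estimate of 130 from past data) are hypothetical.

```python
# Start-up step for exponential smoothing:
# F_2 = a*x_1 + (1 - a)*(initial estimate of the constant level A).
def initial_forecast(x1, a, initial_estimate):
    """Blend the first observation with a prior estimate of the level."""
    return a * x1 + (1 - a) * initial_estimate

print(initial_forecast(120, 0.1, 130))  # -> 129.0
```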

The Excel files for this chapter in your OR Courseware include a pair of Excel templates for each of the four forecasting methods presented in this section. In each case, one template (without seasonality) applies the method just as described here. The second template (with seasonality) also incorporates into the method the seasonal factors discussed in the next section.

The forecasting area of your IOR Tutorial also includes procedures for applying these four forecasting methods (and others). You enter the data (after making any needed seasonal adjustment yourself), and each procedure then shows a graph that includes both the data points (in blue) and the resulting forecasts (in red) for each period. You then have the opportunity to drag any of the data points to new values and immediately see how the subsequent forecasts would change. The purpose is to allow you to play with the data and gain a better feeling for how the forecasts perform with various configurations of data for each of the forecasting methods.
