LINEAR PROGRAMMING UNDER UNCERTAINTY: CHANCE CONSTRAINTS

CHANCE CONSTRAINTS

The parameters of a linear programming model typically remain uncertain until their actual values can be observed at some later time, when the adopted solution is implemented for the first time. The preceding section describes how robust optimization deals with this uncertainty by revising the values of the parameters in the model to ensure that the resulting solution actually will be feasible when it finally is implemented. This involves identifying an upper and a lower bound on the possible value of each uncertain parameter. The estimated value of the parameter then is replaced by whichever of these two bounds makes it more difficult to achieve feasibility.

This is a useful approach when dealing with hard constraints, i.e., those constraints that must be satisfied. However, it does have certain shortcomings. One is that it might not be possible to identify an upper and a lower bound for an uncertain parameter accurately. In fact, the parameter might not even have an upper and a lower bound. This is the case, for example, when the underlying probability distribution for the parameter is a normal distribution, which has long tails with no bounds. A related shortcoming is that, when the underlying distribution has long tails with no bounds, the tendency would be to assign bounds so wide that they lead to overly conservative solutions.

Chance constraints are designed largely to deal with parameters whose distribution has long tails with no bounds. For simplicity, we will deal with the relatively straightforward case where the only uncertain parameters are the right-hand sides (the bi), where these bi are independent random variables with a normal distribution. We will denote the mean and standard deviation of this distribution for each bi by µi and σi, respectively. To be specific, we also assume that all the functional constraints are in ≤ form. (The ≥ form would be treated similarly, but chance constraints aren't applicable when the original constraint is in = form.)

Under these assumptions, the chance constraint that replaces each original constraint Σj aij xj ≤ bi takes the form

P(Σj aij xj ≤ bi) ≥ α,

where α is a desired (large) probability that the constraint will hold. Its deterministic equivalent is the linear constraint

Σj aij xj ≤ µi + Kα σi,

where Kα is the constant given in a normal table such that a standard normal random variable is greater than or equal to Kα with probability α. For example, K0.95 = −1.645, so a chance constraint with α = 0.95 is replaced by a constraint whose right-hand side is µi − 1.645σi.

In other words, if µi corresponds to the original estimated value of bi, then reducing this right-hand side by 1.645σi will ensure that the constraint will be satisfied with probability at least 0.95. (This probability will be exactly 0.95 if this deterministic form holds with equality, but it will be greater than 0.95 if the left-hand side is less than the right-hand side.) Figure 7.23 illustrates what is going on here. The normal distribution in the figure represents the probability density function of the actual value of bi that will be realized when the solution is implemented. The cross-hatched area (0.05) on the left side of the figure gives the probability that bi will turn out to be less than µi − 1.645σi, so the probability is 0.95 that bi will be greater than this quantity. Therefore, requiring that the left-hand side of the constraint not exceed this quantity means that this left-hand side will be less than the final value of bi at least 95 percent of the time.
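Computationally, Kα is just a percentile of the standard normal distribution, so the adjusted right-hand side is easy to obtain. The following minimal sketch uses Python with scipy (our choice of tool, not one the text prescribes); the values of mu, sigma, and alpha are hypothetical.

```python
from scipy.stats import norm

def adjusted_rhs(mu, sigma, alpha):
    """Right-hand side of the deterministic equivalent: mu + K_alpha * sigma."""
    K = norm.ppf(1.0 - alpha)   # K_alpha; for alpha = 0.95 this is -1.645
    return mu + K * sigma

# Hypothetical constraint with b_i ~ N(10, 2^2) and alpha = 0.95:
print(adjusted_rhs(10.0, 2.0, 0.95))   # 10 - 1.645(2) = 6.71
```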

Example

To illustrate the use of chance constraints, we return to the original version of the Wyndor Glass Co. problem and its model as formulated in Sec. 3.1. Suppose now that there is some uncertainty about how much production time will be available for the two new products when their production begins in the three plants a little later. Therefore, b1, b2, and b3 now are uncertain parameters (random variables) in the model. Assuming that these parameters have a normal distribution, the first step is to estimate the mean and standard deviation for each one. Table 3.1 gives the original estimates of how much production time will be available per week in the three plants, so these quantities can be taken to be the means if they still seem to be the most likely available production times. The standard deviation provides a measure of how much the actual available production time might deviate from this mean. In particular, the normal distribution has the property that approximately two-thirds of the distribution lies within one standard deviation of the mean. Therefore, a good way to estimate the standard deviation of each bi is to ask how much the actual available production time could deviate from the mean such that there is a 2-in-3 chance that the deviation will not be larger than this.
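The two-thirds property and the 2-in-3 elicitation rule can be checked numerically. In the sketch below (again scipy, with illustrative numbers), the exact multiplier for a 2-in-3 central probability turns out to be about 0.97, so treating the elicited deviation as one standard deviation is a reasonable approximation.

```python
from scipy.stats import norm

# Probability that a normal variate lies within one standard deviation
# of its mean: about 0.683, i.e., roughly a 2-in-3 chance.
print(norm.cdf(1.0) - norm.cdf(-1.0))    # 0.6827

# Exact version of the 2-in-3 rule: P(|b - mu| <= k*sigma) = 2/3
# requires k = Phi^(-1)(5/6) = 0.967, so an elicited deviation d implies
# sigma = d / 0.967, only slightly larger than d itself.
d = 0.5                                   # hypothetical elicited deviation
print(d / norm.ppf(5.0 / 6.0))            # implied sigma, about 0.517
```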

Another important step is to select an appropriate value of α as defined above. This choice depends on how serious it would be if an original constraint ended up being violated when the solution is implemented. How difficult would it be to make the necessary adjustments if this were to happen? When dealing with soft constraints that actually can be violated a little bit without very serious complications, a value of approximately α = 0.95 would be a common choice, and that is what we will use in this example. (We will discuss the case of hard constraints in the next subsection.)

Table 7.11 shows the estimates of the mean and standard deviation of each bi for this example. The last two columns also show the original right-hand side (RHS) and the adjusted right-hand side for each of the three functional constraints.

TABLE 7.11 The data for the example of applying chance constraints to the Wyndor Glass Co. problem

Constraint   µi    σi    Original RHS    Adjusted RHS
Plant 1       4    0.2         4         4 − 1.645(0.2) = 3.671
Plant 2      12    0.5        12         12 − 1.645(0.5) = 11.178
Plant 3      18    1          18         18 − 1.645(1) = 16.355

Replacing the original right-hand sides by these adjusted values gives the following linear programming model:

Maximize Z = 3x1 + 5x2,

subject to

x1 ≤ 3.671
2x2 ≤ 11.178
3x1 + 2x2 ≤ 16.355

and x1 ≥ 0, x2 ≥ 0.

Its optimal solution is x1 = 1.726 and x2 = 5.589, with Z = 33.122 (a total profit of $33,122 per week). This total profit per week is a significant reduction from the $36,000 found for the original version of the Wyndor model. However, by reducing the production rates of the two new products from their original values of x1 = 2 and x2 = 6, we now have a high probability that the new production plan actually will be feasible without needing to make any adjustments when production gets under way.
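These figures can be verified by solving the deterministic-equivalent model directly. Here is a minimal sketch with scipy.optimize.linprog (again our choice of solver); linprog minimizes, so the profit coefficients are negated.

```python
from scipy.optimize import linprog

# Deterministic equivalents of the chance constraints (alpha = 0.95):
#   x1 <= 3.671,  2*x2 <= 11.178,  3*x1 + 2*x2 <= 16.355
c = [-3.0, -5.0]                          # maximize Z = 3*x1 + 5*x2
A_ub = [[1.0, 0.0],
        [0.0, 2.0],
        [3.0, 2.0]]
b_ub = [3.671, 11.178, 16.355]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, None)] * 2)
print(res.x)        # approximately [1.726, 5.589]
print(-res.fun)     # approximately 33.122
```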

We can estimate this high probability if we assume not only that the three bi have normal distributions but also that these three distributions are statistically independent. The new production plan will turn out to be feasible if all three of the original functional constraints are satisfied. For each of these constraints, the probability is at least 0.95 that it will be satisfied, where the probability will be exactly 0.95 if the deterministic equivalent of the corresponding chance constraint is satisfied with equality by the optimal solution for the linear programming model. Therefore, the probability that all three constraints are satisfied is at least (0.95)^3 = 0.857. However, only the second and third deterministic equivalents are satisfied with equality in this case, so the probability that the first constraint will be satisfied is larger than 0.95. In the best case where this probability is essentially 1, the probability that all three constraints will be satisfied is essentially (0.95)^2 = 0.9025. Consequently, the probability that the new production plan will turn out to be feasible is somewhere between the lower bound of 0.857 and the upper bound of 0.9025. (In this case, x1 = 1.726 is more than 11 standard deviations below 4, the mean of b1, so the probability of satisfying the first constraint is essentially 1, which means that the probability of satisfying all three constraints is essentially 0.9025.)
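A quick calculation confirms both bounds and the claim about the first constraint; the mean and standard deviation of b1 below are the ones in Table 7.11.

```python
from scipy.stats import norm

alpha = 0.95
print(alpha ** 3)           # lower bound: 0.857 (all three constraints tight)
print(alpha ** 2)           # upper bound: 0.9025 (only two are tight here)

# Slack in the first constraint: x1 = 1.726 versus b1 ~ N(4, 0.2^2).
z = (4.0 - 1.726) / 0.2     # about 11.4 standard deviations
print(norm.cdf(z))          # essentially 1, so the 0.9025 bound is attained
```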

Dealing with Hard Constraints

Chance constraints are well suited for dealing with soft constraints, i.e., constraints that actually can be violated a little bit without very serious complications. However, they also might have a role to play when dealing with hard constraints, i.e., constraints that must be satisfied. Recall that robust optimization, described in the preceding section, is especially designed for addressing problems with hard constraints. When bi is the uncertain parameter in a hard constraint, robust optimization begins by estimating the upper bound and the lower bound on bi. However, if the probability distribution of bi has long tails with no bounds, such as with a normal distribution, it becomes impossible to set bounds on bi that will have zero probability of being violated. Therefore, an attractive alternative approach is to replace such a constraint by a chance constraint with a very high value of α, say, at least 0.99. Since K0.99 = −2.33, this would further reduce the right-hand sides calculated in Table 7.11 to b1 = 3.534, b2 = 10.835, and b3 = 15.67.
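These reduced right-hand sides can be reproduced as below (the small discrepancies arise because the text rounds K0.99 to −2.33, whereas its exact value is about −2.326).

```python
from scipy.stats import norm

K = norm.ppf(0.01)             # K_0.99 = -2.326 (rounded to -2.33 in the text)
for mu, sigma in [(4.0, 0.2), (12.0, 0.5), (18.0, 1.0)]:
    print(mu + K * sigma)      # about 3.535, 10.837, 15.674
```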

Although α = 0.99 might seem reasonably safe, there is a hidden danger involved. What we actually want is a very high probability that all the original constraints will be satisfied. This probability is somewhat less than the probability that any specific single original constraint will be satisfied, and it can be much less if the number of functional constraints is very large.

We described in the last paragraph of the preceding subsection how to calculate both a lower bound and an upper bound on the probability that all the original constraints will be satisfied. In particular, if there are M functional constraints with uncertain bi, the lower bound is α^M. After replacing the chance constraints by their deterministic equivalents and solving the resulting linear programming problem, the next step is to count the number of these deterministic equivalents that are satisfied with equality by this optimal solution. Denoting this number by N, the upper bound is α^N. Thus,

α^M ≤ Probability that all the constraints will be satisfied ≤ α^N.

When using α = 0.99, these bounds on this probability can be less than desirable if M and N are large. Therefore, for a problem with a large number of uncertain bi, it might be advisable to use a value of α much closer to 1 than 0.99.
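The bound α^M also tells us how close to 1 the value of α needs to be: to guarantee probability at least p that all M constraints hold, we need α^M ≥ p, i.e., α ≥ p^(1/M). A sketch with hypothetical values of M, N, and p:

```python
def joint_bounds(alpha, M, N):
    """alpha**M <= P(all M constraints hold) <= alpha**N (N tight equivalents)."""
    return alpha ** M, alpha ** N

# With alpha = 0.99 and, say, M = 20 constraints (N = 10 of them tight),
# the guarantee is disappointingly weak:
print(joint_bounds(0.99, 20, 10))     # about (0.818, 0.904)

# Choosing alpha to guarantee p = 0.99 across all M = 20 constraints:
p, M = 0.99, 20
print(p ** (1.0 / M))                 # about 0.9995, i.e., much closer to 1
```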

Extensions

Thus far, we have only considered the case where the only uncertain parameters are the bi. If the coefficients in the objective function (the cj) also are uncertain parameters, it is quite straightforward to deal with this case as well. In particular, after estimating the probability distribution of each cj, each of these parameters can be replaced by the mean of this distribution. The quantity to be maximized or minimized then becomes the expected value (in the statistical sense) of the objective function. Furthermore, this expected value is a linear function, so linear programming still can be used to solve the model.
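As a small illustration (hypothetical distributions for the cj, with the original Wyndor constraints kept for concreteness), replacing each cj by its mean leaves an ordinary linear program:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical samples representing the uncertainty in c1 and c2.
c_samples = rng.normal(loc=[3.0, 5.0], scale=[0.3, 0.5], size=(1000, 2))
c_mean = c_samples.mean(axis=0)            # close to [3, 5]

# Maximizing the expected profit is still a linear program.
res = linprog(-c_mean,
              A_ub=[[1, 0], [0, 2], [3, 2]], b_ub=[4, 12, 18],
              bounds=[(0.0, None)] * 2)
print(res.x)                               # optimal plan under expected profits
```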

The case where the coefficients in the functional constraints (the aij) are uncertain parameters is much more difficult. For each constraint, the deterministic equivalent of the corresponding chance constraint now includes a complicated nonlinear expression. It is not impossible to solve the resulting nonlinear programming model. In fact, LINGO has special features for converting a deterministic model to a chance-constrained model with probabilistic coefficients and then solving it. This can be done with any of the major probability distributions for the parameters of the model.
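For the record, here is the source of that nonlinearity in the simplest case. If the coefficients ai1, . . . , ain of a ≤ constraint are jointly normal with means µij and covariance matrix Ci (a standard assumption, not one spelled out in this text), then Σj aij xj is itself normally distributed, so the chance constraint P(Σj aij xj ≤ bi) ≥ α has the deterministic equivalent

Σj µij xj − Kα √(xᵀ Ci x) ≤ bi,

where Kα is the same normal-table constant used earlier (so −Kα is positive when α > 0.5). The square-root term is the complicated nonlinear expression referred to above; it makes the deterministic equivalent a convex (second-order cone) constraint rather than a linear one.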
