OTHER ALGORITHMS FOR LINEAR PROGRAMMING: THE DUAL SIMPLEX METHOD

THE DUAL SIMPLEX METHOD

The dual simplex method is based on the duality theory presented in Chap. 6. To describe the basic idea behind this method, it is helpful to use some terminology introduced in Tables 6.10 and 6.11 of Sec. 6.3 for describing any pair of complementary basic solutions in the primal and dual problems. In particular, recall that both solutions are said to be primal feasible if the primal basic solution is feasible, whereas they are called dual feasible if the complementary dual basic solution is feasible for the dual problem. Also recall (as indicated on the right side of Table 6.11) that each complementary basic solution is optimal for its problem if and only if it is both primal feasible and dual feasible.

The dual simplex method can be thought of as the mirror image of the simplex method. The simplex method deals directly with basic solutions in the primal problem that are primal feasible but not dual feasible. It then moves toward an optimal solution by striving to achieve dual feasibility as well (the optimality test for the simplex method). By contrast, the dual simplex method deals with basic solutions in the primal problem that are dual feasible but not primal feasible. It then moves toward an optimal solution by striving to achieve primal feasibility as well.

Furthermore, the dual simplex method deals with a problem as if the simplex method were being applied simultaneously to its dual problem. If we make their initial basic solutions complementary, the two methods move in complete sequence, obtaining complementary basic solutions with each iteration.

The dual simplex method is very useful in certain special types of situations. Ordinarily it is easier to find an initial basic solution that is feasible than one that is dual feasible. However, it is occasionally necessary to introduce many artificial variables to construct an initial BF solution artificially. In such cases it may be easier to begin with a dual feasible basic solution and use the dual simplex method. Furthermore, fewer iterations may be required when it is not necessary to drive many artificial variables to zero.

When dealing with a problem whose initial basic solutions (without artificial variables) are neither primal feasible nor dual feasible, it also is possible to combine the ideas of the simplex method and dual simplex method into a primal-dual algorithm that strives toward both primal feasibility and dual feasibility.

As we mentioned several times in Chaps. 6 and 7, as well as in Sec. 4.7, another important primary application of the dual simplex method is its use in conjunction with sensitivity analysis. Suppose that an optimal solution has been obtained by the simplex method but that it becomes necessary (or of interest for sensitivity analysis) to make minor changes in the model. If the formerly optimal basic solution is no longer primal feasible (but still satisfies the optimality test), you can immediately apply the dual simplex method by starting with this dual feasible basic solution. (We will illustrate this at the end of this section.) Applying the dual simplex method in this way usually leads to the new optimal solution much more quickly than would solving the new problem from the beginning with the simplex method.

The dual simplex method also can be useful in solving certain huge linear programming problems from scratch because it is such an efficient algorithm. Computational experience with the most powerful versions of linear programming solvers indicates that the dual simplex method often is more efficient than the simplex method for solving particularly massive problems encountered in practice.

The rules for the dual simplex method are very similar to those for the simplex method. In fact, once the methods are started, the only difference between them is in the criteria used for selecting the entering and leaving basic variables and for stopping the algorithm.

To start the dual simplex method (for a maximization problem), we must have all the coefficients in Eq. (0) nonnegative (so that the basic solution is dual feasible). The basic solutions will be infeasible (except for the last one) only because some of the variables are negative. The method continues to decrease the value of the objective function, always retaining nonnegative coefficients in Eq. (0), until all the variables are nonnegative. Such a basic solution is feasible (it satisfies all the equations) and is, therefore, optimal by the simplex method criterion of nonnegative coefficients in Eq. (0).
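As a small numerical illustration of this starting condition, the snippet below uses a made-up two-variable problem (not one from the text): maximize Z = -3x1 - 2x2 subject to x1 + x2 ≥ 4 with x1, x2 ≥ 0. After converting the constraint to ≤ form and adding a slack x3, Eq. (0) has only nonnegative coefficients while the basic variable x3 is negative, which is exactly the situation the dual simplex method starts from.

```python
# Hypothetical starting tableau (illustrative numbers, not from the text):
#   Eq. (0):  Z + 3*x1 + 2*x2        = 0     <- all coefficients nonnegative (dual feasible)
#   Eq. (1):     -x1  -  x2  +  x3   = -4    <- basic variable x3 = -4 < 0 (not primal feasible)

eq0_coefficients = [3, 2, 0]   # coefficients of x1, x2, x3 in Eq. (0)
basic_values = [-4]            # current values of the basic variables (here just x3)

dual_feasible = all(c >= 0 for c in eq0_coefficients)
primal_feasible = all(b >= 0 for b in basic_values)
print(dual_feasible, primal_feasible)   # True False, so the dual simplex method applies
```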

The details of the dual simplex method are summarized next.

Summary of the Dual Simplex Method

1. Initialization: After converting any functional constraints in ≥ form to ≤ form (by multiplying through both sides by -1), introduce slack variables as needed to construct a set of equations describing the problem. Find a basic solution such that the coefficients in Eq. (0) are zero for basic variables and nonnegative for nonbasic variables (so the solution is optimal if it is feasible). Go to the feasibility test.

2. Feasibility test: Check to see whether all the basic variables are nonnegative. If they are, then this solution is feasible, and therefore optimal, so stop. Otherwise, go to an iteration.

3. Iteration:

Step 1 Determine the leaving basic variable: Select the negative basic variable that has the largest absolute value.

Step 2 Determine the entering basic variable: Select the nonbasic variable whose coefficient in Eq. (0) reaches zero first as an increasing multiple of the equation containing the leaving basic variable is added to Eq. (0). This selection is made by checking the nonbasic variables with negative coefficients in that equation (the one containing the leaving basic variable) and selecting the one with the smallest absolute value of the ratio of the Eq. (0) coefficient to the coefficient in that equation.

Step 3 Determine the new basic solution: Starting from the current set of equations, solve for the basic variables in terms of the nonbasic variables by Gaussian elimination. When we set the nonbasic variables equal to zero, each basic variable (and Z) equals the new right-hand side of the one equation in which it appears (with a coefficient of +1). Return to the feasibility test.
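To make these three steps concrete, here is a minimal Python sketch of the dual simplex iteration for a maximization problem already written in tableau form. The data layout (a row0 list holding the Eq. (0) coefficients with the current Z value last, one list per constraint row ending in its right-hand side, and a basis list of basic-variable indices) and the name dual_simplex are choices made here for illustration, not notation from the text.

```python
def dual_simplex(row0, rows, basis, tol=1e-9):
    """Dual simplex iterations on a tableau that is already dual feasible.

    row0  -- Eq. (0) coefficients for every variable, with the current Z value last.
    rows  -- one list per constraint row: the coefficients, then the right-hand side.
    basis -- index of the basic variable associated with each row of `rows`.
    """
    while True:
        # Feasibility test: stop once every basic variable (right-hand side) is nonnegative.
        rhs = [row[-1] for row in rows]
        if all(value >= -tol for value in rhs):
            return row0, rows, basis  # feasible, hence optimal

        # Step 1: the leaving basic variable is the negative one with the largest absolute value.
        r = min(range(len(rows)), key=lambda i: rhs[i])

        # Step 2: among columns with a negative coefficient in row r, the entering
        # variable minimizes |Eq. (0) coefficient / coefficient in row r|.
        candidates = [j for j in range(len(row0) - 1) if rows[r][j] < -tol]
        if not candidates:
            raise ValueError("no negative coefficient in the pivot row: no feasible solution exists")
        c = min(candidates, key=lambda j: row0[j] / abs(rows[r][j]))

        # Step 3: Gaussian elimination on the pivot element rows[r][c].
        pivot = rows[r][c]
        rows[r] = [a / pivot for a in rows[r]]
        for i, row in enumerate(rows):
            if i != r and abs(row[c]) > tol:
                rows[i] = [a - row[c] * b for a, b in zip(row, rows[r])]
        row0 = [a - row0[c] * b for a, b in zip(row0, rows[r])]
        basis[r] = c
```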

To fully understand the dual simplex method, you must realize that the method proceeds just as if the simplex method were being applied to the complementary basic solutions in the dual problem. (In fact, this interpretation was the motivation for constructing the method as it is.) Step 1 of an iteration, determining the leaving basic variable, is equivalent to determining the entering basic variable in the dual problem. The negative variable with the largest absolute value corresponds to the negative coefficient with the largest absolute value in Eq. (0) of the dual problem (see Table 6.3). Step 2, determining the entering basic variable, is equivalent to determining the leaving basic variable in the dual problem. The coefficient in Eq. (0) that reaches zero first corresponds to the variable in the dual problem that reaches zero first. The two criteria for stopping the algorithm are also complementary.

An Example

We shall now illustrate the dual simplex method by applying it to the dual problem for the Wyndor Glass Co. problem (see Table 6.1). Normally this method is applied directly to the problem of concern (a primal problem). However, we have chosen this problem because you have already seen the simplex method applied to its dual problem (namely, the primal problem) in Table 4.8, so you can compare the two. To facilitate the comparison, we shall continue to denote the decision variables in the problem being solved by yi rather than xj.

[Table 8.1: the initial, middle, and final sets of equations for the dual simplex method applied to this example.]

Since negative right-hand sides are now allowed, we do not need to introduce artificial variables to be the initial basic variables. Instead, we simply convert the functional constraints to ≤ form and introduce slack variables to play this role. The resulting initial set of equations is that shown for iteration 0 in Table 8.1. Notice that all the coefficients in Eq. (0) are nonnegative, so the solution is optimal if it is feasible.

The initial basic solution is y1 = 0, y2 = 0, y3 = 0, y4 = -3, y5 = -5, with Z = 0, which is not feasible because of the negative values. The leaving basic variable is y5 (5 > 3), and the entering basic variable is y2 (12/2 < 18/2), which leads to the second set of equations, labeled as iteration 1 in Table 8.1. The corresponding basic solution is y1 = 0, y2 = 5/2, y3 = 0, y4 = -3, y5 = 0, which still is not feasible because y4 is negative.


The next leaving basic variable is y4, and the entering basic variable is y3 (6/3 < 4/1), which leads to the final set of equations in Table 8.1. The corresponding basic solution is y1 = 0, y2 = 3/2, y3 = 1, y4 = 0, y5 = 0, which is feasible and therefore optimal.

Notice that the optimal solution for the dual of this problem2 is x1* = 2, x2* = 6, x3* = 2, x4* = 0, x5* = 0, as was obtained in Table 4.8 by the simplex method. We suggest that you now trace through Tables 8.1 and 4.8 simultaneously and compare the complementary steps for the two mirror-image methods.
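As a cross-check on the iterations just traced, the following snippet (a sketch, not the book's Table 8.1 itself) feeds this example to the dual_simplex function given earlier. The dual of the Wyndor problem is to minimize W = 4y1 + 12y2 + 18y3 subject to y1 + 3y3 ≥ 3 and 2y2 + 2y3 ≥ 5 with y1, y2, y3 ≥ 0; here it is treated as maximizing Z = -W, one common convention that keeps the Eq. (0) coefficients nonnegative, so the signs may be carried differently than in the book's tableaux.

```python
# Columns: y1, y2, y3, y4, y5 | right-hand side
row0  = [4.0, 12.0, 18.0, 0.0, 0.0, 0.0]      # Eq. (0): Z + 4y1 + 12y2 + 18y3 = 0
rows  = [[-1.0,  0.0, -3.0, 1.0, 0.0, -3.0],  # -y1 - 3y3 + y4 = -3
         [ 0.0, -2.0, -2.0, 0.0, 1.0, -5.0]]  # -2y2 - 2y3 + y5 = -5
basis = [3, 4]                                 # the slacks y4 and y5 start as the basic variables

row0, rows, basis = dual_simplex(row0, rows, basis)

values = {f"y{j + 1}": 0.0 for j in range(5)}
for r, j in enumerate(basis):
    values[f"y{j + 1}"] = rows[r][-1]
print(values)  # {'y1': 0.0, 'y2': 1.5, 'y3': 1.0, 'y4': 0.0, 'y5': 0.0}

# The pivots match the text: y5 leaves and y2 enters first, then y4 leaves and
# y3 enters, giving W = 12(3/2) + 18(1) = 36, which agrees with the primal
# optimum x1 = 2, x2 = 6 (Z = 3(2) + 5(6) = 36).
```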

As mentioned earlier, an important primary application of the dual simplex method is that it frequently can be used to quickly re-solve a problem when sensitivity analysis results in making small changes in the original model. In particular, if the formerly optimal basic solution is no longer primal feasible (one or more right-hand sides now are negative) but still satisfies the optimality test (no negative coefficients in Row 0), you can immediately apply the dual simplex method by starting with this dual feasible basic solution. For example, this situation arises when a new constraint that violates the formerly optimal solution is added to the original model. To illustrate, suppose that the problem solved in Table 8.1 originally did not include its first functional constraint (y1 + 3y3 ≥ 3).

2 The complementary optimal basic solutions property presented in Sec. 6.3 indicates how to read the optimal solution for the dual problem from row 0 of the final simplex tableau for the primal problem. This same conclusion holds regardless of whether the simplex method or the dual simplex method is used to obtain the final tableau.

After deleting Row 1, the iteration 1 tableau in Table 8.1 shows the optimal solution that results for this smaller problem. Suppose that sensitivity analysis then leads to adding the originally omitted constraint, y1 + 3y3 ≥ 3, which is violated by the original optimal solution since both y1 = 0 and y3 = 0. To find the new optimal solution, this constraint (including its slack variable y4) now would be added as Row 1 of the middle tableau in Table 8.1. Regardless of whether this tableau had been obtained by applying the simplex method or the dual simplex method to obtain the original optimal solution (perhaps after many iterations), applying the dual simplex method to this tableau leads to the new optimal solution in just one iteration.
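To make the one-iteration claim concrete, here is a sketch of that re-optimization using the same dual_simplex helper. The first lines write out what the optimal tableau of the smaller problem (the one without the first constraint) looks like under the sign convention used above, with the not-yet-introduced slack y4 carried along as a zero column; these numbers are reconstructed here for illustration rather than copied from Table 8.1.

```python
# Optimal tableau of the problem WITHOUT the constraint y1 + 3y3 >= 3.
# Columns: y1, y2, y3, y4, y5 | right-hand side
row0  = [4.0, 0.0, 6.0, 0.0, 6.0, -30.0]   # Eq. (0) is still dual feasible
rows  = [[0.0, 1.0, 1.0, 0.0, -0.5, 2.5]]  # y2 + y3 - (1/2)y5 = 5/2
basis = [1]                                 # y2 is the basic variable of this row

# Sensitivity analysis adds the omitted constraint y1 + 3y3 >= 3, i.e.,
# -y1 - 3y3 + y4 = -3 with slack y4, as a new row; y4 becomes basic at -3 < 0.
rows.append([-1.0, 0.0, -3.0, 1.0, 0.0, -3.0])
basis.append(3)

row0, rows, basis = dual_simplex(row0, rows, basis)
print({f"y{j + 1}": rows[r][-1] for r, j in enumerate(basis)})
# {'y2': 1.5, 'y3': 1.0}: the new optimum is reached after a single dual simplex iteration.
```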

If you would like to see another example of applying the dual simplex method, one is provided in the Solved Examples section of the book’s website.
