Nonlinear Programming: Quadratic Programming

Quadratic Programming

As indicated in Sec. 13.3, the quadratic programming problem differs from the linear programming problem only in that the objective function also includes xj^2 and xi xj (i ≠ j) terms. Thus, if we use matrix notation like that introduced at the beginning of Sec. 5.2, the problem is to find x so as to

Maximize f(x) = cx - (1/2)xTQx,

subject to

Ax ≤ b and x ≥ 0,

where the objective function is concave, c is a row vector, x and b are column vectors, Q and A are matrices, and the superscript T denotes the transpose (see Appendix 4). The qij (elements of Q) are given constants such that qij = qji (which is the reason for the factor of 1/2 in the objective function). By performing the indicated vector and matrix multiplications, the objective function then is expressed in terms of these qij, the cj (elements of c), and the variables as follows:

f(x) = Σj cj xj - (1/2) Σi Σj qij xi xj.

To illustrate this notation, consider the following example of a quadratic programming problem:

Maximize f(x1, x2) = 15x1 + 30x2 + 4x1x2 - 2x1^2 - 4x2^2,

subject to

x1 + 2x2 ≤ 30

and

x1 ≥ 0, x2 ≥ 0.


As can be verified from the results in Appendix 2 (see Prob. 13.7-1a), the objective function is strictly concave, so this is indeed a quadratic programming problem. In this case,

c = [15  30],   x = [x1, x2]T,   Q = [4  -4; -4  8],   A = [1  2],   b = [30],

so that

cx - (1/2)xTQx = 15x1 + 30x2 - (1/2)(4x1^2 - 4x1x2 - 4x2x1 + 8x2^2)
               = 15x1 + 30x2 + 4x1x2 - 2x1^2 - 4x2^2

in the objective function. The fact that q12 = q21 = -4 illustrates that -q12 (equivalently, -q21), which equals 4, gives the total coefficient of the product of x1 and x2.
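As a quick numerical check of this matrix notation (a sketch I am adding, not part of the original text; it assumes NumPy is available), the code below evaluates cx - (1/2)xTQx with the c and Q above and confirms that it matches the explicit objective function at several points:

```python
import numpy as np

# Data for the example: f(x) = cx - (1/2) x^T Q x
c = np.array([15.0, 30.0])
Q = np.array([[4.0, -4.0],
              [-4.0, 8.0]])   # symmetric, q12 = q21 = -4

def f_matrix(x):
    """Objective in matrix form, cx - (1/2) x^T Q x."""
    return float(c @ x - 0.5 * x @ Q @ x)

def f_explicit(x1, x2):
    """Objective written out term by term."""
    return 15*x1 + 30*x2 + 4*x1*x2 - 2*x1**2 - 4*x2**2

# The two forms agree at arbitrary points
for x1, x2 in [(0, 0), (1, 2), (12, 9), (3.5, 7.25)]:
    assert abs(f_matrix(np.array([x1, x2])) - f_explicit(x1, x2)) < 1e-9
```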

Several algorithms have been developed for the quadratic programming problem under the assumption that the objective function is a concave function. (The results in Appendix 2 make it easy to check whether this assumption holds when the objective function has only two variables. With more than two variables, another way to verify that the objective function is concave is to verify the equivalent condition that

xTQx ≥ 0

for all x, that is, Q is a positive semidefinite matrix.) We shall describe one15 of these algorithms, the modified simplex method, which has been quite popular because it requires using only the simplex method with a slight modification. The key to this approach is to construct the KKT conditions from the preceding section and then to reexpress these conditions in a convenient form that closely resembles linear programming. Therefore, before describing the algorithm, we shall develop this convenient form.
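For more than two variables, this positive-semidefiniteness condition can be checked numerically. A small sketch (my addition, assuming NumPy) tests the eigenvalues of Q, which are all nonnegative exactly when Q is positive semidefinite:

```python
import numpy as np

Q = np.array([[4.0, -4.0],
              [-4.0, 8.0]])

# Q is positive semidefinite iff every eigenvalue is >= 0;
# eigvalsh is the appropriate routine because Q is symmetric.
eigenvalues = np.linalg.eigvalsh(Q)
is_psd = bool(np.all(eigenvalues >= 0))

# Here both eigenvalues are strictly positive, so Q is in fact positive
# definite and the objective cx - (1/2) x^T Q x is strictly concave.
assert is_psd
```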

The KKT Conditions for Quadratic Programming

For concreteness, let us first consider the above example. Starting with the form given in the preceding section, its KKT conditions are the following:

1(j = 1).  15 + 4x2 - 4x1 - u1 ≤ 0
2(j = 1).  x1(15 + 4x2 - 4x1 - u1) = 0
1(j = 2).  30 + 4x1 - 8x2 - 2u1 ≤ 0
2(j = 2).  x2(30 + 4x1 - 8x2 - 2u1) = 0
3.         x1 + 2x2 - 30 ≤ 0
4.         u1(x1 + 2x2 - 30) = 0
5.         x1 ≥ 0, x2 ≥ 0
6.         u1 ≥ 0

To reexpress these conditions in the desired convenient form, move the constants to the right-hand side and introduce nonnegative slack variables y1, y2, and v1 to convert the inequalities in conditions 1 and 3 into equations:

4x1 - 4x2 + u1 - y1 = 15
-4x1 + 8x2 + 2u1 - y2 = 30
x1 + 2x2 + v1 = 30.

Conditions 2 and 4 then become x1y1 = 0, x2y2 = 0, and u1v1 = 0.

15P. Wolfe, “The Simplex Method for Quadratic Programming,” Econometrica, 27: 382–398, 1959. This paper develops both a short form and a long form of the algorithm. We present a version of the short form, which assumes further that either c = 0 or the objective function is strictly concave.

In matrix notation, the same conversion applies to the general quadratic programming problem: introducing column vectors of slack variables y and v yields the equivalent conditions

Qx + ATu - y = cT
Ax + v = b
x ≥ 0, u ≥ 0, y ≥ 0, v ≥ 0,

together with the complementarity constraint

xTy + uTv = 0,

where the elements of the column vector u are the ui of the preceding section and the el- ements of the column vectors y and v are slack variables.

Because the objective function of the original problem is assumed to be concave and because the constraint functions are linear and therefore convex, the corollary to the theorem of Sec. 13.6 applies. Thus, x is optimal if and only if there exist values of y, u, and v such that all four vectors together satisfy all these conditions. The original problem is thereby reduced to the equivalent problem of finding a feasible solution to these constraints.

It is of interest to note that this equivalent problem is one example of the linear complementarity problem introduced in Sec. 13.3 (see Prob. 13.3-6), and that a key constraint for the linear complementarity problem is its complementarity constraint.

The Modified Simplex Method

The modified simplex method exploits the key fact that, with the exception of the complementarity constraint, the KKT conditions in the convenient form obtained above are nothing more than linear programming constraints. Furthermore, the complementarity constraint simply implies that it is not permissible for both complementary variables of any pair to be (nondegenerate) basic variables (the only variables > 0) when (nondegenerate) BF solutions are considered. Therefore, the problem reduces to finding an initial BF solution to any linear programming problem that has these constraints, subject to this additional restriction on the identity of the basic variables. (This initial BF solution may be the only feasible solution in this case.)
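This structure can be made concrete for the example of this section. Each of the three complementary pairs (x1, y1), (x2, y2), (u1, v1) contributes at most one basic variable, so one way to see that a complementarity-respecting basic solution exists here (and is unique) is simply to enumerate the 2^3 choices of one variable per pair and solve the resulting 3 × 3 system. The brute-force sketch below (my illustration, assuming NumPy; it is not the modified simplex method itself) does exactly that with the KKT equations of the example:

```python
import itertools
import numpy as np

# KKT equations of the example in the convenient form;
# columns correspond to (x1, x2, u1, y1, y2, v1)
A = np.array([[ 4.0, -4.0, 1.0, -1.0,  0.0, 0.0],
              [-4.0,  8.0, 2.0,  0.0, -1.0, 0.0],
              [ 1.0,  2.0, 0.0,  0.0,  0.0, 1.0]])
b = np.array([15.0, 30.0, 30.0])

# Complementary pairs, by column index: (x1,y1), (x2,y2), (u1,v1)
pairs = [(0, 3), (1, 4), (2, 5)]

solutions = []
for cols in itertools.product(*pairs):    # pick one variable per pair
    B = A[:, list(cols)]
    if abs(np.linalg.det(B)) < 1e-9:
        continue                          # singular basis, skip
    basic = np.linalg.solve(B, b)
    if np.all(basic >= -1e-9):            # keep only nonnegative solutions
        full = np.zeros(6)
        full[list(cols)] = basic
        solutions.append(full)

# Exactly one choice turns out to be feasible: x1 = 12, x2 = 9, u1 = 3
```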

As we discussed in Sec. 4.6, finding such an initial BF solution is relatively straightforward. In the simple case where cT ≤ 0 (unlikely) and b ≥ 0, the initial basic variables are the elements of y and v (multiply through the first set of equations by -1), so that the desired solution is x = 0, u = 0, y = -cT, v = b. Otherwise, you need to revise the problem by introducing an artificial variable into each of the equations where cj > 0 (add the variable on the left) or bi < 0 (subtract the variable on the left and then multiply through by -1) in order to use these artificial variables (call them z1, z2, and so on) as initial basic variables for the revised problem. (Note that this choice of initial basic variables satisfies the complementarity constraint, because as nonbasic variables x = 0 and u = 0 automatically.)

Next, use phase 1 of the two-phase method (see Sec. 4.6) to find a BF solution for the real problem; i.e., apply the simplex method (with one modification) to the following linear programming problem

Minimize Z = z1 + z2 + · · · ,

subject to the linear programming constraints obtained from the KKT conditions, but with these artificial variables included.

The one modification in the simplex method is the following change in the procedure for selecting an entering basic variable.

Restricted-Entry Rule: When you are choosing an entering basic variable, exclude from consideration any nonbasic variable whose complementary variable already is a basic variable; the choice should be made from the other nonbasic variables according to the usual criterion for the simplex method.

This rule keeps the complementarity constraint satisfied throughout the course of the algorithm. When an optimal solution with Z = 0 is obtained for the phase 1 problem (so that all the artificial variables have been driven to zero), the values of x1, x2, . . . , xn in this solution constitute the desired optimal solution for the original quadratic programming problem.

The starting point for solving this example is its KKT conditions in the convenient form obtained earlier in the section. After the needed artificial variables are introduced, the linear programming problem to be addressed explicitly by the modified simplex method then is

Minimize Z = z1 + z2,

subject to

4x1 - 4x2 + u1 - y1 + z1 = 15
-4x1 + 8x2 + 2u1 - y2 + z2 = 30
x1 + 2x2 + v1 = 30

and

x1 ≥ 0, x2 ≥ 0, u1 ≥ 0, y1 ≥ 0, y2 ≥ 0, v1 ≥ 0, z1 ≥ 0, z2 ≥ 0.

The complementarity constraint

x1y1 + x2y2 + u1v1 = 0

is not included explicitly, because the algorithm automatically enforces this constraint through the restricted-entry rule. In particular, for each of the three pairs of complementary variables—(x1, y1), (x2, y2), (u1, v1)—whenever one of the two variables already is a basic variable, the other variable is excluded as a candidate for the entering basic variable. Remember that the only nonzero variables are basic variables. Because the initial set of basic variables for the linear programming problem—z1, z2, v1—gives an initial BF solution that satisfies the complementarity constraint, there is no way that this constraint can be violated by any subsequent BF solution.

Table 13.5 shows the results of applying the modified simplex method to this problem. The first simplex tableau exhibits the initial system of equations after converting from minimizing Z to maximizing -Z and algebraically eliminating the initial basic variables from Eq. (0), just as was done for the radiation therapy example in Sec. 4.6. The three iterations proceed just as for the regular simplex method, except for eliminating certain candidates for the entering basic variable because of the restricted-entry rule. In the first tableau, u1 is eliminated as a candidate because its complementary variable (v1) already is a basic variable (but x2 would have been chosen anyway because -4 < -3). In the second tableau, both u1 and y2 are eliminated as candidates (because v1 and x2 are basic variables), so x1 automatically is chosen as the only candidate with a negative coefficient in row 0 (whereas the regular simplex method would have permitted choosing either x1 or u1 because they are tied for having the largest negative coefficient). In the third tableau, both y1 and y2 are eliminated (because x1 and x2 are basic variables). However, u1 is not eliminated because v1 no longer is a basic variable, so u1 is chosen as the entering basic variable in the usual way.

The resulting optimal solution for this phase 1 problem is x1 = 12, x2 = 9, u1 = 3, with the rest of the variables zero. (Problem 13.7-1c asks you to verify that this solution is optimal by showing that x1 = 12, x2 = 9, u1 = 3 satisfy the KKT conditions for the original problem when they are written in the form given in Sec. 13.6.) Therefore, the optimal solution for the quadratic programming problem (which includes only the x1 and x2 variables) is (x1, x2) = (12, 9).
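As a direct numerical check (a small sketch I am adding; the original text leaves the verification as Prob. 13.7-1c), one can substitute x1 = 12, x2 = 9, u1 = 3 into the KKT conditions stated earlier for this example:

```python
x1, x2, u1 = 12.0, 9.0, 3.0

# Condition 1 (stationarity with multiplier u1), for j = 1 and j = 2
g1 = 15 + 4*x2 - 4*x1 - u1        # must be <= 0
g2 = 30 + 4*x1 - 8*x2 - 2*u1      # must be <= 0
# Condition 3 (feasibility of the functional constraint)
s = x1 + 2*x2 - 30                # must be <= 0

assert g1 <= 1e-9 and g2 <= 1e-9 and s <= 1e-9
# Conditions 2 and 4 (complementary slackness)
assert abs(x1 * g1) < 1e-9 and abs(x2 * g2) < 1e-9 and abs(u1 * s) < 1e-9
# Conditions 5 and 6 (nonnegativity)
assert x1 >= 0 and x2 >= 0 and u1 >= 0
```

All three inequalities in fact hold with equality here, which is why the complementary slackness products vanish even though x1, x2, and u1 are all positive.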

[Table 13.5 (the simplex tableaux for the modified simplex method applied to this example) appears here.]

The Solved Examples section of the book’s website includes another example that illustrates the application of the modified simplex method to a quadratic programming problem. The KKT conditions also are applied to this example.

Some Software Options

Your IOR Tutorial includes an interactive procedure for the modified simplex method to help you learn this algorithm efficiently. In addition, Excel, MPL/Solvers, LINGO, and LINDO all can solve quadratic programming problems.

The procedure for using Excel is almost the same as with linear programming. The one crucial difference is that the equation entered for the cell that contains the value of the objective function now needs to be a quadratic equation. To illustrate, consider again the example introduced at the beginning of the section, which has the objective function

f(x1, x2) = 15x1 + 30x2 + 4x1*x2 - 2(x1^2) - 4(x2^2),

where the symbol ^2 indicates an exponent of 2.

The standard Excel Solver does not have a solving method that is specifically for quadratic programming. However, it does include a solving method called GRG Nonlinear for solving convex programming problems. As pointed out in Sec. 13.3, quadratic programming is a special case of convex programming. Therefore, GRG Nonlinear should be chosen as the solving method in the Solver Parameters dialog box (along with the option of Make Variables Nonnegative) instead of the LP Simplex solving method that always was chosen for solving linear programming problems. The ASPE Solver includes this same solving method, but it also has another one called Quadratic that should be chosen instead because it has been designed specifically to solve quadratic programming problems very efficiently.
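In the same spirit as the GRG Nonlinear approach of treating the problem as a general convex program, the example can be handed to any nonlinear solver. Here is a sketch using SciPy's SLSQP method (my choice of solver for illustration; SciPy is assumed to be available) that recovers the optimal solution found earlier:

```python
import numpy as np
from scipy.optimize import minimize

# Negate the concave objective so that minimizing it maximizes f
def neg_f(x):
    x1, x2 = x
    return -(15*x1 + 30*x2 + 4*x1*x2 - 2*x1**2 - 4*x2**2)

res = minimize(
    neg_f,
    x0=np.array([0.0, 0.0]),
    method="SLSQP",
    bounds=[(0, None), (0, None)],                       # x1 >= 0, x2 >= 0
    constraints=[{"type": "ineq",
                  "fun": lambda x: 30 - x[0] - 2*x[1]}]  # x1 + 2x2 <= 30
)
# res.x should be close to the optimal solution (12, 9)
```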

When using MPL/Solvers, you should set the model type to Quadratic by adding the following statement at the beginning of the model file.

OPTIONS

Model Type = Quadratic

(Alternatively, you can select the Quadratic Models option from the MPL Language option dialog box, but then you will need to remember to change the setting when dealing with linear programming problems again.) Otherwise, the procedure is the same as with linear programming except that the expression for the objective function now is a quadratic function. Thus, for the example, the objective function would be expressed as

15x1 + 30x2 + 4x1*x2 - 2(x1^2) - 4(x2^2).

Two of the elite solvers included in the student version of MPL—CPLEX and GUROBI—include a special algorithm for solving quadratic programming problems.

This objective function would be expressed in this same way for a LINGO model. LINGO/LINDO then will automatically call its nonlinear solver to solve the model.

In fact, the Excel, MPL/Solvers, and LINGO/LINDO files for this chapter in your OR Courseware all demonstrate their procedures by showing the details for how these software packages set up and solve this example.
