Nonlinear Programming: Types of Nonlinear Programming Problems

Types of Nonlinear Programming Problems

Nonlinear programming problems come in many different shapes and forms. Unlike linear programming, where the simplex method can handle essentially any problem, no single algorithm can solve all these different types of problems. Instead, algorithms have been developed for various individual classes (special types) of nonlinear programming problems, the most important of which are introduced briefly in this section.

Unconstrained Optimization

Unconstrained optimization problems have no constraints, so the objective is simply to maximize f (x) over all values of x = (x1, x2, . . . , xn). The necessary condition that a particular solution x = x* be optimal when f (x) is a differentiable function is

∂f/∂xj = 0 at x = x*, for j = 1, 2, . . . , n.

When f (x) is a concave function, this condition also is sufficient, so then solving for x* reduces to solving the system of n equations obtained by setting the n partial derivatives equal to zero. Unfortunately, for nonlinear functions f (x), these equations often are going to be nonlinear as well, in which case you are unlikely to be able to solve analytically for their simultaneous solution. What then? Sections 13.4 and 13.5 describe algorithmic search procedures for finding x*, first for n = 1 and then for n > 1. These procedures also play an important role in solving many of the problem types described next, where there are constraints. The reason is that many algorithms for constrained problems are designed so that they can focus on an unconstrained version of the problem during a portion of each iteration.
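To make this concrete, here is a minimal sketch (mine, not the book's) in Python with SciPy, using the separable quadratic objective f (x) = 126x1 - 9x1² + 182x2 - 13x2² that reappears later in this section. Its partial derivatives are 126 - 18x1 and 182 - 26x2, so solving the system analytically gives x* = (7, 7); a gradient-based search recovers the same point by minimizing -f.

```python
# Minimal sketch: maximize a concave differentiable function by minimizing
# its negative. The objective and its gradient are stated analytically, so
# we can check that the search reproduces the root of the gradient system.
import numpy as np
from scipy.optimize import minimize

def neg_f(x):
    x1, x2 = x
    return -(126*x1 - 9*x1**2 + 182*x2 - 13*x2**2)

def neg_grad(x):
    x1, x2 = x
    return np.array([-(126 - 18*x1), -(182 - 26*x2)])

res = minimize(neg_f, x0=[0.0, 0.0], jac=neg_grad, method="BFGS")
print(res.x)            # -> approximately [7. 7.]
print(neg_grad(res.x))  # gradient is essentially zero at the maximizer
```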

When a variable xj does have a nonnegativity constraint xj ≥ 0, the preceding necessary and (perhaps) sufficient condition changes slightly to

∂f/∂xj ≤ 0 at x = x*, if xj* = 0
∂f/∂xj = 0 at x = x*, if xj* > 0

for each such j. This condition is illustrated in Fig. 13.11, where the optimal solution for a problem with a single variable is at x = 0 even though the derivative there is negative rather than zero. Because this example has a concave function to be maximized subject to a nonnegativity constraint, having the derivative less than or equal to 0 at x = 0 is both a necessary and sufficient condition for x = 0 to be optimal.
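As a tiny numeric illustration of this modified condition (with a made-up function, not the one in Fig. 13.11): for the concave f (x) = -(x + 1)², maximized subject to x ≥ 0, the unconstrained maximizer is x = -1, so the constrained optimum sits at x = 0, where the derivative is -2 ≤ 0.

```python
# The modified optimality condition at a boundary point: for a concave
# single-variable function whose unconstrained peak lies at x = -1, the
# optimum under x >= 0 is x = 0, where df/dx <= 0 rather than df/dx = 0.
def df(x):
    return -2.0 * (x + 1.0)   # derivative of f(x) = -(x + 1)**2

x_star = 0.0
assert df(x_star) <= 0.0      # necessary (and here sufficient) condition
```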

A problem that has some nonnegativity constraints but no functional constraints is one special case (m = 0) of the next class of problems.

Linearly Constrained Optimization

Linearly constrained optimization problems are characterized by constraints that completely fit linear programming, so that all the gi(x) constraint functions are linear, but the objective function f (x) is nonlinear. The problem is considerably simplified by having just one nonlinear function to take into account, along with a linear programming feasible region. A number of special algorithms based upon extending the simplex method to consider the nonlinear objective function have been developed.
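As a hedged sketch of what such a problem looks like in code (the objective and constraints below are invented for illustration, and SciPy's general-purpose trust-constr method stands in for the specialized simplex extensions just mentioned):

```python
# A nonlinear concave objective over a linear programming feasible region.
import numpy as np
from scipy.optimize import minimize, LinearConstraint, Bounds

# maximize f(x) = 3*x1 + 5*x2 - x1**2 - x2**2 (nonlinear, concave)
neg_f = lambda x: -(3*x[0] + 5*x[1] - x[0]**2 - x[1]**2)

# linear constraints: x1 + x2 <= 4 and x1 + 3*x2 <= 6, with x >= 0
lin = LinearConstraint([[1, 1], [1, 3]], -np.inf, [4, 6])
res = minimize(neg_f, x0=[0.5, 0.5], method="trust-constr",
               constraints=[lin], bounds=Bounds(0, np.inf))
print(res.x)
```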

One important special case, which we consider next, is quadratic programming.

Quadratic Programming

Quadratic programming problems again have linear constraints, but now the objective function f (x) being maximized must be both quadratic and concave. Thus, apart from the concavity assumption, the only difference between such a problem and a linear programming problem is that some of the terms in the objective function involve the square of a variable or the product of two variables.

Several algorithms have been developed specifically to solve quadratic programming problems very efficiently. Section 13.7 presents one such algorithm that involves a direct extension of the simplex method.

Quadratic programming is very important, partially because such formulations arise naturally in many applications. For example, the problem of portfolio selection with risky securities described in Sec. 13.1 fits into this format. However, another major reason for its importance is that a common approach to solving general linearly constrained optimization problems is to solve a sequence of quadratic programming approximations.
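A hedged sketch in the spirit of that portfolio example (all numbers below are made up): maximize expected return minus a risk penalty, mu·x - beta·xᵀΣx, which is quadratic and concave whenever the covariance matrix Σ is positive semidefinite, subject to a linear budget constraint.

```python
# A tiny quadratic program: concave quadratic objective, linear constraints.
import numpy as np
from scipy.optimize import minimize

mu = np.array([20.0, 30.0])                   # expected returns (assumed)
Sigma = np.array([[4.0, -2.0], [-2.0, 9.0]])  # covariance matrix (assumed)
beta = 0.5                                    # risk-aversion weight (assumed)

neg_obj = lambda x: -(mu @ x - beta * x @ Sigma @ x)
cons = [{"type": "ineq", "fun": lambda x: 10.0 - x.sum()}]  # x1 + x2 <= 10
res = minimize(neg_obj, x0=[1.0, 1.0], method="SLSQP",
               constraints=cons, bounds=[(0, None), (0, None)])
print(res.x)
```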

Convex Programming

Convex programming covers a broad class of problems that actually encompasses as special cases all the preceding types when f (x) is a concave function to be maximized. Continuing to assume the general problem form (including maximization) presented at the beginning of the chapter, the assumptions are that

1. f (x) is a concave function.

2. Each gi(x) is a convex function.

As discussed at the end of Sec. 13.2, these assumptions are enough to ensure that a local maximum is a global maximum. (If the objective were to minimize f (x) instead, subject to either gi(x) ≤ bi or -gi(x) ≥ -bi for i = 1, 2, . . . , m, the first assumption would change to requiring that f (x) be a convex function, since this is what is needed to ensure that a local minimum is a global minimum.) You will see in Sec. 13.6 that the necessary and sufficient conditions for such an optimal solution are a natural generalization of the conditions just given for unconstrained optimization and its extension to include nonnegativity constraints. Section 13.9 then describes algorithmic approaches to solving convex programming problems.
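A minimal convex programming sketch (illustrative data, not from the text): a concave logarithmic objective with a convex quadratic constraint. Because assumptions 1 and 2 hold, the local maximum returned by a local method such as SLSQP is guaranteed to be global.

```python
# Concave objective, convex constraint g(x) = x1**2 + x2**2 <= 9.
import numpy as np
from scipy.optimize import minimize

neg_f = lambda x: -(np.log(1 + x[0]) + 2 * np.log(1 + x[1]))
cons = [{"type": "ineq", "fun": lambda x: 9 - x[0]**2 - x[1]**2}]
res = minimize(neg_f, x0=[1.0, 1.0], method="SLSQP",
               constraints=cons, bounds=[(0, None), (0, None)])
print(res.x)   # a local maximum that is also global here
```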

Separable Programming

Separable programming is a special case of convex programming, where the one additional assumption is that

3. All the f (x) and gi(x) functions are separable functions.

A separable function is a function where each term involves just a single variable, so that the function is separable into a sum of functions of individual variables. For example, if f (x) is a separable function, it can be expressed as

f (x) = 126x1 - 9x1² + 182x2 - 13x2² = f1(x1) + f2(x2),

where f1(x1) = 126x1 - 9x1² and f2(x2) = 182x2 - 13x2² are each a function of a single variable, x1 and x2, respectively. By the same reasoning, you can verify that the objective function considered in Fig. 13.7 also is a separable function.

It is important to distinguish separable programming problems from other convex programming problems, because any separable programming problem can be closely approximated by a linear programming problem, so that the extremely efficient simplex method can be used. This approach is described in Sec. 13.8. (For simplicity, we focus there on the linearly constrained case, where the special approach is needed only on the objective function.)
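A hedged sketch of that approximation (the breakpoints and the single linear constraint are chosen for illustration; Sec. 13.8 gives the full treatment): each variable is split into unit-length segments whose objective coefficients are the slopes of f1 and f2 between consecutive integer breakpoints. Because each fj is concave, the slopes decrease, so the simplex method automatically fills the segments in the correct order.

```python
# Piecewise-linear LP approximation of the separable concave objective
# f(x) = 126*x1 - 9*x1**2 + 182*x2 - 13*x2**2 from the example above.
import numpy as np
from scipy.optimize import linprog

f1 = lambda x: 126*x - 9*x**2
f2 = lambda x: 182*x - 13*x**2
slopes1 = [f1(k+1) - f1(k) for k in range(4)]  # segments of x1 on [0, 4]
slopes2 = [f2(k+1) - f2(k) for k in range(6)]  # segments of x2 on [0, 6]

c = -np.array(slopes1 + slopes2)               # maximize -> negate
A_ub = [[3]*4 + [2]*6]                         # assumed: 3*x1 + 2*x2 <= 18
b_ub = [18]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 10)
x1, x2 = res.x[:4].sum(), res.x[4:].sum()
print(x1, x2)   # -> approximately (8/3, 5), which matches the true optimum here
```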

Nonconvex Programming

Nonconvex programming encompasses all nonlinear programming problems that do not satisfy the assumptions of convex programming. Now, even if you are successful in finding a local maximum, there is no assurance that it also will be a global maximum. Therefore, there is no algorithm that will find an optimal solution for all such problems. However, there do exist some algorithms that are relatively well suited for exploring various parts of the feasible region and perhaps finding a global maximum in the process. We describe this approach in Sec. 13.10. Section 13.10 also will introduce two global optimizers (available with LINGO and MPL) for finding an optimal solution for nonconvex programming problems of moderate size, as well as a search procedure (available with both the standard Excel Solver and the ASPE Solver) that generally will find a near-optimal solution for rather large problems.
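The solvers named above are specific products, but the underlying idea of exploring many parts of the feasible region can be illustrated with a simple multistart procedure (the wiggly one-variable function below is invented): restart a local optimizer from many random points and keep the best local maximum found.

```python
# Multistart local search as a rough stand-in for a global optimizer.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
neg_f = lambda x: -(np.sin(3*x[0]) + np.cos(2*x[0]) - 0.1*x[0]**2)

best = None
for _ in range(30):                    # 30 random starting points
    x0 = rng.uniform(-5, 5, size=1)
    res = minimize(neg_f, x0, method="BFGS")
    if best is None or res.fun < best.fun:
        best = res
print(best.x, -best.fun)   # the best local maximum encountered
```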

Certain specific types of nonconvex programming problems can be solved without great difficulty by special methods. Two especially important such types are discussed briefly next.

Geometric Programming

When we apply nonlinear programming to engineering design problems, as well as certain economics and statistics problems, the objective function and the constraint functions frequently take the form

g(x) = c1P1(x) + c2P2(x) + . . . + cNPN(x),

where

Pi(x) = x1^ai1 x2^ai2 . . . xn^ain, for i = 1, 2, . . . , N.

In such cases, the ci and aij typically represent physical constants, and the xj are design variables. These functions generally are neither convex nor concave, so the techniques of convex programming cannot be applied directly to these geometric programming problems. However, there is one important case where the problem can be transformed to an equivalent convex programming problem. This case is where all the ci coefficients in each function are strictly positive, so that the functions are generalized positive polynomials (now called posynomials) and the objective function is to be minimized. The equivalent convex programming problem with decision variables y1, y2, . . . , yn is then obtained by setting

xj = e^yj, for j = 1, 2, . . . , n

throughout the original model, so now a convex programming algorithm can be applied. Alternative solution procedures also have been developed for solving these posynomial programming problems, as well as for geometric programming problems of other types.
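A small numeric illustration of this transformation (the posynomial below is invented): each term ci x1^ai1 . . . xn^ain becomes ci exp(ai1 y1 + . . . + ain yn), an exponential of an affine function, so the transformed objective is convex and any local minimum found is global.

```python
# Minimize the posynomial f(x) = 1/(x1*x2) + x1 + x2 over x > 0 by
# substituting x_j = e^{y_j} and minimizing the resulting convex function.
import numpy as np
from scipy.optimize import minimize

def f_in_y(y):
    y1, y2 = y
    return np.exp(-y1 - y2) + np.exp(y1) + np.exp(y2)

res = minimize(f_in_y, x0=[0.5, -0.5], method="BFGS")
print(np.exp(res.x))   # -> approximately [1. 1.], where f = 3
```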

Fractional Programming

Suppose that the objective function is in the form of a fraction, i.e., the ratio of two functions,

f (x) = f1(x)/f2(x).

Such fractional programming problems arise, e.g., when one is maximizing the ratio of output to person-hours expended (productivity), or profit to capital expended (rate of return), or expected value to standard deviation of some measure of performance for an investment portfolio (return/risk). Some special solution procedures have been developed for certain forms of f1(x) and f2(x).

When it can be done, the most straightforward approach to solving a fractional programming problem is to transform it to an equivalent problem of a standard type for which effective solution procedures already are available. To illustrate, suppose that f (x) is of the linear fractional programming form

f (x) = (cx + c0)/(dx + d0),

where c and d are row vectors, c0 and d0 are scalars, and x is the column vector of decision variables. Assume also that the gi(x) constraint functions are linear, so that the constraints in matrix form are Ax ≤ b and x ≥ 0, and that dx + d0 > 0 throughout the feasible region. The problem can then be transformed into an equivalent linear programming problem by letting

y = x/(dx + d0) and t = 1/(dx + d0),

so that x = y/t. The result is

Maximize Z = cy + c0t,

subject to

Ay - bt ≤ 0,
dy + d0t = 1,

and

y ≥ 0, t ≥ 0,

which can be solved by the simplex method. More generally, the same kind of transformation can be used to convert a fractional programming problem with concave f1(x), convex f2(x), and convex gi(x) to an equivalent convex programming problem.
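A hedged sketch of the linear fractional transformation in code (the problem data are invented; dx + d0 > 0 holds on the feasible region here since d ≥ 0 and d0 > 0):

```python
# Maximize (c@x + c0) / (d@x + d0) subject to A@x <= b, x >= 0, by solving
# the equivalent LP in (y, t):  max c@y + c0*t  s.t.  A@y - b*t <= 0,
# d@y + d0*t = 1, y >= 0, t >= 0;  then recover x = y/t.
import numpy as np
from scipy.optimize import linprog

c, c0 = np.array([3.0, 2.0]), 1.0
d, d0 = np.array([1.0, 1.0]), 2.0
A, b = np.array([[1.0, 2.0], [2.0, 1.0]]), np.array([8.0, 8.0])

obj = -np.concatenate([c, [c0]])           # linprog minimizes
A_ub = np.hstack([A, -b.reshape(-1, 1)])   # A@y - b*t <= 0
b_ub = np.zeros(len(b))
A_eq = [np.concatenate([d, [d0]])]         # d@y + d0*t = 1
b_eq = [1.0]
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3)
y, t = res.x[:2], res.x[2]
print(y / t)   # optimal x for the original fractional problem
```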

The Complementarity Problem

When we deal with quadratic programming in Sec. 13.7, you will see one example of how solving certain nonlinear programming problems can be reduced to solving the complementarity problem. Given variables w1, w2, . . . , wp and z1, z2, . . . , zp, the complementarity problem is to find a feasible solution for the set of constraints

w = F(z), w ≥ 0, z ≥ 0

that also satisfies the complementarity constraint

w^T z = 0.

Here, w and z are column vectors, F is a given vector-valued function, and the superscript T denotes the transpose (see Appendix 4). The problem has no objective function, so technically it is not a full-fledged nonlinear programming problem. It is called the complementarity problem because of the complementary relationships that either

wi = 0 or zi = 0 (or both) for each i = 1, 2, . . . , p.

An important special case is the linear complementarity problem, where

F (z) = q + Mz,

where q is a given column vector and M is a given p × p matrix. Efficient algorithms have been developed for solving this problem under suitable assumptions about the properties of the matrix M. One type involves pivoting from one basic feasible (BF) solution to the next, much like the simplex method for linear programming.
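For a problem this small, even brute force over the complementary cases works, which makes the structure easy to see (the data below are invented, and this enumeration is a teaching device, not the pivoting method just described):

```python
# Solve a tiny linear complementarity problem w = q + M@z, w >= 0, z >= 0,
# w.T @ z = 0 by trying each choice of which of w_i, z_i is forced to zero.
import itertools
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-5.0, -6.0])
p = len(q)

for zero_z in itertools.product([True, False], repeat=p):
    z = np.zeros(p)
    free = [i for i in range(p) if not zero_z[i]]  # here z_i free, w_i = 0
    if free:
        # enforce w_i = 0 on free indices: M[free, free] @ z[free] = -q[free]
        z[free] = np.linalg.solve(M[np.ix_(free, free)], -q[free])
    w = q + M @ z
    if (w >= -1e-9).all() and (z >= -1e-9).all():
        print("w =", w, "z =", z)   # complementary and nonnegative
        break
```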

In addition to having applications in nonlinear programming, complementarity problems have applications in game theory, economic equilibrium problems, and engineering equilibrium problems.
