Nonlinear Programming: Graphical Illustration of Nonlinear Programming Problems


When a nonlinear programming problem has just one or two variables, it can be represented graphically much like the Wyndor Glass Co. example for linear programming in Sec. 3.1. Because such a graphical representation gives considerable insight into the properties of optimal solutions for linear and nonlinear programming, let us look at a few examples. To highlight the difference between linear and nonlinear programming, we shall use some nonlinear variations of the Wyndor Glass Co. problem.

Figure 13.5 shows what happens to this problem if the only changes in the model shown in Sec. 3.1 are that both the second and the third functional constraints are replaced by the single nonlinear constraint 9x1² + 5x2² ≤ 216. Compare Fig. 13.5 with Fig. 3.3. The optimal solution still happens to be (x1, x2) = (2, 6). Furthermore, it still lies on the boundary of the feasible region. However, it is not a corner-point feasible (CPF) solution. The optimal solution could have been a CPF solution with a different objective function (check Z = 3x1 + x2), but the fact that it need not be one means that we no longer have the tremendous simplification used in linear programming of limiting the search for an optimal solution to just the CPF solutions.
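As a quick numerical check of this boundary optimum, the sketch below solves the variant with a general-purpose nonlinear solver. The use of scipy, the SLSQP method, and the starting point are our own assumptions (the text itself works graphically); the objective is the original Wyndor one, Z = 3x1 + 5x2.

```python
from scipy.optimize import minimize

# Maximize Z = 3*x1 + 5*x2 subject to x1 <= 4, 9*x1**2 + 5*x2**2 <= 216,
# and x1, x2 >= 0.  scipy minimizes, so we negate the objective.
objective = lambda x: -(3 * x[0] + 5 * x[1])

constraints = [
    {"type": "ineq", "fun": lambda x: 4 - x[0]},                       # x1 <= 4
    {"type": "ineq", "fun": lambda x: 216 - 9 * x[0]**2 - 5 * x[1]**2},
]

result = minimize(objective, x0=[1.0, 1.0], method="SLSQP",
                  bounds=[(0, None), (0, None)], constraints=constraints)
print(result.x)  # approximately (2, 6), giving Z = 36
```

Note that the solution lies on the curved boundary 9x1² + 5x2² = 216 but is not a corner point, in agreement with the discussion above.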



Now suppose that the linear constraints of the original problem are retained but the objective function is changed to Z = 54x1 - 9x1² + 78x2 - 13x2². Then Fig. 13.7 illustrates that the optimal solution turns out to be (x1, x2) = (3, 3), which lies inside the boundary of the feasible region. (You can check that this solution is optimal by using calculus to derive it as the unconstrained global maximum; because it also satisfies the constraints, it must be optimal for the constrained problem.) Therefore, a general algorithm for solving similar problems needs to consider all solutions in the feasible region, not just those on the boundary.
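The calculus check mentioned above can be written out directly. This is a sketch assuming the objective Z = 54x1 - 9x1² + 78x2 - 13x2² used for this variant; the numerical verification itself is our own.

```python
# Assumed objective for this variant: Z = 54*x1 - 9*x1**2 + 78*x2 - 13*x2**2.
# Setting both partial derivatives to zero:
#   dZ/dx1 = 54 - 18*x1 = 0  ->  x1 = 3
#   dZ/dx2 = 78 - 26*x2 = 0  ->  x2 = 3
x1, x2 = 3.0, 3.0
assert 54 - 18 * x1 == 0 and 78 - 26 * x2 == 0

# The Hessian is diag(-18, -26), which is negative definite, so (3, 3) is the
# unconstrained global maximum; it also satisfies every constraint, so it is
# optimal for the constrained problem as well.
z = 54 * x1 - 9 * x1**2 + 78 * x2 - 13 * x2**2
print(z)  # 198.0
```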

Another complication that arises in nonlinear programming is that a local maximum need not be a global maximum (the overall optimal solution). For example, consider the function of a single variable plotted in Fig. 13.8. Over the interval 0 ≤ x ≤ 5, this function has three local maxima (x = 0, x = 2, and x = 4), but only one of these (x = 4) is a global maximum. (Similarly, there are local minima at x = 1, 3, and 5, but only x = 5 is a global minimum.)

Nonlinear programming algorithms generally are unable to distinguish between a local maximum and a global maximum (except by finding another better local maximum). Therefore, it becomes crucial to know the conditions under which any local maximum is guaranteed to be a global maximum over the feasible region. You may recall from calculus that when you maximize an ordinary (doubly differentiable) function of a single variable f(x) without any constraints, this guarantee can be given when d²f/dx² ≤ 0 for all x.
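The starting-point dependence of local search can be seen with a small experiment. The function below is a hypothetical one of our own (not the function plotted in Fig. 13.8, whose formula is not given in the text).

```python
import numpy as np
from scipy.optimize import minimize

# A hypothetical objective with many local maxima: f(x) = sin(x) + 0.1*x.
f = lambda x: np.sin(x[0]) + 0.1 * x[0]

# A gradient-based method simply climbs to whichever peak is nearest,
# so different starting points yield different "optima".
peak_a = minimize(lambda x: -f(x), x0=[1.0]).x[0]  # converges near x = 1.67
peak_b = minimize(lambda x: -f(x), x0=[8.0]).x[0]  # converges near x = 7.95

# Neither run can certify that its peak is the global maximum; each only
# knows that no nearby point is better.
print(peak_a, peak_b)
```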



Such a function that is always “curving downward” (or not curving at all) is called a concave function.4 Similarly, if ≤ is replaced by ≥, so that the function is always “curving upward” (or not curving at all), it is called a convex function.5 (Thus, a linear function is both concave and convex.) See Fig. 13.9 for examples. Then note that Fig. 13.8 illustrates a function that is neither concave nor convex because it alternates between curving upward and curving downward.

Functions of multiple variables also can be characterized as concave or convex if they always curve downward or curve upward. These intuitive definitions are restated in precise terms, along with further elaboration on these concepts, in Appendix 2. (Concave and convex functions play a fundamental role in nonlinear programming, so if you are not very familiar with such functions, we suggest that you read further in Appendix 2.) Appendix 2 also provides a convenient test for checking whether a function of two variables is concave, convex, or neither.

Here is a convenient way of checking this for a function of more than two variables when the function consists of a sum of smaller functions of just one or two variables each.

4Concave functions sometimes are referred to as concave downward.

5Convex functions sometimes are referred to as concave upward.

If each smaller function is concave, then the overall function is concave. Similarly, the overall function is convex if each smaller function is convex.

To illustrate, consider the function

f(x1, x2, x3) = [4x1 - x1²] + [-(x2 - x3)²]

which is the sum of the two smaller functions given in square brackets. The first smaller function 4x1 - x1² is a function of the single variable x1, so it can be found to be concave by noting that its second derivative (-2) is negative. The second smaller function -(x2 - x3)² is a function of just x2 and x3, so the test for functions of two variables given in Appendix 2 is applicable. In fact, Appendix 2 uses this particular function to illustrate the test and finds that the function is concave. Because both smaller functions are concave, the overall function f(x1, x2, x3) must be concave.
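The same conclusion can be reached through the Hessian test: f is concave exactly when its Hessian matrix is negative semidefinite everywhere, and for a quadratic function such as this one the Hessian is constant. The numpy check below is our own sketch, not part of the text.

```python
import numpy as np

# Hessian of f(x1, x2, x3) = 4*x1 - x1**2 - (x2 - x3)**2.
# Second partials: d2f/dx1^2 = -2; d2f/dx2^2 = d2f/dx3^2 = -2;
# d2f/dx2dx3 = +2; all mixed partials involving x1 are 0.
H = np.array([[-2.0,  0.0,  0.0],
              [ 0.0, -2.0,  2.0],
              [ 0.0,  2.0, -2.0]])

eigenvalues = np.linalg.eigvalsh(H)
print(eigenvalues)  # [-4., -2., 0.]: all <= 0, so f is concave
```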

If a nonlinear programming problem has no constraints, the objective function being concave guarantees that a local maximum is a global maximum. (Similarly, the objective function being convex ensures that a local minimum is a global minimum.) If there are constraints, then one more condition will provide this guarantee, namely, that the feasible region is a convex set. For this reason, convex sets play a key role in nonlinear programming.

As discussed in Appendix 2, a convex set is simply a set of points such that, for each pair of points in the collection, the entire line segment joining these two points is also in the collection. Thus, the feasible region for the original Wyndor Glass Co. problem (see Fig. 13.6 or 13.7) is a convex set. In fact, the feasible region for any linear programming problem is a convex set. Similarly, the feasible region in Fig. 13.5 is a convex set.

In general, the feasible region for a nonlinear programming problem is a convex set whenever all the gi(x) [for the constraints gi(x) ≤ bi] are convex functions. For the example of Fig. 13.5, both of its gi(x) are convex functions, since g1(x) = x1 (a linear function is automatically both concave and convex) and g2(x) = 9x1² + 5x2² (both 9x1² and 5x2² are convex functions, so their sum is a convex function). These two convex gi(x) lead to the feasible region of Fig. 13.5 being a convex set.
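The convexity of these two constraint functions can be confirmed with the same kind of Hessian check, this time requiring positive semidefiniteness; the numpy sketch below is our own illustration.

```python
import numpy as np

# Hessians of the constraint functions in Fig. 13.5 (both are constant):
#   g1(x) = x1                 ->  the zero matrix (linear functions are flat)
#   g2(x) = 9*x1**2 + 5*x2**2  ->  diag(18, 10)
H1 = np.zeros((2, 2))
H2 = np.diag([18.0, 10.0])

# A function is convex when its Hessian is positive semidefinite,
# i.e. no eigenvalue is negative.
for H in (H1, H2):
    assert np.all(np.linalg.eigvalsh(H) >= -1e-9)
print("g1 and g2 are both convex")
```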

Now let’s see what happens when just one of these gi(x) is a concave function instead. In particular, suppose that the only changes in the original Wyndor Glass Co. example are that the second and third functional constraints are replaced by 2x2 ≤ 14 and 8x1 - x1² + 14x2 - x2² ≤ 49. Then the feasible region shown in Fig. 13.10 is not a convex set. Why? Because this feasible region contains pairs of points, for example, (0, 7) and (4, 3), such that part of the line segment joining these two points is not in the feasible region. Consequently, we cannot guarantee that a local maximum is a global maximum. In fact, this example has two local maxima, (0, 7) and (4, 3), but only (0, 7) is a global maximum.
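The failure of convexity can be exhibited directly with the midpoint of the two points named above. This is a sketch assuming the replacement constraint 8x1 - x1² + 14x2 - x2² ≤ 49 used for the Fig. 13.10 variant.

```python
# Assumed replacement constraint for this variant:
#   g2(x) = 8*x1 - x1**2 + 14*x2 - x2**2 <= 49.
g2 = lambda x1, x2: 8 * x1 - x1**2 + 14 * x2 - x2**2

# Both named points satisfy the constraint (each lies on its boundary) ...
print(g2(0, 7), g2(4, 3))  # 49 49
# ... but their midpoint violates it, so the segment between two feasible
# points leaves the feasible region: the region is not a convex set.
mid = (2, 5)
print(g2(*mid))  # 57, which exceeds 49
```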

Therefore, to guarantee that a local maximum is a global maximum for a nonlinear programming problem with constraints gi(x) ≤ bi (i = 1, 2, . . . , m) and x ≥ 0, the objective function f(x) must be a concave function and each gi(x) must be a convex function. Such a problem is called a convex programming problem, which is one of the key types of nonlinear programming problems discussed in Sec. 13.3.
