THE NATURE OF METAHEURISTICS
To illustrate the nature of metaheuristics, let us begin with an example of a small but modestly difficult nonlinear programming problem:
An Example: A Nonlinear Programming Problem with Multiple Local Optima
Consider the following problem:
Maximize f(x) = 12x^5 - 975x^4 + 28,000x^3 - 345,000x^2 + 1,800,000x,
subject to 0 ≤ x ≤ 31.
Figure 14.1 graphs the objective function f(x) over the feasible values of the single variable x. This plot reveals that the problem has three local optima, one at x = 5, another at x = 20, and the third at x = 31, with the global optimum being at x = 20.
The objective function f(x) is sufficiently complicated that it would be difficult to determine where the global optimum lies without the benefit of viewing the plot in Fig. 14.1. Calculus could be used, but this would require solving a polynomial equation of the fourth degree (after setting the first derivative equal to zero) to determine where the critical points lie. It would even be difficult to ascertain that f(x) has multiple local optima rather than just a global optimum.
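Without the plot, one way to carry out this calculus-based analysis is to find the real roots of the fourth-degree polynomial f'(x) numerically and then compare f at those critical points and at the endpoints of the feasible interval. The following is a minimal Python sketch of that calculation; the use of NumPy and all names here are our own illustration, not part of the original text.

```python
import numpy as np

# f(x) = 12x^5 - 975x^4 + 28,000x^3 - 345,000x^2 + 1,800,000x
# (coefficients listed from highest degree down to the constant term)
f = np.poly1d([12, -975, 28_000, -345_000, 1_800_000, 0])
df = f.deriv()  # f'(x), a fourth-degree polynomial

# Real critical points that lie inside the feasible interval 0 <= x <= 31
critical = [r.real for r in df.roots
            if abs(r.imag) < 1e-9 and 0 <= r.real <= 31]

# Compare f at the critical points and at the two boundary points
for x in sorted(critical + [0.0, 31.0]):
    print(f"x = {x:6.2f}   f(x) = {f(x):,.0f}")
```

The four critical points turn out to be x = 5, 10, 20, and 30 (the latter two are local minima), and comparing f at these points and at the boundaries confirms local maxima at x = 5, x = 20, and the boundary point x = 31, with the global maximum f(20) = 4.4 million.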
This problem is an example of a nonconvex programming problem, a special type of nonlinear programming problem that typically has multiple local optima. Section 13.10
discusses nonconvex programming and even introduces a software package (Evolutionary Solver) that uses the kind of metaheuristic described in Sec. 14.4.
For nonlinear programming problems that appear to be somewhat difficult, like this one, a simple heuristic method is to conduct a local improvement procedure. Such a procedure starts with an initial trial solution and then, at each iteration, searches in the neighborhood of the current trial solution to find a better trial solution. This process continues until no improved solution can be found in the neighborhood of the current trial solution. Thus, this kind of procedure can be viewed as a hill-climbing procedure that keeps climbing higher on the plot of the objective function (assuming the objective is maximization) until it essentially reaches the top of the hill. A well-designed local improvement procedure usually will be successful in converging to a local optimum (the top of a hill), but it then will stop even if this local optimum is not a global optimum (the top of the tallest hill).
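As an illustration, here is a Python sketch of one very simple local improvement procedure for this example. The neighborhood definition (the two points one small step to either side of the current trial solution) and the step size are illustrative assumptions of ours, not details from the text.

```python
def f(x):
    # Objective function from the example
    return 12*x**5 - 975*x**4 + 28_000*x**3 - 345_000*x**2 + 1_800_000*x

def hill_climb(x0, step=0.01, lower=0.0, upper=31.0):
    """Local improvement: move to the better of the two neighboring
    points x - step and x + step until neither one improves f(x)."""
    x = x0
    while True:
        neighbors = [max(lower, x - step), min(upper, x + step)]
        best = max(neighbors, key=f)
        if f(best) <= f(x):
            return x   # no improving neighbor: a local optimum
        x = best

print(hill_climb(0.0))    # stops near x = 5, a local (not global) optimum
print(hill_climb(15.0))   # stops near x = 20, the global optimum
```

Started from x = 0, the procedure climbs to the local optimum at x = 5 and stops there, even though the global optimum lies at x = 20; only a starting point already on the tallest hill leads it to x = 20.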
For example, the gradient search procedure described in Sec. 13.5 is a local improvement procedure. If it were to start with, say, x = 0 as the initial trial solution in Fig. 14.1, it would climb up the hill by trying successively larger values of x until it essentially reaches the top of the hill at x = 5, at which point it would stop. Figure 14.2 shows a typical sequence of values of f(x) that would be obtained by such a local improvement procedure when starting from far down the hill.
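For this one-variable example, such a gradient search can be sketched as follows. This is a simplified variant with a fixed step-size constant (rate); that constant, the tolerance, and the iteration limit are our own illustrative assumptions rather than the step-size rule actually used by the procedure in Sec. 13.5.

```python
def f(x):
    return 12*x**5 - 975*x**4 + 28_000*x**3 - 345_000*x**2 + 1_800_000*x

def df(x):
    # First derivative of f, written in factored form
    return 60*(x - 5)*(x - 10)*(x - 20)*(x - 30)

def gradient_ascent(x0, rate=2e-7, tol=1.0, max_iter=100_000):
    """Climb the hill by repeatedly stepping in the uphill direction
    indicated by the derivative, until the slope is essentially zero."""
    x = x0
    for _ in range(max_iter):
        slope = df(x)
        if abs(slope) < tol:
            break
        x += rate * slope
    return x

x_star = gradient_ascent(0.0)
print(x_star, f(x_star))   # stops near x = 5, the top of the first hill
```

Printing f(x) at each iteration would trace out an increasing sequence of objective values much like the typical pattern depicted in Fig. 14.2.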
Since the nonlinear programming example depicted in Fig. 14.1 involves only a single variable, the bisection method described in Sec. 13.4 also could be applied to this particular problem. This procedure is another example of a local improvement procedure, since each iteration starts from the current trial solution to search in its neighborhood (defined by a current lower bound and upper bound on the value of the variable) for a better solution. For example, if the search were to begin with a lower bound of x = 0 and an upper bound of x = 6 in Fig. 14.1, the sequence of trial solutions obtained by the bisection method would be x = 3, x = 4.5, x = 5.25, x = 4.875, and so forth as it converges to x = 5. The corresponding values of the objective function for these four trial solutions are 2.975 million, 3.288 million, 3.300 million, and 3.302 million, respectively. Thus, the second iteration provides a relatively large improvement over the first one (about 313,000), the third iteration gives a considerably smaller improvement (about 12,000), and the fourth iteration yields only a very small improvement (about 2,000). As depicted in Fig. 14.2, this pattern is rather typical of local improvement procedures (although with some variation in the rate of convergence to the local maximum).
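The following Python sketch reproduces this bisection sequence. Each trial solution is the midpoint of the current interval, and the sign of the derivative there tells which bound to move; the stopping tolerance is an illustrative assumption of ours.

```python
def f(x):
    return 12*x**5 - 975*x**4 + 28_000*x**3 - 345_000*x**2 + 1_800_000*x

def df(x):
    # First derivative of f, written in factored form
    return 60*(x - 5)*(x - 10)*(x - 20)*(x - 30)

def bisection(lower, upper, tol=1e-4):
    """One-variable bisection for a maximum: the trial solution is the
    midpoint, and the sign of f' there decides which half to keep."""
    while upper - lower > 2*tol:
        x = (lower + upper) / 2
        print(f"trial x = {x:<8} f(x) = {f(x):,.0f}")
        if df(x) > 0:
            lower = x   # f still rising at x: the hilltop lies to the right
        else:
            upper = x   # f falling at x: the hilltop lies to the left
    return (lower + upper) / 2

bisection(0.0, 6.0)   # prints x = 3, 4.5, 5.25, 4.875, ... converging to 5
```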