INTEGER PROGRAMMING: THE BRANCH-AND-BOUND TECHNIQUE AND ITS APPLICATION TO BINARY INTEGER PROGRAMMING

THE BRANCH-AND-BOUND TECHNIQUE AND ITS APPLICATION TO BINARY INTEGER PROGRAMMING

Because any bounded pure IP problem has only a finite number of feasible solutions, it is natural to consider using some kind of enumeration procedure for finding an optimal solution. Unfortunately, as we discussed in the preceding section, this finite number can be, and usually is, very large. Therefore, it is imperative that any enumeration procedure be cleverly structured so that only a tiny fraction of the feasible solutions actually need be examined. For example, dynamic programming (see Chap. 11) provides one such kind of procedure for many problems having a finite number of feasible solutions (although it is not particularly efficient for most IP problems). Another such approach is provided by the branch-and-bound technique. This technique and variations of it have been applied with some success to a variety of OR problems, but it is especially well known for its application to IP problems.

The basic concept underlying the branch-and-bound technique is to divide and conquer. Since the original “large” problem is too difficult to be solved directly, it is divided into smaller and smaller subproblems until these subproblems can be conquered. The dividing (branching) is done by partitioning the entire set of feasible solutions into smaller and smaller subsets. The conquering (fathoming) is done partially by bounding how good the best solution in the subset can be and then discarding the subset if its bound indicates that it cannot possibly contain an optimal solution for the original problem.

We shall now describe in turn these three basic steps—branching, bounding, and fathoming—and illustrate them by applying a branch-and-bound algorithm to the prototype example (the California Manufacturing Co. problem) presented in Sec. 12.1 and repeated here (with the constraints numbered for later reference).

Maximize Z = 9x1 + 5x2 + 6x3 + 4x4,

subject to

(1) 6x1 + 3x2 + 5x3 + 2x4 ≤ 10
(2) x3 + x4 ≤ 1
(3) -x1 + x3 ≤ 0
(4) -x2 + x4 ≤ 0

and

(5) xj is binary, for j = 1, 2, 3, 4.

Branching

When you are dealing with binary variables, the most straightforward way to partition the set of feasible solutions into subsets is to fix the value of one of the variables (say, x1) at x1 = 0 for one subset and at x1 = 1 for the other subset. Doing this for the prototype example divides the whole problem into the two smaller subproblems shown next.
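As a small illustration of how this branching step can be carried out in code, the following sketch (not from the text; representing a subproblem by a dictionary of its fixed variables is just one convenient, illustrative choice) splits a subproblem on a binary variable:

def branch_on(subproblem, variable):
    """Split one subproblem into two by fixing `variable` at 0 and at 1."""
    return {**subproblem, variable: 0}, {**subproblem, variable: 1}

whole_problem = {}                                # the "All" node: no variables fixed yet
subproblem_1, subproblem_2 = branch_on(whole_problem, "x1")
print(subproblem_1)                               # {'x1': 0}
print(subproblem_2)                               # {'x1': 1}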

Subproblem 1: Fix x1 = 0 and solve the resulting smaller BIP problem in the remaining variables x2, x3, x4.

Subproblem 2: Fix x1 = 1 and solve the resulting smaller BIP problem in the remaining variables x2, x3, x4.

Figure 12.4 The branching tree created by the branching for the first iteration of the BIP branch-and-bound algorithm for the example in Sec. 12.1.

Figure 12.4 portrays this dividing (branching) into subproblems by a tree (defined in Sec. 10.2) with branches (arcs) from the All node (corresponding to the whole problem having all feasible solutions) to the two nodes corresponding to the two subproblems. This tree, which will continue “growing branches” iteration by iteration, is referred to as the branching tree (or solution tree or enumeration tree) for the algorithm. The variable used to do this branching at any iteration by assigning values to the variable (as with x1 above) is called the branching variable. (Sophisticated methods for selecting branching variables are an important part of most branch-and-bound algorithms but, for simplicity, we always select them in their natural order—x1, x2, . . . , xn—throughout this section.) Later in the section you will see that one of these subproblems can be conquered (fathomed) immediately, whereas the other subproblem will need to be divided further into smaller subproblems by setting x2 = 0 or x2 = 1.

For other IP problems where the integer variables have more than two possible values, the branching can still be done by setting the branching variable at its respective individual values, thereby creating more than two new subproblems. However, a good alternate approach is to specify a range of values (for example, xj ≤ 2 or xj ≥ 3) for the branching variable for each new subproblem. This is the approach used for the algorithm presented in Sec. 12.7.
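A minimal sketch of this range branching, assuming each variable's allowed range is kept as a (lower, upper) pair (the representation and names are illustrative, not from the text):

import math

def branch_on_range(bounds, j, lp_value):
    """Split on variable j whose LP value is fractional, e.g. 2.6 gives x_j <= 2 and x_j >= 3."""
    lower, upper = bounds[j]
    child_low = {**bounds, j: (lower, math.floor(lp_value))}    # x_j <= floor(lp_value)
    child_high = {**bounds, j: (math.ceil(lp_value), upper)}    # x_j >= ceil(lp_value)
    return child_low, child_high

print(branch_on_range({"x_j": (0, 10)}, "x_j", 2.6))
# ({'x_j': (0, 2)}, {'x_j': (3, 10)})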

Bounding

For each of these subproblems, we now need to obtain a bound on how good its best feasible solution can be. The standard way of doing this is to quickly solve a simpler relaxation of the subproblem. In most cases, a relaxation of a problem is obtained simply by deleting (“relaxing”) one set of constraints that had made the problem difficult to solve. For IP problems, the most troublesome constraints are those requiring the respective variables to be integer. Therefore, the most widely used relaxation is the LP relaxation that deletes this set of constraints.

To illustrate for the example, consider first the whole problem given in Sec. 12.1 (and repeated at the beginning of this section). Its LP relaxation is obtained by replacing the last line of the model (xj is binary, for j = 1, 2, 3, 4) by the following new (relaxed) version of this constraint (5).

0 ≤ xj ≤ 1, for j = 1, 2, 3, 4.
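To make this concrete, here is a sketch that solves the LP relaxation of the whole model above with SciPy's linprog (using SciPy here is an assumption of this sketch, not something prescribed by the text; linprog minimizes, so the objective coefficients are negated):

from scipy.optimize import linprog

# Model from the start of the section, with constraint (5) relaxed to 0 <= xj <= 1.
c = [-9, -5, -6, -4]                  # negated to maximize Z = 9x1 + 5x2 + 6x3 + 4x4
A_ub = [[6, 3, 5, 2],                 # constraint (1)
        [0, 0, 1, 1],                 # constraint (2)
        [-1, 0, 1, 0],                # constraint (3)
        [0, -1, 0, 1]]                # constraint (4)
b_ub = [10, 1, 0, 0]
bounds = [(0, 1)] * 4                 # relaxed constraint (5)

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(result.x, -result.fun)          # optimal solution of the LP relaxation and its Z

Since all of the objective coefficients are integers, the Z value of an LP relaxation can be rounded down to give the bound used in the rest of the section; for the whole problem this should work out to Z = 16.5 and a bound of 16.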

[Solving the LP relaxations gives the bound for each subproblem: the LP relaxation of subproblem 1 (x1 = 0) has the integer optimal solution (x1, x2, x3, x4) = (0, 1, 0, 1) with Z = 9, so its bound is 9; the LP relaxation of subproblem 2 (x1 = 1) has the noninteger optimal solution (1, 4/5, 0, 4/5) with Z = 16.2, so its bound (after rounding down) is 16.]

Figure 12.5 summarizes these results, where the numbers given just below the nodes are the bounds and below each bound is the optimal solution obtained for the LP relaxation.

Fathoming

A subproblem can be conquered (fathomed), and thereby dismissed from further consideration, in the three ways described below.

One way is illustrated by the results for subproblem 1 given by the x1 = 0 node in Fig. 12.5. Note that the (unique) optimal solution for its LP relaxation, (x1, x2, x3, x4) = (0, 1, 0, 1), is an integer solution. Therefore, this solution must also be the optimal solution for subproblem 1 itself. This solution should be stored as the first incumbent (the best feasible solution found so far) for the whole problem, along with its value of Z. This value is denoted by

Z* = value of Z for the current incumbent,

so Z* = 9 at this point. Since this solution has been stored, there is no reason to consider subproblem 1 any further by branching from the x1 = 0 node, etc. Doing so could only lead to other feasible solutions that are inferior to the incumbent, and we have no interest in such solutions. Because it has been solved, we fathom (dismiss) subproblem 1 now.


The above results suggest a second key fathoming test. Since Z* = 9, there is no reason to consider further any subproblem whose bound (after rounding down) ≤ 9, since such a subproblem cannot have a feasible solution better than the incumbent. Stated more generally, a subproblem is fathomed whenever its

Bound ≤ Z*.

This outcome does not occur in the current iteration of the example because subproblem 2 has a bound of 16 that is larger than 9. However, it might occur later for descendants of this subproblem (new smaller subproblems created by branching on this subproblem, and then perhaps branching further through subsequent “generations”). Furthermore, as new incumbents with larger values of Z* are found, it will become easier to fathom in this way.

The third way of fathoming is quite straightforward. If the simplex method finds that a subproblem’s LP relaxation has no feasible solutions, then the subproblem itself must have no feasible solutions, so it can be dismissed (fathomed).

In all three cases, we are conducting our search for an optimal solution by retaining for further investigation only those subproblems that could possibly have a feasible solution better than the current incumbent.

Summary of Fathoming Tests. A subproblem is fathomed (dismissed from further consideration) if

Test 1: Its bound ≤ Z*, or

Test 2: Its LP relaxation has no feasible solutions, or

Test 3: The optimal solution for its LP relaxation is integer. (If this solution is better than the incumbent, it becomes the new incumbent, and test 1 is reapplied to all unfathomed subproblems with the new larger Z*.)
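A compact sketch of these three tests as a single function (the function name, the tolerance, and the way the LP-relaxation results are passed in are illustrative assumptions, not from the text):

import math

def fathom_reason(lp_feasible, lp_solution, lp_value, z_star, tol=1e-6):
    """Return which fathoming test dismisses the subproblem, or None if it survives."""
    if not lp_feasible:
        return "test 2"                                    # LP relaxation has no feasible solutions
    bound = math.floor(lp_value + tol)                     # round Z down (integer objective coefficients)
    if bound <= z_star:
        return "test 1"                                    # bound <= Z*
    if all(abs(xj - round(xj)) <= tol for xj in lp_solution):
        return "test 3"                                    # LP optimum is integer (compare with incumbent)
    return None

# Subproblem 1 of the example: LP optimum (0, 1, 0, 1) with Z = 9, no incumbent yet.
print(fathom_reason(True, [0, 1, 0, 1], 9.0, z_star=float("-inf")))    # test 3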

Figure 12.6 summarizes the results of applying these three tests to subproblems 1 and 2 by showing the current branching tree. Only subproblem 1 has been fathomed, by test 3, as indicated by F(3) next to the x1 = 0 node. The resulting incumbent also is identified below this node.

The subsequent iterations will illustrate successful applications of all three tests. However, before continuing the example, we summarize the algorithm being applied to this BIP problem and then sketch it in code. (This algorithm assumes that the objective function is to be maximized, that all coefficients in the objective function are integer and, for simplicity, that the ordering of the variables for branching is x1, x2, . . . , xn. As noted previously, most branch-and-bound algorithms use sophisticated methods for selecting branching variables instead.)

Summary of the BIP Branch-and-Bound Algorithm

Initialization: Set Z* = -∞. Apply the bounding step, fathoming step, and optimality test described below to the whole problem. If not fathomed, classify this problem as the one remaining “subproblem” for performing the first full iteration below.

Steps for each iteration:

1. Branching: Among the remaining (unfathomed) subproblems, select the one that was created most recently. (Break ties according to which has the larger bound.) Branch from the node for this subproblem to create two new subproblems by fixing the next variable (the branching variable) at either 0 or 1.

2. Bounding: For each new subproblem, solve its LP relaxation to obtain an optimal solution, including the value of Z, for this LP relaxation. If this value of Z is not an integer, round it down to an integer. (If it was already an integer, no change is needed.) This integer value of Z is the bound for the subproblem.

3. Fathoming: For each new subproblem, apply the three fathoming tests summarized above, and discard those subproblems that are fathomed by any of the tests.

Optimality test: Stop when there are no remaining subproblems that have not been fathomed; the current incumbent is optimal.6 Otherwise, return to perform another iteration.

6If there is no incumbent, the conclusion is that the problem has no feasible solutions.
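The following sketch pulls together the initialization, the three iteration steps, and the optimality test summarized above, using SciPy's linprog to solve each LP relaxation. It is a simplified rendering rather than a definitive implementation: subproblems are bounded and fathomed when they are taken off a last-in, first-out stack (so the most recently created one is always examined next), and the tie-breaking and reoptimization refinements discussed in the text are omitted. All function and variable names are illustrative.

from math import floor
from scipy.optimize import linprog

def solve_lp_relaxation(c, A_ub, b_ub, fixed):
    """Solve the LP relaxation with the variables listed in `fixed` pinned at 0 or 1."""
    bounds = [(fixed.get(j, 0), fixed.get(j, 1)) for j in range(len(c))]
    result = linprog([-cj for cj in c], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    if not result.success:
        return None, None                       # LP relaxation has no feasible solutions
    return list(result.x), -result.fun          # optimal solution and its value of Z

def branch_and_bound_bip(c, A_ub, b_ub, tol=1e-6):
    """Maximize c.x subject to A_ub.x <= b_ub with x binary (integer c assumed)."""
    z_star, incumbent = float("-inf"), None     # initialization: Z* = -infinity
    stack = [{}]                                # LIFO stack of subproblems (most recent on top)
    while stack:                                # optimality test: stop when none remain
        fixed = stack.pop()
        x, z = solve_lp_relaxation(c, A_ub, b_ub, fixed)
        if x is None:
            continue                            # fathoming test 2
        bound = floor(z + tol)                  # bounding: round Z down to an integer
        if bound <= z_star:
            continue                            # fathoming test 1
        if all(abs(xj - round(xj)) <= tol for xj in x):
            z_star = bound                      # fathoming test 3: new incumbent
            incumbent = [int(round(xj)) for xj in x]
            continue
        j = len(fixed)                          # branch on the next variable in natural order
        stack.append({**fixed, j: 0})
        stack.append({**fixed, j: 1})           # pushed last, so examined first
    return incumbent, z_star

# The California Manufacturing Co. model from the start of the section:
c = [9, 5, 6, 4]
A_ub = [[6, 3, 5, 2], [0, 0, 1, 1], [-1, 0, 1, 0], [0, -1, 0, 1]]
b_ub = [10, 1, 0, 0]
print(branch_and_bound_bip(c, A_ub, b_ub))      # should recover ([1, 1, 0, 0], 14)

Popping from the stack examines the most recently created subproblem next, in line with the selection rule in the summary.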

The branching step for this algorithm warrants a comment as to why the subproblem to branch from is selected in this way. One option not used here (but sometimes adopted in other branch-and-bound algorithms) would have been always to select the remaining subproblem with the best bound, because this subproblem would be the most promising one to contain an optimal solution for the whole problem. The reason for instead selecting the most recently created subproblem is that LP relaxations are being solved in the bounding step. Rather than start the simplex method from scratch each time, each LP relaxation generally is solved by reoptimization in large-scale implementations of this algorithm.7 This reoptimization involves revising the final simplex tableau from the preceding LP relaxation as needed because of the few differences in the model (just as for sensitivity analysis) and then applying a few iterations of the appropriate algorithm (perhaps the dual simplex method). When dealing with very large problems, this reoptimization tends to be much faster than starting from scratch, provided the preceding and current models are closely related. The models will tend to be closely related under the branching rule used, but not when you are skipping around in the branching tree by selecting the subproblem with the best bound.

Completing the Example

The pattern for the remaining iterations will be quite similar to that for the first iteration described above except for the ways in which fathoming occurs. Therefore, we shall summarize the branching and bounding steps fairly briefly and then focus on the fathoming step.

Iteration 2. The only remaining subproblem corresponds to the x1 = 1 node in Fig. 12.6, so we shall branch from this node to create the two new subproblems given below.

Subproblem 3: Fix x1 = 1, x2 = 0 and solve the resulting smaller BIP problem in the remaining variables x3, x4.

Subproblem 4: Fix x1 = 1, x2 = 1 and solve the resulting smaller BIP problem in the remaining variables x3, x4.

7The reoptimization technique was first introduced in Sec. 4.7 and then applied to sensitivity analysis in Sec. 7.2. To apply it here, all of the original variables would be retained in each LP relaxation and then the constraint xj ≤ 0 would be added to fix xj = 0 and the constraint xj ≥ 1 would be added to fix xj = 1. These constraints indeed have the effect of fixing the variables in this way because the LP relaxation also includes the constraints that 0 ≤ xj ≤ 1.

[The LP relaxation of subproblem 3 has the noninteger optimal solution (x1, x2, x3, x4) = (1, 0, 4/5, 0) with Z = 13.8, so its bound is 13; the LP relaxation of subproblem 4 has the noninteger optimal solution (1, 1, 0, 1/2) with Z = 16, so its bound is 16.]

Note that both of these bounds are larger than Z* = 9, so fathoming test 1 fails in both cases. Test 2 also fails, since both LP relaxations have feasible solutions (as indicated by the existence of an optimal solution). Alas, test 3 fails as well, because both optimal solutions include variables with noninteger values.

Figure 12.7 shows the resulting branching tree at this point. The lack of an F to the right of either new node indicates that both remain unfathomed.

Iteration 3. So far, the algorithm has created four subproblems. Subproblem 1 has been fathomed, and subproblem 2 has been replaced by (separated into) subproblems 3 and 4, but these last two remain under consideration. Because they were created simultaneously, the tie is broken according to which has the larger bound. Since subproblem 4 (x1 = 1, x2 = 1) has the larger bound (16 > 13), the next branching is done from the (x1, x2) = (1, 1) node in the branching tree, which creates the following new subproblems (where constraint 3 disappears because it does not contain x4).

Subproblem 5: Fix x1 = 1, x2 = 1, x3 = 0 and solve the resulting one-variable BIP problem in x4.

Subproblem 6: Fix x1 = 1, x2 = 1, x3 = 1 and solve the resulting one-variable BIP problem in x4.

[The LP relaxation of subproblem 5 has the optimal solution (x1, x2, x3, x4) = (1, 1, 0, 1/2) with Z = 16, so its bound is 16; the LP relaxation of subproblem 6 has no feasible solutions.]

For both of these subproblems, reducing these LP relaxations to one-variable problems (plus the fixed values of x1, x2, and x3) makes it easy to see that the optimal solution for the LP relaxation of subproblem 5 is indeed the one given above. Similarly, note how the combination of constraint 1 and 0 ≤ x4 ≤ 1 in the LP relaxation of subproblem 6 prevents any feasible solutions. Therefore, this subproblem is fathomed by test 2. However, subproblem 5 fails this test, as well as test 1 (16 > 9) and test 3 (x4 = 1/2 is not an integer), so it remains under consideration.

We now have the branching tree shown in Fig. 12.8.

Iteration 4. The subproblems corresponding to nodes (1, 0) and (1, 1, 0) in Fig. 12.8 remain under consideration, but the latter node was created more recently, so it is selected for branching from next. Since the resulting branching variable x4 is the last variable, fixing its value at either 0 or 1 actually creates a single solution rather than subproblems requiring fuller investigation. These single solutions are

x4 = 0: (x1, x2, x3, x4) = (1, 1, 0, 0), which is feasible, with Z = 14.

x4 = 1: (x1, x2, x3, x4) = (1, 1, 0, 1), which violates constraint (1) and so is infeasible.

Formally applying the fathoming tests, we see that the first solution passes test 3 and the second passes test 2. Furthermore, this feasible first solution is better than the incumbent (14 > 9), so it becomes the new incumbent, with Z* = 14.

Because a new incumbent has been found, we now reapply fathoming test 1 with the new larger value of Z* to the only remaining subproblem, the one at node (1, 0).

[Subproblem 3, at node (1, 0), has bound = 13 ≤ Z* = 14, so it now is fathomed by test 1. No unfathomed subproblems remain, so the optimality test indicates that the current incumbent, (x1, x2, x3, x4) = (1, 1, 0, 0) with Z = 14, is the optimal solution for the original BIP problem.]

Your OR Tutor includes another example of applying this algorithm. Also included in the IOR Tutorial is an interactive procedure for executing this algorithm. As usual, the Excel, LINGO/LINDO, and MPL/Solvers files for this chapter in your OR Courseware show how the student versions of these software packages are applied to the various examples in the chapter. The algorithms they use for BIP problems all are similar to the one described above.8

Other Options with the Branch-and-Bound Technique

This section has illustrated the branch-and-bound technique by describing a basic branch-and-bound algorithm for solving BIP problems. However, the general framework of the branch-and-bound technique provides a great deal of flexibility in how to design a specific algorithm for any given type of problem such as BIP. There are many options available, and constructing an efficient algorithm requires tailoring the specific design to fit the specific structure of the problem type.

Every branch-and-bound algorithm has the same three basic steps of branching, bounding, and fathoming. The flexibility lies in how these steps are performed.

Branching always involves selecting one remaining subproblem and dividing it into smaller subproblems. The flexibility here is found in the rules for selecting and dividing. Our BIP algorithm selected the most recently created subproblem, because this is very efficient for reoptimizing each LP relaxation from the preceding one. Selecting the subproblem with the best bound is the other most popular rule, because it tends to lead more quickly to better incumbents and so more fathoming. Combinations of the two rules also can be used. The dividing typically (but not always) is done by choosing a branching variable and assigning it either individual values (e.g., our BIP algorithm) or ranges of values (e.g., the algorithm in the next section). More sophisticated algorithms generally use a rule for strategically choosing a branching variable that should tend to lead to early fathoming. This usually is considerably more efficient than the rule used by our BIP algorithm of simply selecting the branching variables in their natural order—x1, x2, . . . , xn. For example, a major drawback of this simple rule for selecting the branching variable is that if this variable has an integer value in the optimal solution for the LP relaxation of the subproblem being branched on, the next subproblem that fixes this variable at this same integer value also will have the same optimal solution for its LP relaxation, so no progress will have been made toward fathoming. Therefore, more strategic options for selecting the branching variable might do something like selecting the variable whose value in the optimal solution for the LP relaxation of the current subproblem is furthest from being an integer.
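A sketch of that last rule, often called the most fractional (or most infeasible) choice; the function name and tolerance are illustrative:

def most_fractional_variable(lp_solution, tol=1e-9):
    """Return the index of the variable whose LP value is furthest from an integer, or None."""
    distances = [abs(xj - round(xj)) for xj in lp_solution]
    j = max(range(len(distances)), key=distances.__getitem__)
    return j if distances[j] > tol else None

print(most_fractional_variable([1.0, 0.8, 0.0, 0.5]))   # 3, since 0.5 is furthest from an integer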

Bounding usually is done by solving a relaxation. However, there are a variety of ways to form relaxations. For example, consider the Lagrangian relaxation, where the entire set of functional constraints Ax ≤ b (in matrix notation) is deleted (except possibly for any “convenient” constraints) and then the objective function

Maximize Z = cx

is replaced by

Maximize ZR = cx - λ(Ax - b),

where the fixed vector λ ≥ 0. If x* is an optimal solution for the original problem, its Z ≤ ZR, so solving the Lagrangian relaxation for the optimal value of ZR provides a valid bound. If λ is chosen well, this bound tends to be a reasonably tight one (at least comparable to the bound from the LP relaxation). Without any functional constraints, this relaxation also can be solved extremely quickly. The drawbacks are that fathoming tests 2 and 3 (revised) are not as powerful as for the LP relaxation.
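For binary variables this relaxation can be solved by inspection: writing ZR = (c - λA)x + λb, each xj is simply set to 1 whenever its reduced objective coefficient is positive. A sketch (the particular multiplier vector used below is purely illustrative):

def lagrangian_bound(c, A, b, lam):
    """Upper bound on Z from maximizing ZR = c.x - lam.(A.x - b) over binary x, with lam >= 0."""
    reduced = [cj - sum(lam[i] * A[i][j] for i in range(len(A)))
               for j, cj in enumerate(c)]
    x = [1 if rj > 0 else 0 for rj in reduced]              # set xj = 1 when it helps ZR
    zr = sum(rj for rj in reduced if rj > 0) + sum(li * bi for li, bi in zip(lam, b))
    return x, zr

# Prototype model from this section, with an illustrative choice of multipliers:
c = [9, 5, 6, 4]
A = [[6, 3, 5, 2], [0, 0, 1, 1], [-1, 0, 1, 0], [0, -1, 0, 1]]
b = [10, 1, 0, 0]
print(lagrangian_bound(c, A, b, lam=[1.5, 0, 0, 0]))        # gives a valid bound of 16.5 here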

8In the professional version of LINGO, LINDO, and various MPL solvers, the BIP algorithm also uses a variety of sophisticated techniques along the lines described in Sec. 12.8.

In general terms, two features are sought in choosing a relaxation: it can be solved relatively quickly, and it provides a relatively tight bound. Neither alone is adequate. The LP relaxation is popular because it provides an excellent trade-off between these two factors.

One option occasionally employed is to use a quickly solved relaxation and then, if fathoming is not achieved, to tighten the relaxation in some way to obtain a somewhat tighter bound.

Fathoming generally is done pretty much as described for the BIP algorithm. The three fathoming criteria can be stated in more general terms as follows.

Summary of Fathoming Criteria. A subproblem is fathomed if an analysis of its relaxation reveals that

Criterion 1: Feasible solutions of the subproblem must have Z ≤ Z*, or

Criterion 2: The subproblem has no feasible solutions, or

Criterion 3: An optimal solution of the subproblem has been found.

Just as for the BIP algorithm, the first two criteria usually are applied by solving the relaxation to obtain a bound for the subproblem and then checking whether this bound is ≤ Z* (test 1) or whether the relaxation has no feasible solutions (test 2). If the relaxation differs from the subproblem only by the deletion (or loosening) of some constraints, then the third criterion usually is applied by checking whether the optimal solution for the relaxation is feasible for the subproblem, in which case it must be optimal for the subproblem. For other relaxations (such as the Lagrangian relaxation), additional analysis is required to determine whether the optimal solution for the relaxation is also optimal for the subproblem.

If the original problem involves minimization rather than maximization, two options are available. One is to convert to maximization in the usual way (see Sec. 4.6). The other is to convert the branch-and-bound algorithm directly to minimization form, which requires changing the direction of the inequality for fathoming test 1 from

Is the subproblem’s bound ≤ Z*?

to

Is the subproblem’s bound ≥ Z*?

When using this latter inequality, if the value of Z for the optimal solution for the LP relaxation of the subproblem is not an integer, it now would be rounded up to an integer to obtain the subproblem’s bound.
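A minimal sketch of this minimization variant of the bounding step and of test 1 (names are illustrative):

import math

def min_bound(lp_value, tol=1e-6):
    """For minimization, round the LP relaxation's Z up to an integer to get the bound."""
    return math.ceil(lp_value - tol)

def fathomed_by_test1_min(bound, z_star):
    """Fathoming test 1 for minimization: dismiss when bound >= Z*."""
    return bound >= z_star

print(min_bound(13.2), fathomed_by_test1_min(min_bound(13.2), z_star=14))   # 14 True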

So far, we have described how to use the branch-and-bound technique to find only one optimal solution. However, in the case of ties for the optimal solution, it is sometimes desirable to identify all these optimal solutions so that the final choice among them can be made on the basis of intangible factors not incorporated into the mathematical model. To find them all, you need to make only a few slight alterations in the procedure. First, change the weak inequality for fathoming test 1 (Is the subproblem’s bound ≤ Z*?) to a strict inequality (Is the subproblem’s bound < Z*?), so that fathoming will not occur if the subproblem can have a feasible solution equal to the incumbent. Second, if fathoming test 3 passes and the optimal solution for the subproblem has Z = Z*, then store this solution as another (tied) incumbent. Third, if test 3 provides a new incumbent (tied or otherwise), then check whether the optimal solution obtained for the relaxation is unique. If it is not, then identify the other optimal solutions for the relaxation and check whether they are optimal for the subproblem as well, in which case they also become incumbents. Finally, when the optimality test finds that there are no remaining (unfathomed) subsets, all the current incumbents will be the optimal solutions.

Finally, note that rather than find an optimal solution, the branch-and-bound technique can be used to find a nearly optimal solution, generally with much less computational effort. For some applications, a solution is “good enough” if its Z is “close enough” to the value of Z for an optimal solution (call it Z**). Close enough can be defined in either of two ways as either

Z** - K ≤ Z    or    (1 - α)Z** ≤ Z

for a specified (positive) constant K or α. For example, if the second definition is chosen and α = 0.05, then the solution is required to be within 5 percent of optimal. Consequently, if it were known that the value of Z for the current incumbent (Z*) satisfies either

Z** - K ≤ Z*    or    (1 - α)Z** ≤ Z*

then the procedure could be terminated immediately by choosing the incumbent as the desired nearly optimal solution. Although the procedure does not actually identify an optimal solution and the corresponding Z**, if this (unknown) solution is feasible (and so optimal) for the subproblem currently under investigation, then fathoming test 1 finds an upper bound such that

Z** ≤ bound

so that either

Bound - K ≤ Z*    or    (1 - α)(bound) ≤ Z*

would imply that the corresponding inequality in the preceding sentence is satisfied. Even if this solution is not feasible for the current subproblem, a valid upper bound is still obtained for the value of Z for the subproblem’s optimal solution. Thus, satisfying either of these last two inequalities is sufficient to fathom this subproblem because the incumbent must be “close enough” to the subproblem’s optimal solution.

Therefore, to find a solution that is close enough to being optimal, only one change is needed in the usual branch-and-bound procedure. This change is to replace the usual fathoming test 1 for a subproblem

Bound ≤ Z*?

by either

Bound - K ≤ Z*?

or

(1 - α)(bound) ≤ Z*?

and then perform this test after test 3 (so that a feasible solution found with Z > Z* is still kept as the new incumbent). The reason this weaker test 1 suffices is that regardless of how close Z for the subproblem’s (unknown) optimal solution is to the subproblem’s bound, the incumbent is still close enough to this solution (if the new inequality holds) that the subproblem does not need to be considered further. When there are no remaining subproblems, the current incumbent will be the desired nearly optimal solution. However, it is much easier to fathom with this new fathoming test (in either form), so the algorithm should run much faster. For an extremely large problem, this acceleration may make the difference between finishing with a solution guaranteed to be close to optimal and never terminating. For many extremely large problems arising in practice, since the model provides only an idealized representation of the real problem anyway, finding a nearly optimal solution for the model in this way may be sufficient for all practical purposes. Therefore, this shortcut is used fairly frequently in practice.
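A sketch of this relaxed test 1 (the two “close enough” definitions are selected through K or alpha; leaving both at 0 reduces it to the ordinary test, and the names are illustrative):

def fathomed_close_enough(bound, z_star, K=0.0, alpha=0.0):
    """Relaxed fathoming test 1: dismiss the subproblem once the incumbent is
    guaranteed to be within K, or within 100*alpha percent, of its optimum."""
    return (bound - K <= z_star) or ((1 - alpha) * bound <= z_star)

# With Z* = 14 and a subproblem bound of 16, accepting any solution within
# 15 percent of optimal lets this subproblem be fathomed immediately:
print(fathomed_close_enough(16, 14, alpha=0.15))   # True, since 0.85 * 16 = 13.6 <= 14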
