# Exterior Penalty Function Methods

As can be inferred from the name, Exterior Penalty Function Methods start with an initial guess outside the feasible domain and approach the optimal solution from outside. Because the iterations approach from outside the feasible domain, these methods find extremals near the boundaries of the feasible domain. Exterior Penalty Function methods can handle both equality and inequality constraints.

In exterior penalty function methods, the penalty function may take the general form:

As can be inferred from Eq. (5.12), as *w_h* → ∞ and *w_g* → ∞, the unconstrained optimization problem in (5.3) becomes equivalent to the original constrained optimization problem in (5.1).
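A common concrete choice for such a penalty is quadratic: the equality violations *h_j(x)* and only the violated inequalities max(0, *g_i(x)*) are squared and weighted by the penalty parameters. The following sketch illustrates this idea; the quadratic form and the function names are illustrative assumptions, not a reproduction of the book's exact Eq. (5.12):

```python
def augmented(f, h_list, g_list, w_h, w_g):
    """Build F(x) = f(x) + penalty, using a quadratic exterior penalty.

    f       : objective function
    h_list  : equality constraints   h_j(x) = 0
    g_list  : inequality constraints g_i(x) <= 0
    w_h, w_g: penalty parameters (grown toward infinity over iterations)
    """
    def F(x):
        penalty = w_h * sum(h(x) ** 2 for h in h_list)
        # only violated inequalities (g_i(x) > 0) are penalized, so the
        # penalty term is zero everywhere inside the feasible domain
        penalty += w_g * sum(max(0.0, g(x)) ** 2 for g in g_list)
        return f(x) + penalty
    return F

# Example: minimize (x - 1)^2 subject to x >= 2, i.e., g(x) = 2 - x <= 0
F = augmented(lambda x: (x - 1) ** 2, [], [lambda x: 2 - x], w_h=1.0, w_g=100.0)
```

Inside the feasible region `F` equals the objective; outside it, the penalty grows quadratically with the constraint violation.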

The following example illustrates this concept of exterior penalty function methods.

Example 5.2.

Solution: The associated unconstrained optimization problem can be written as:

The augmented function *F(x, w_g)* can be written as:

Hence, we can write:

To find the minimum of *F*, we set the derivative *dF/dx* = 0. Hence,

Clearly, the solution *x* = 0 is excluded because it is outside the feasible domain (*x* ≥ 3). For *x* < 3, we can find the optimal solution by taking the limit as *w_g* → ∞:

Hence the optimal solution is *x* = 3. Eq. (5.18) can be used to plot the obtained solution for different values of the penalty parameter. Figure 5.2 shows the values of *x* for a range of *w_g* = 1 ⋯ 10^{12}, where it is clear that *x* converges to the optimal solution as *w_g* increases. It is observed from Fig. 5.2 that there is overshoot that is actually nonfeasible, since *x* < 3.

Figure 5.2: As the penalty parameter increases, the solution converges to the optimal value.
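The full statement of Example 5.2 is not reproduced in this excerpt, but the solution above is consistent with minimizing *f(x)* = *x*² subject to *x* ≥ 3: setting *dF/dx* = 2*x* − 2*w_g*(3 − *x*) = 0 on the infeasible side gives *x* = 3*w_g*/(1 + *w_g*), which tends to 3 as *w_g* → ∞. Under that assumption, the convergence behavior of Fig. 5.2 can be sketched numerically:

```python
def x_star(w_g):
    """Stationary point of the assumed augmented function
    F(x) = x**2 + w_g * (3 - x)**2 on the infeasible side x < 3,
    obtained from dF/dx = 2*x - 2*w_g*(3 - x) = 0."""
    return 3.0 * w_g / (1.0 + w_g)

# the solution approaches x = 3 from the infeasible side as w_g grows
for w_g in (1, 10, 100, 1e4, 1e12):
    print(w_g, x_star(w_g))
```

Note that every iterate lies at *x* < 3, i.e., on the nonfeasible side, in line with the observation about Fig. 5.2 above.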

In general, we cannot simply use high values for the penalty parameters, because doing so results in numerical difficulty in searching for the optimal solution. In other words, giving high values to the penalty parameters at initial iterations puts high weight on the feasibility of the solution compared to the optimality of the objective function. In practice, one would start with lower values for the penalty parameters at initial iterations and increase them in subsequent iterations. The penalty parameters are updated according to the equation:

where *k* is the iteration index, and *C_g* and *C_h* are constant multipliers whose values depend on the specific problem. The algorithm for indirect methods given in Algorithm 5.1 applies also to the Exterior Penalty Function methods.
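The multiplicative update just described can be sketched as follows; the specific values *C_g* = *C_h* = 10 and the starting values of 1 are illustrative, not prescribed by the text:

```python
# Geometric growth of the penalty parameters across IM iterations:
# w^{k+1} = C * w^k, starting small to avoid early ill-conditioning.
C_g, C_h = 10.0, 10.0   # constant multipliers (problem dependent)
w_g, w_h = 1.0, 1.0     # modest initial values

schedule = []
for k in range(5):      # outer (IM) loop iterations
    schedule.append((w_g, w_h))
    w_g *= C_g
    w_h *= C_h

print(schedule)         # w_g grows 1 -> 10 -> 100 -> ...
```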

# Augmented Lagrange Multiplier Method

The Augmented Lagrange Multiplier method is considered one of the most robust indirect methods. Here, the penalty function *P(x, w_g, w_h)* is defined as:

where λ = [λ_1 ⋯ λ_l]^T is a vector of Lagrange multipliers associated with the equality constraints, and β = [β_1 ⋯ β_m]^T is a vector of Lagrange multipliers associated with the inequality constraints. The augmented function in this case is:

The Augmented Lagrange Multiplier method works in a similar way to the Barrier and Exterior Penalty Function methods. The algorithm starts with an initial guess for the solution *x*^0, and solves the unconstrained optimization problem to minimize the augmented function *F*. Any algorithm for unconstrained optimization (such as the BFGS or the DFP algorithms) can be used to solve the unconstrained optimization problem. The obtained solution is used as a new guess *x*^1 to replace *x*^0, and the unconstrained optimization problem is solved again in this new iteration. The iterations continue until a stopping criterion is satisfied. The penalty parameters *w_g* and *w_h*, along with the Lagrange multipliers λ and β, are updated in each iteration. The parameters *w_g* and *w_h* are updated in a similar way as in the Exterior Penalty Function method, given in Eq. (5.19). The Lagrange multipliers are updated according to the equation:

where *k* is the iteration index.
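Eq. (5.22) itself is not reproduced in this excerpt, so the update below is a common textbook form of the Augmented Lagrange multiplier update, shown only as a plausible sketch; the factor 2 and the clipping of β at zero are assumptions that may differ from the book's exact formula:

```python
def update_multipliers(lmbda, beta, h_vals, g_vals, w_h, w_g):
    """One common Augmented Lagrange multiplier update (a standard
    textbook form; the excerpt's Eq. (5.22) may differ in constants).

    lmbda, beta  : current multiplier estimates
    h_vals       : equality constraint values h_j(x^k)
    g_vals       : inequality constraint values g_i(x^k)
    """
    # equality multipliers move in proportion to the constraint violation
    new_lmbda = [l + 2.0 * w_h * h for l, h in zip(lmbda, h_vals)]
    # inequality multipliers are clipped so that beta_i stays non-negative,
    # consistent with the necessary condition discussed below
    new_beta = [max(0.0, b + 2.0 * w_g * g) for b, g in zip(beta, g_vals)]
    return new_lmbda, new_beta
```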

To get more insight into the method, consider the necessary conditions of optimality discussed in Chapter 3. The set of necessary conditions associated with the inequality constraints is given in Eq. (3.25). The condition in Eq. (3.25) implies that either β_i = 0 or *g_i(x)* = 0, ∀ *i* = 1 ⋯ *m*. It can be shown that in the case of *g_i(x)* = 0, β_i > 0. Hence, it is possible to state that a necessary condition for optimality is that β_i ≥ 0, and the optimal solution is expected to be in the regions where β_i is non-negative. The last term in the penalty function in (5.20) penalizes negative values of β_i. In the case when β_i = 0, the feasibility of the solution necessitates that *g_i(x)* ≤ 0. Hence, we can summarize this necessary condition as follows:

By considering all the possible sign combinations of β_i and *g_i(x)*, it is easy to show that the second term in the penalty function in (5.20) results in a value of zero only when condition (5.23) is satisfied. If condition (5.23) is not satisfied, the second term in the penalty function in (5.20) results in a positive value. The Augmented Lagrange Multiplier method thus uses the necessary conditions of optimality in the construction of the penalty term of the augmented function, and hence it guides the numerical search for the optimal solution based on information learned from the necessary conditions of optimality.
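The condition just described, that for each constraint either β_i = 0 with *g_i(x)* ≤ 0, or β_i > 0 with *g_i(x)* = 0, can be checked numerically. The function name and the tolerance below are illustrative:

```python
def satisfies_condition(beta, g_vals, tol=1e-8):
    """Check, for each constraint i, that either beta_i = 0 and
    g_i(x) <= 0, or beta_i > 0 and g_i(x) = 0 (within a tolerance)."""
    for b, g in zip(beta, g_vals):
        inactive = abs(b) <= tol and g <= tol   # beta_i = 0, g_i feasible
        active = b > tol and abs(g) <= tol      # beta_i > 0, g_i active
        if not (inactive or active):
            return False
    return True
```

A violated pairing, such as a positive multiplier on a strictly inactive constraint, fails the check; this is exactly the case the second penalty term in (5.20) penalizes.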

# Algorithm for Indirect Methods

**Algorithm** 5.1 below is an algorithm for general indirect methods such as the three methods described in the above three sections. Overall, the algorithm for any Indirect Method (IM) has two loops. The outer loop is the IM loop, which updates the IM parameters: the penalty parameters and, if applicable, the Lagrange multipliers. The inner loop runs the unconstrained optimization algorithm, which is itself an iterative algorithm, e.g., the DFP or the BFGS algorithm. After each iteration of the IM loop, convergence is checked. The method has converged if all the constraints *h_j(x)* = 0 and *g_i(x)* ≤ 0 are satisfied, ∀ *i*, *j*, and the side constraints are satisfied. In the case of the Augmented Lagrange Multiplier method, an additional condition needs to be satisfied, which is (5.23).

**Algorithm 5.1** Algorithm for Indirect Methods

Select an initial guess for the solution *x*^0
Select the maximum number of IM loop iterations *N_IM*
Select an initial value for each of the penalty parameters and Lagrange multipliers, as applicable
Select the constant multipliers *C_h* and *C_g*
*k* = 0
**while** *k* < *N_IM* **do**
    Call the BFGS (or DFP) solver to optimize the augmented function *F*, and output *x*^{k+1}
    **if** convergence conditions are satisfied **then** STOP **end if**
    **if** the change Δx = *x*^{k+1} − *x*^k is small **then** STOP **end if**
    **if** the change ΔF = *F*(*x*^{k+1}, *w_g*^{k+1}, *w_h*^{k+1}, λ^{k+1}, β^{k+1}) − *F*(*x*^k, *w_g*^k, *w_h*^k, λ^k, β^k) is small **then** STOP **end if**
    *k* = *k* + 1
    Update *w_g*^{k+1}, *w_h*^{k+1}, λ^{k+1}, β^{k+1} using Eqs. (5.19) and (5.22)
**end while**

It is noted that the penalty parameters change over subsequent iterations, and hence the augmented function *F* also changes over subsequent iterations. Hence, in each iteration the unconstrained optimization algorithm solves a different problem with a different objective *F*.
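The two-loop structure of Algorithm 5.1 can be sketched end to end for a single inequality constraint. The sketch below assumes the Example-5.2-style problem (minimize *x*² subject to *x* ≥ 3, an assumption since the example's statement is not in this excerpt) and replaces the BFGS/DFP inner solver with exact Newton steps, which suffice for this quadratic augmented function; the Δx and ΔF stopping tests are folded into a single feasibility check for brevity:

```python
def indirect_method(x0, w_g=1.0, C_g=10.0, n_im=20, tol=1e-10):
    """Sketch of Algorithm 5.1 with an exterior quadratic penalty for
    the assumed problem: minimize x**2 subject to g(x) = 3 - x <= 0.
    Newton steps stand in for the BFGS/DFP inner solver."""
    x = x0
    for k in range(n_im):
        # inner loop: minimize F(x) = x^2 + w_g * max(0, 3 - x)^2
        for _ in range(50):
            viol = max(0.0, 3.0 - x)
            grad = 2.0 * x - 2.0 * w_g * viol
            hess = 2.0 + (2.0 * w_g if viol > 0.0 else 0.0)
            step = grad / hess
            x -= step
            if abs(step) < tol:
                break
        # outer-loop convergence: constraint (nearly) satisfied
        if max(0.0, 3.0 - x) < 1e-6:
            return x
        w_g *= C_g   # update the penalty parameter, as in Eq. (5.19)
    return x
```

As discussed above, each outer iteration hands the inner solver a different objective *F*, and the iterates approach *x* = 3 from the nonfeasible side.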