Numerical Algorithms for Constrained Optimization
In general, constrained optimization problems are more challenging to solve than unconstrained optimization problems. The general form of the constrained optimization problem is:

$$\min_{x} \ f(x) \quad \text{subject to:} \quad h_i(x) = 0, \quad g_j(x) \le 0, \quad \forall i, j \tag{5.1}$$

where $x \in \mathbb{R}^n$, $\mathbb{R}^n$ is the $n$-dimensional Euclidean space, $h_i: \mathbb{R}^n \to \mathbb{R}$, $g_j: \mathbb{R}^n \to \mathbb{R}$, and $f: \mathbb{R}^n \to \mathbb{R}$, $\forall i, j$.
There are several numerical algorithms that can be used to search for the optimal solution of a function in the presence of constraints. These methods fall, in general, into two categories. The first category, the Direct Methods, includes methods that approximate the objective and constraint functions in order to simplify the problem. The second category, the Indirect Methods, includes methods that convert the constrained optimization problem into a sequence of unconstrained optimization problems, so that unconstrained optimization algorithms can be leveraged to solve the more complex constrained problems. In both types of methods, the numerical algorithm usually starts with an initial guess $x^0$, and then updates this guess over subsequent iterations using the update equation:

$$x^{k+1} = x^k + \alpha^k s^k \tag{5.2}$$
The search direction $s^k$ and step size $\alpha^k$ vary from one method to another. This chapter summarizes some of the methods in each category. We start with the Indirect Methods.
Indirect Methods
As mentioned above, Indirect Methods convert the constrained optimization problem into a sequence of unconstrained optimization problems. This is achieved by augmenting the objective function with "penalty functions" that penalize the violation of the constraints. In other words, we search for the optimal solution of the optimization problem (5.1) by solving another unconstrained optimization problem of the form:

$$\min_{x} \ F(x, w_h, w_g) = f(x) + P(x, w_h, w_g) \tag{5.3}$$

where $x \in \mathbb{R}^n$, and $w_h$ and $w_g$ are penalty multipliers associated with the equality and inequality constraints, respectively. The equality and inequality constraint functions are all included in the penalty function, as detailed in the sections below. The form of the penalty function $P(x, w_h, w_g)$ and how it includes the constraint functions depend on the specific method.
In general, the selection of the penalty parameter values determines the relative weight given to optimality (minimizing $f(x)$) versus feasibility of the obtained solution (satisfying the constraints). The unconstrained problem (5.3) is solved several times, sequentially, using a different selection of the penalty parameters in each iteration. At each iteration in this sequence, an unconstrained optimization algorithm is used to search for the optimal solution of (5.3), where the initial guess is the solution obtained from the previous iteration. An algorithm for indirect methods is given later in this chapter in Algorithm 5.1. Some of the main Indirect Methods are summarized below.
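As a rough illustration of this sequential process, the sketch below implements the outer loop in Python. It is a minimal sketch, not the chapter's Algorithm 5.1: the function name `indirect_method`, the penalty interface `penalty(x, w)`, and the shrink factor `c` are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def indirect_method(f, penalty, x0, w0=1e3, c=0.1, n_outer=20, tol=1e-8):
    """Solve a constrained problem as a sequence of unconstrained ones:
    minimize F(x) = f(x) + P(x, w), warm-starting each solve with the
    previous solution and updating the penalty parameter w in between."""
    x, w = np.asarray(x0, dtype=float), w0
    for _ in range(n_outer):
        res = minimize(lambda z: f(z) + penalty(z, w), x)  # inner unconstrained solve
        if np.linalg.norm(res.x - x) < tol:                # outer convergence check
            return res.x
        x, w = res.x, c * w                                # warm start; update parameter
    return x
```

Each pass of the loop corresponds to one iteration of the sequence described above; in practice, the inner solver and the schedule for the penalty parameter are tuned to the specific problem.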
Barrier Methods
The Barrier methods construct the augmented function of the associated unconstrained optimization problem in such a way that the solution at any iteration does not leave the feasible domain. These methods are also called the Interior Penalty Function Methods for that same reason: the solutions obtained in subsequent iterations remain inside the feasible domain. The fact that the solutions are always in the feasible domain implies that this type of method does not work for problems that have equality constraints, since in the presence of equality constraints there might be no interior domain to iterate on. The Barrier methods work for problems that have only inequality constraints. In Barrier methods, the penalty function $P(x, w_g)$ is continuous and nonnegative over the feasible domain, and it approaches infinity as the solution approaches the boundary of the feasible domain. There are several candidates for the penalty function; two of them are:

$$P(x, w_g) = -w_g \sum_{j} \frac{1}{g_j(x)}, \qquad P(x, w_g) = -w_g \sum_{j} \log\left(-g_j(x)\right)$$
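As a hedged sketch, both candidates can be written as short Python functions. Here `gs` is an assumed list of constraint callables $g_j$, and the point `x` is assumed strictly feasible ($g_j(x) < 0$) so that both expressions are defined.

```python
import numpy as np

def inverse_barrier(x, w, gs):
    # P(x, w_g) = -w_g * sum_j 1/g_j(x); since g_j(x) < 0 inside the
    # feasible domain, this is nonnegative and blows up as g_j(x) -> 0-.
    return -w * sum(1.0 / g(x) for g in gs)

def log_barrier(x, w, gs):
    # P(x, w_g) = -w_g * sum_j log(-g_j(x)); defined only where g_j(x) < 0,
    # and it approaches +infinity at the boundary of the feasible domain.
    return -w * sum(np.log(-g(x)) for g in gs)
```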
When using the Barrier methods, we start the iterations with a high value for the penalty parameter $w_g$, and then we decrease it over subsequent iterations. This basically means that in the initial iterations we give high weight to keeping the solutions away from the feasible domain boundaries, toward its interior. In subsequent iterations, we give increasing weight to the optimality of the solution.
Example 5.1. Find the minimum of $f(x) = x^2$ subject to $x \ge 3$, using the Barrier method.

Solution: Writing the constraint as $g(x) = 3 - x \le 0$ and using the logarithmic penalty function, the associated unconstrained optimization problem can be written as:

$$\min_x \ F(x, w_g) = x^2 - w_g \log(x - 3)$$

The feasible domain is $x > 3$ in order for the log function to be valid. The derivative of $F$ is:

$$\frac{dF}{dx} = 2x - \frac{w_g}{x - 3}$$

To find the minimum of $F$, we set the derivative $\frac{dF}{dx} = 0$. Hence,

$$2x(x - 3) - w_g = 0 \quad \Longrightarrow \quad x = \frac{3 \pm \sqrt{9 + 2 w_g}}{2} \tag{5.9}$$

We can find the optimal solution by taking the limit as $w_g \to 0$:

$$\lim_{w_g \to 0} x = \frac{3 \pm 3}{2} = 3 \ \text{or} \ 0$$

The solution $x = 0$ is excluded because it is outside the feasible domain ($x \ge 3$). Hence the optimal solution is $x^* = 3$.
Equation (5.9) can be used to plot the obtained solution for different values of the penalty parameter. Figure 5.1 shows the values of $x$ for a range of $w_g = 10^3 \cdots 10^{-8}$, where it is clear that $x$ converges to the optimal solution as $w_g$ decreases. It is observed from Fig. 5.1 that all values of $x$ for different values of $w_g$ are in the feasible domain $x > 3$.

Figure 5.1: Using Barrier methods, as the penalty parameter decreases, the solution converges to the optimal value.
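A few lines of Python suffice to reproduce the sweep behind Figure 5.1, using the feasible (positive) root of Equation (5.9); the grid of twelve logarithmically spaced values is an illustrative choice.

```python
import numpy as np

x_of_w = lambda w: (3 + np.sqrt(9 + 2 * w)) / 2   # feasible root of Eq. (5.9)

for w in np.logspace(3, -8, 12):                  # w_g = 10^3 ... 10^-8
    print(f"w_g = {w:9.1e}  ->  x = {x_of_w(w):.8f}")
# Every printed x is > 3, and x -> 3 as w_g -> 0, matching Fig. 5.1.
```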
In the above simple example, it was possible to express the optimal solution in terms of the penalty parameter $w_g$, and hence it was possible to take the limit and get the optimal solution. In more complex problems, this is not possible. Rather, we select a value for the penalty parameter $w_g$; then we numerically solve the unconstrained optimization problem (5.3). The obtained solution for $x$ is then used as an initial guess, and we solve the unconstrained optimization problem (5.3) using a new value for $w_g$. This iterative process repeats until convergence is achieved. The penalty parameter $w_g$ is updated in every iteration using a simple update equation:

$$w_g^{k+1} = C_g \, w_g^k$$

where $k$ is the iteration index, and $C_g$ is a constant multiplier whose value depends on the specific problem; for the Barrier methods, $0 < C_g < 1$ so that $w_g$ decreases over the iterations. An algorithm for indirect methods is given in Algorithm 5.1, which applies to the Barrier methods.
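For completeness, here is a minimal numerical version of this loop for Example 5.1. The starting point $x^0 = 10$, the initial parameter $w_g^0 = 10^3$, the multiplier $C_g = 0.1$, and the Nelder-Mead inner solver are all illustrative assumptions rather than the chapter's choices.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] ** 2                         # objective of Example 5.1

def F(x, w):                                    # log-barrier augmented function
    return np.inf if x[0] <= 3 else f(x) - w * np.log(x[0] - 3)

x, w, Cg = np.array([10.0]), 1e3, 0.1           # initial guess, w_g^0, C_g < 1
for k in range(12):                             # w_g sweeps 10^3 down to 10^-8
    x = minimize(F, x, args=(w,), method="Nelder-Mead").x  # warm-started solve
    w *= Cg                                     # update: w_g^{k+1} = C_g * w_g^k
print(x[0])                                     # converges toward x* = 3
```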