Direct Methods

In Direct methods, the search for the optimal solution is carried out by sequential approximation of the objective and constraint functions. This approximation enables solving the constrained optimization problem at each iteration. Sequential quadratic programming is one of the most widely used Direct methods; it is presented below in Section 5.2.3. We first present the sequential linear programming algorithm.

Sequential Linear Programming

In Sequential Linear Programming (SLP), each of the functions f(x), h_i(x), and g_j(x) in (5.1) is linearly approximated using a Taylor expansion, for all i, j. As a result, we get a linear programming problem, which can be solved using the Simplex method, as discussed in Chapter 3. This linear approximation, however, is valid only in a small range around the linearization point. The optimal solution to the original problem (5.1) is unknown, and hence we do not know where to linearize the functions. To search for the optimal solution using the linearized model, we follow an iterative process. First, we start with an initial guess for the solution, and we linearize the problem functions around that guess. The linearized optimization problem around a current guess x^k is:

minimize    f(x^k) + ∇f(x^k)^T (x − x^k)
subject to  h_i(x^k) + ∇h_i(x^k)^T (x − x^k) = 0,   for all i
            g_j(x^k) + ∇g_j(x^k)^T (x − x^k) ≤ 0,   for all j

We then solve the above linear programming problem for x. The obtained solution is then used as the new guess x^{k+1} for the next iteration. The process repeats by linearizing all the functions around the current guess and solving for a new guess, until one of the stopping criteria is met. In addition to satisfying the constraints and side constraints, the stopping criteria are usually the following (a sketch of this loop is given after the list):

■ if f(x^{k+1}) > f(x^k), then stop; or

■ if (x − x^k)^T (x − x^k) is less than a specified tolerance, then stop.
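The loop below is a minimal sketch of this process in Matlab, using linprog as the linear programming solver. The problem, the move limits delta, and the starting point are hypothetical choices made only for illustration; they are not part of the SLP method itself. Move limits (bounds on how far x may move from x^k) are added because the linearized problem can otherwise be unbounded.

    % Hypothetical problem: minimize f(x) = -x1 - 2*x2 subject to g(x) = x1^2 + x2^2 - 5 <= 0.
    % (The true optimum of this hypothetical problem is x* = [1; 2].)
    gradf = [-1; -2];                        % f is linear, so its gradient is constant
    g     = @(x) x(1)^2 + x(2)^2 - 5;
    gradg = @(x) [2*x(1); 2*x(2)];

    xk = [1; 1];                             % initial guess
    for k = 1:50
        delta = 0.7^(k-1);                   % shrinking move limits keep the linearization valid
        A = gradg(xk)';                      % linearized constraint: g(xk) + gradg(xk)'*(x - xk) <= 0
        b = gradg(xk)'*xk - g(xk);
        xnew = linprog(gradf, A, b, [], [], xk - delta, xk + delta);
        if (xnew - xk)'*(xnew - xk) < 1e-8, break; end   % stopping criterion from the list above
        xk = xnew;
    end
    % Without shrinking move limits the iterates tend to oscillate around the curved
    % constraint boundary, which is one reason SLP is considered not robust.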

The sequential linear programming method is not widely used: because the linear approximation is valid only in a very small range around the linearization point, the method is not robust.

Quadratic Programming

The quadratic programming algorithm searches for the optimal solution of optimization problems in which the objective function is a quadratic function and all constraint functions are linear. There are several approaches that can be used to solve this type of problem. We start by reviewing some of the characteristics of quadratic functions.

For a quadratic function of one variable, f(y) = ay² + by + c, the gradient is df/dy = 2ay + b. The Hessian is a scalar in this case and is given as d²f/dy² = 2a.

If we want to minimize f(y), then the necessary condition for optimality, df/dy = 2ay* + b = 0, yields the extreme point y* = −b/(2a).
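For example, with hypothetical coefficients a = 2, b = −8, and c = 3, the extreme point is y* = −(−8)/(2·2) = 2, and since the Hessian 2a = 4 > 0, this point is a minimum.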

A quadratic function of n variables can be written in matrix form as:

f(x) = x^T [A] x + b^T x + c

For the above quadratic function we can write the gradient vector as:

∇f(x) = ([A] + [A]^T) x + b

The Hessian matrix is:

[H] = [A] + [A]^T

If we want to minimize f(x), then the necessary condition for optimality yields:

([A] + [A]^T) x* + b = 0

If the matrix ([A] + [A]^T) is invertible, the extremum point is then:

x* = −([A] + [A]^T)^{-1} b
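As a quick numerical check of this formula, the short Matlab sketch below evaluates the extremum of a quadratic with hypothetical (illustrative) values of [A], b, and c:

    % Hypothetical data: f(x) = x'*A*x + b'*x + c with a nonsymmetric A
    A = [3 1; 0 2];   b = [-2; -4];   c = 5;
    H = A + A';                 % Hessian of f
    xstar = -H \ b;             % extremum point x* = -([A] + [A]')^(-1) * b
    H*xstar + b                 % gradient at x*; it evaluates to zero as expected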

In the remainder of this section we will look at optimization problems in which the objective function f(x) is quadratic, as defined in Eq. (5.29), and all the equality and inequality constraints are linear. This will be presented through examples.

Example 5.3. Use the variable elimination method to find the optimal solution for the following problem:

Solution:

We have only one equality constraint and it is linear. Hence we can use this constraint to reduce the number of variables and eliminate the constraint function as follows. From the constraint in (5.35), we can write:

Substituting from Eq. (5.35) into f(x, y), we get:

Hence the optimization problem given in (5.35) can be rewritten as an unconstrained optimization problem as follows:

Moreover, the objective function is quadratic, which has a closed-form analytic solution for the extreme point, given in Eq. (5.28). So the optimal solution is at y* = −2/…; substituting y* into Eq. (5.35), we get x* = 1/….

Example 5.4. Solve the optimization problem given in Example 5.3 using the Exterior Penalty Function method.

Solution:

In this problem there is only one equality constraint; hence we introduce a penalty function with only one parameter w_h. The optimization of the augmented function F(x, y, w_h) in this case is then:

The function F(x, y, w_h) can be expressed in matrix form as:

Minimizing the function F(x, y, w_h) is an unconstrained optimization problem, and the objective function is quadratic. The necessary conditions of optimality, as given by Eq. (5.34), yield the solution:

where:

The Exterior Penalty Function method approaches the optimal solution as w_h gets larger; hence we can write:
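The sketch below applies the Exterior Penalty Function idea to the same hypothetical problem used above (minimize x² + y² subject to x + 2y = 3; it is not the problem of Example 5.4). The augmented function is F = x² + y² + w_h (x + 2y − 3)², which is quadratic, so its stationary point for each value of w_h follows from the necessary conditions above:

    % Hypothetical problem: minimize x^2 + y^2 subject to x + 2y = 3
    a = [1; 2];  beq = 3;
    for wh = [1 10 100 1000 10000]
        Q = eye(2) + wh * (a * a');          % quadratic part of F(x, y, w_h)
        c = -2 * wh * beq * a;               % linear part of F(x, y, w_h)
        z = -(Q + Q') \ c;                   % necessary condition: (Q + Q')*z + c = 0
        fprintf('w_h = %6d   x = %.4f   y = %.4f\n', wh, z(1), z(2));
    end
    % As w_h grows, the solution approaches (3/5, 6/5), the answer obtained by
    % variable elimination in the hypothetical example above.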

The above two examples showed how to solve quadratic optimization problems when there are only linear equality constraints. In the presence of linear inequality constraints, the inequality constraints can be handled in a similar way to the equality constraints if methods such as the Exterior Penalty Function or the Augmented Lagrange Multiplier are used. The inequality constraints can be handled more efficiently, however, if we notice that an inequality constraint will only affect the optimal solution if it is active at the optimal solution (i.e., g_j(x*) = 0, where x* is the optimal solution); in such a case the active inequality constraints can be handled as equality constraints. To better see that, consider the very simple case of minimizing the quadratic function f(x) = x² + 5 subject to the constraint x ≤ −2. In this case, the optimal solution to the unconstrained problem is x = 0, but obviously x = 0 is outside the feasible domain. One can conclude that the inequality constraint has to be active in this case, and the optimal solution can be found if we minimize f(x) subject to the constraint x = −2. In the more general case, given the m inequality constraints, at each iterate x^k one needs to determine the set of active constraints J_ac(x^k) := {j ∈ {1, …, m} : g_j(x^k) = 0}, and its complement (the set of inactive constraints) J_in(x^k) := {1, …, m} \ J_ac(x^k). In this approach for solving quadratic programming problems, the set J_ac(x^k) contains those inequality constraints that are not satisfied at the current iterate x^k, and they are handled as equality constraints, while the set J_in(x^k) includes the rest of the inequality constraints, and they can be ignored at the current iterate x^k. This is further illustrated in the following example.
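For the simple example just mentioned (f(x) = x² + 5 with the constraint x ≤ −2), this logic can be checked directly. The quadprog call below is only an illustration; note that quadprog minimizes ½x'Hx + f'x, so the constant 5 is left out of the call:

    H = 2;  f = 0;                    % f(x) = x^2 + 5, written as (1/2)*H*x^2 + f*x (+ 5)
    x_unc = -H \ f;                   % unconstrained minimizer: x = 0, which violates x <= -2
    x_con = quadprog(H, f, 1, -2);    % impose the inequality 1*x <= -2; the constraint is active
    fval  = x_con^2 + 5;              % optimal value f(x*) = 9 at x* = -2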

Example 5.5.

Solution: Variable elimination can be used to eliminate the equality constraint h(x, y) and the variable x from the objective function f, as discussed in Example 5.3. Ignoring the inequality constraint for now, the problem reduces to:

For the optimization problem (5.44), the optimal solution is x = [1.4, 1.7]^T. Now we check whether the obtained solution satisfies the inequality constraint by direct substitution in g(x). In this example, the inequality constraint is satisfied, and hence the solution is complete. That is, x* = [1.4, 1.7]^T.

Now consider solving the same problem, but with a different inequality constraint function. The new g(x) is defined as:

In this case, the solution x = [1.4, 1.7]^T does not satisfy the inequality constraint. Hence g(x) is active; that is, the optimization problem can now be written as:

It is straightforward to show that the optimal solution for this problem is x* = [−1/2, 3/4]^T. The quadratic programming method applies only to problems with quadratic objective functions and linear constraint functions. Yet, it is a building block in the more widely used method called Sequential Quadratic Programming. The function quadprog.m in Matlab solves quadratic programming problems.
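For a quadratic programming problem written as minimize ½ x'[H]x + f'x subject to [A]x ≤ b and [A_eq]x = b_eq (note the ½ convention used by quadprog), a typical call is:

    x = quadprog(H, f, A, b, Aeq, beq);

Lower and upper bounds on x (the side constraints) can be passed as two additional arguments.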

Sequential Quadratic Programming

Sequential Quadratic Programming (SQP) is an iterative method that solves the general optimization problem in a direct way by approximating the objective and constraint functions. A quadratic function is used to approximate the objective function at the current guess (iterate) for the solution, while a linear function is used to approximate each of the constraint functions at each iterate. Hence the SQP method approximates the general nonlinear programming problem by a quadratic programming problem at each iteration. SQP is one of the most effective methods. It was first proposed in the 1960s by R. Wilson in his PhD dissertation at Harvard University. Since then, SQP has evolved significantly, based on a rigorous theory, into a class of methods suitable for a wide range of problems.

The standard nonlinear programming problem is given in (5.1). Using SQP, in each iteration k, the guess x^k is used to model the given nonlinear programming problem as a quadratic programming subproblem. This subproblem is solved, and the solution is used to compute the new iterate x^{k+1}. So, at each iteration we replace the objective function f(x) by its local quadratic approximation as follows:

f(x) ≈ f(x^k) + ∇f(x^k)^T Δx + ½ Δx^T ∇²f(x^k) Δx

where Δx = x − x^k. The constraint functions are approximated by their local linear approximations as follows:

h_i(x) ≈ h_i(x^k) + ∇h_i(x^k)^T Δx,    g_j(x) ≈ g_j(x^k) + ∇g_j(x^k)^T Δx

Hence, at each iteration we solve the following quadratic programming subproblem for Δx:

minimize    ∇f(x^k)^T Δx + ½ Δx^T ∇²f(x^k) Δx
subject to  h_i(x^k) + ∇h_i(x^k)^T Δx = 0,   for all i
            g_j(x^k) + ∇g_j(x^k)^T Δx ≤ 0,   for all j

The output from solving this subproblem is Δx^k at the current iteration k; this Δx^k is used to update the current guess x^k using the update equation:

x^{k+1} = x^k + Δx^k

The new iterate x^{k+1} is then used to compute new approximations of f, h_i, and g_j, for all i and j, at x^{k+1}. This process repeats until one of the stopping criteria is satisfied. The convergence stopping criterion is that all constraint functions of the original nonlinear programming problem are satisfied and the first-order conditions of the objective function of the original nonlinear programming problem are satisfied at the current iterate. Another stopping criterion is when there is no significant change in Δx over subsequent iterations. Also, the algorithm should stop when a maximum number of iterations is completed. Algorithm 5.2 shows the SQP algorithm.

One might think of the subsequent iterates x^k as a sequence that converges to a local minimum x* of the given nonlinear programming problem as k approaches ∞. One major advantage of the SQP method is that the iterates x^k need not be feasible; in other words, the iterates x^k do not have to satisfy the equality or inequality constraints of the problem.

Algorithm 5.2 Sequential Quadratic Programming Algorithm

Select an initial guess for the solution x^0
Select a stopping tolerance ε and a maximum number of iterations N_itr
k = 0
while k < N_itr do
    Call the quadratic programming solver to optimize the quadratic programming subproblem, and output Δx^k
    x^{k+1} = x^k + Δx^k
    if h = 0 and g ≤ 0 and the KT conditions are satisfied then STOP end if
    if ||Δx^k|| < ε then STOP end if
    if k = N_itr then STOP end if
    k = k + 1
end while
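In Matlab, an SQP iteration of this kind is available through fmincon with the 'sqp' algorithm option. The problem below is a hypothetical example chosen only to show the call pattern; its solution is approximately x* = [0.5, 1.5]:

    % Hypothetical problem: minimize (x1 - 1)^2 + (x2 - 2)^2
    % subject to x1 + x2 = 2 (linear equality) and x1^2 - x2 <= 0 (nonlinear inequality)
    f       = @(x) (x(1) - 1)^2 + (x(2) - 2)^2;
    nonlcon = @(x) deal(x(1)^2 - x(2), []);      % returns [c, ceq] with c(x) <= 0
    Aeq = [1 1];  beq = 2;
    x0  = [0; 0];                                % the initial guess need not be feasible
    opts = optimoptions('fmincon', 'Algorithm', 'sqp');
    [xstar, fstar] = fmincon(f, x0, [], [], Aeq, beq, [], [], nonlcon, opts);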

 