Unconstrained Optimization

This chapter presents some numerical algorithms that can be used to solve unconstrained optimization problems. The algorithms used to solve this type of problem fall into two main categories: gradient-based algorithms and non-gradient-based algorithms. Algorithms in the former category need to compute the gradient of the objective function with respect to each of the design variables. In the latter category, on the other hand, gradient information is not needed. While gradient information helps convergence to the optimal solution, it is not always easy to compute. Both categories of methods are widely used in different applications. The nonlinear optimization problem considered in this chapter can be written as:

where x € R", R" is the n-dimensional Euclidian space, and / : R" —> R.

Non-Gradient Algorithms

Algorithms in this category of methods start with an initial guess for the solution x^0 and update that guess iteratively until a stopping criterion is satisfied. The iterative update equation takes the form:

$$\mathbf{x}^{k+1} = \mathbf{x}^{k} + \alpha^{k} \mathbf{s}^{k} \qquad (4.3)$$

where x^k is the current guess for the solution, x^{k+1} is the next guess for the solution, s^k is the search direction at the current iteration, and α^k is a scalar that is optimized in each iteration and determines the step size. Different methods have different strategies for selecting the search direction in each iteration, but most of them use the same approach for computing the step size α^k. Suppose that the search direction s^k at the k-th iteration is given; then the new iterate can be computed using Eq. (4.3), and it will be a function of the step size, x^{k+1}(α^k). The objective function f is then evaluated at the new iterate, and hence the objective function becomes a function of the step size, f(α^k). The function f(α^k) depends on only one variable, α^k, and it can be minimized over α^k to find the optimal step size. Optimizing a function of one variable can be carried out either analytically or numerically using several algorithms, such as the Golden Section method. This process of computing the step size is repeated in each iteration. Below is a description of some of the basic non-gradient algorithms.
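
For concreteness, the following is a minimal sketch of how the optimal step size could be computed numerically with the Golden Section method. The bracket, tolerance, and helper names (golden_section, step_size) are illustrative choices, not something prescribed by the text.

```python
import math

def golden_section(phi, a, b, tol=1e-6):
    """Minimize a one-variable function phi(alpha) on the bracket [a, b],
    assuming phi is unimodal on that bracket."""
    r = (math.sqrt(5.0) - 1.0) / 2.0            # golden-ratio reduction factor, ~0.618
    c, d = b - r * (b - a), a + r * (b - a)
    while abs(b - a) > tol:
        if phi(c) < phi(d):                     # minimum lies in [a, d]
            b, d = d, c
            c = b - r * (b - a)
        else:                                   # minimum lies in [c, b]
            a, c = c, d
            d = a + r * (b - a)
    return 0.5 * (a + b)

def step_size(f, x, s, bracket=(-10.0, 10.0)):
    """Optimal step along direction s: minimize phi(alpha) = f(x + alpha * s)."""
    phi = lambda alpha: f([xi + alpha * si for xi, si in zip(x, s)])
    return golden_section(phi, *bracket)
```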

Cyclic Coordinate Descent Method

In this method, the iterations are organized in cycles. For x ∈ R^n, each cycle has n iterations. In each iteration of a cycle, the search direction s is selected to be the unit vector of one of the design variables. In subsequent iterations within a cycle, the search direction cycles through the design variables in order, and the search directions repeat in subsequent cycles. For example, suppose we have three design variables, n = 3. Then each cycle has three iterations: the first search direction is s = [1, 0, 0]^T, the second search direction is s = [0, 1, 0]^T, and the third search direction is s = [0, 0, 1]^T.
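
A short sketch of one possible implementation of this cycling scheme is given below. It uses NumPy and SciPy's one-dimensional minimizer (with its golden-section option) for the step-size subproblem; the function name, cycle limit, and tolerance are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cyclic_coordinate_descent(f, x0, n_cycles=50, tol=1e-8):
    """Each cycle performs n line searches, one along each coordinate axis."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(n_cycles):
        x_prev = x.copy()
        for i in range(n):
            s = np.zeros(n)
            s[i] = 1.0                              # unit vector of design variable i
            phi = lambda a: f(x + a * s)            # objective as a function of the step size
            alpha = minimize_scalar(phi, method='golden').x
            x = x + alpha * s
        if np.linalg.norm(x - x_prev) < tol:        # stop when a full cycle no longer moves x
            break
    return x
```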

Pattern Search Method

In some problems, the Cyclic Coordinate Descent Method may get trapped in a zigzag pattern when the iterate gets close to the optimal solution. To overcome this problem, the Pattern Search Method adds one more iteration to each cycle. The search direction of this additional iteration at the end of each cycle is computed as the sum of the previous n search directions, each weighted by its step size; that is:

$$\mathbf{s}^{p} = \sum_{i=1}^{n} \alpha^{i} \mathbf{s}^{i}$$

where the sum runs over the n iterations of the current cycle.

This added iteration in each cycle disturbs the zigzag pattern.
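
Below is a sketch of how such a cycle might look in code, assuming the same SciPy-based golden-section line search as before. The pattern move is taken along s^p from the design at the start of the cycle, which matches the worked example that follows; all names, limits, and tolerances are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def pattern_search(f, x0, n_cycles=50, tol=1e-8):
    """Each cycle: n coordinate line searches followed by one pattern iteration."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(n_cycles):
        x_start = x.copy()                          # design at the beginning of the cycle
        for i in range(n):                          # n coordinate-direction iterations
            s = np.zeros(n)
            s[i] = 1.0
            alpha = minimize_scalar(lambda a: f(x + a * s), method='golden').x
            x = x + alpha * s
        s_p = x - x_start                           # pattern direction: sum of the weighted moves
        if np.linalg.norm(s_p) < tol:
            break
        alpha_p = minimize_scalar(lambda a: f(x_start + a * s_p), method='golden').x
        x = x_start + alpha_p * s_p                 # pattern iteration closes the cycle
    return x
```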

Example 4.1. Consider the cantilever beam shown in Fig. 4.1. Due to the tip load Q, the tip point will be displaced and rotated. It is required to compute the linear displacement x1 and the rotational displacement θ1.

The displacement of a beam can be analyzed assuming that each end of the beam has two degrees of freedom: the vertical displacement and the rotational displacement, as shown in Fig. 4.1. In the case of a clamped cantilever beam,


Figure 4.1: Cantilever beam displacement is the optimal solution of minimizing the potential energy function.

the two degrees of freedom at the clamped end vanish; that is, x3 = θ4 = 0. Let ξ = x/L; the vertical displacement along the beam can then be written in terms of the tip degrees of freedom x1 and θ1.

The deflection along the beam can then be found by minimizing its potential energy; the potential energy can be expressed as:

$$\Pi = \frac{EI}{2L^{3}} \left( 12 x_{1}^{2} - 12 L x_{1} \theta_{1} + 4 L^{2} \theta_{1}^{2} \right) + Q x_{1} \qquad (4.5)$$

where E is the Young's modulus and I is the beam cross-section moment of inertia. For a simpler form of Eq. (4.5), let f = 2ΠL^3/(EI), x̄1 = x1, x̄2 = θ1 L, QL^3/(EI) = 1, and x̄ = [x̄1 x̄2]^T. Then the objective function that we need to minimize takes the form:

$$f(\bar{\mathbf{x}}) = 12 \bar{x}_{1}^{2} - 12 \bar{x}_{1} \bar{x}_{2} + 4 \bar{x}_{2}^{2} + 2 \bar{x}_{1}$$

Solve the above optimization problem to find the tip displacement using the Pattern Search Algorithm. Assume the initial guess for the solution is x^0 = [-1 -2]^T.
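
As a quick sanity check before starting the iterations, the snippet below evaluates the objective above at the given initial guess; the function name f and the use of NumPy are illustrative choices.

```python
import numpy as np

def f(x):
    # Normalized potential energy, f = 2*PE*L^3/(E*I), in the scaled
    # variables x[0] = x1 and x[1] = theta1*L
    return 12 * x[0]**2 - 12 * x[0] * x[1] + 4 * x[1]**2 + 2 * x[0]

x0 = np.array([-1.0, -2.0])
print(f(x0))    # 2.0, matching the objective value quoted for the initial guess
```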

Solution:

Pattern Search Algorithm: Since we have two variables, each cycle in the Pattern Search Algorithm has three iterations. At x^0, the value of the objective function is f(x^0) = 2. The first search direction is s^0 = [1 0]^T. Hence, the next guess can be expressed as a function of the step size α^0 as follows:

$$\mathbf{x}^{1} = \mathbf{x}^{0} + \alpha^{0} \mathbf{s}^{0} = \left[-1 + \alpha^{0} \;\; -2\right]^{T}$$

The objective function at the new design x^1 is a function of α^0:

$$f(\alpha^{0}) = 12\left(-1+\alpha^{0}\right)^{2} - 12\left(-1+\alpha^{0}\right)(-2) + 4(-2)^{2} + 2\left(-1+\alpha^{0}\right) = 12(\alpha^{0})^{2} + 2\alpha^{0} + 2$$

The function f(α^0) is quadratic in α^0, and its minimum point can be obtained directly by setting df/dα^0 = 24α^0 + 2 = 0, which gives α^0 = -2/24 = -0.0833. Hence, the new design is x^1 = [-13/12 -2]^T and the new objective function value is f = 1.9167. This completes the first iteration.

The second iteration has the search direction s^1 = [0 1]^T. Hence, the next guess can be expressed as a function of the step size α^1 as follows:

$$\mathbf{x}^{2} = \mathbf{x}^{1} + \alpha^{1} \mathbf{s}^{1} = \left[-\tfrac{13}{12} \;\; -2 + \alpha^{1}\right]^{T}$$

The objective function at the new design x^2 is a function of α^1:

$$f(\alpha^{1}) = 12\left(-\tfrac{13}{12}\right)^{2} - 12\left(-\tfrac{13}{12}\right)\left(-2+\alpha^{1}\right) + 4\left(-2+\alpha^{1}\right)^{2} + 2\left(-\tfrac{13}{12}\right) = 4(\alpha^{1})^{2} - 3\alpha^{1} + \tfrac{23}{12}$$

The function f(α^1) is quadratic in α^1, and its minimum point can be obtained directly by setting df/dα^1 = 8α^1 - 3 = 0, which gives α^1 = 3/8 = 0.375. Hence, the new design is x^2 = [-13/12 -13/8]^T and the new objective function value is f = 1.35416. This completes the second iteration.

The third iteration (the pattern search iteration) has the search direction s^p = α^0 s^0 + α^1 s^1 = [-1/12 3/8]^T. Hence, starting from the design at the beginning of the cycle, the next guess can be expressed as a function of the step size α^p as follows:

$$\mathbf{x}^{3} = \mathbf{x}^{0} + \alpha^{p} \mathbf{s}^{p} = \left[-1 - \tfrac{\alpha^{p}}{12} \;\; -2 + \tfrac{3\alpha^{p}}{8}\right]^{T}$$

The objective function at the new design x^3 is also a quadratic function of α^p, for which the optimal step size is α^p = 40/49. Hence, the new design is x^3 = [-157/147 -83/49]^T and the new objective function value is f = 1.319727. This completes the third iteration and the first cycle.

The above process repeats until a stopping criterion is satisfied. Table 4.1 shows the numerical results for all the iterations. The algorithm was stopped after 19 iterations because there was no further change in the design variables x1 and x2. Hence, the solution is x = [-0.3334 -0.5001]^T.

Table 4.1: Cantilever Beam Deflection using the Pattern Search Algorithm.

Iteration      x1         x2          f          α
    1       -1.0000    -2.0000     2.0000     0
    2       -1.0833    -2.0000     1.9167     0.0833
    3       -1.0833    -1.6254     1.3540     0.3750
    4       -1.0681    -1.6938     1.3199     0.1829
    5       -0.9307    -1.6938     1.0918     0.1378
    6       -0.9307    -1.3965     0.7372     0.2973
    7       -0.5541    -0.5818     0.0617     2.7411
    8       -0.3744    -0.5818    -0.3266     0.1797
    9       -0.3744    -0.5619    -0.3283     0.0199
   10       -0.3631    -0.5606    -0.3297     0.0628
   11       -0.3635    -0.5606    -0.3297     0.0005
   12       -0.3635    -0.5455    -0.3306     0.0152
   13       -0.3635    -0.5460    -0.3306     0.0315
   14       -0.3564    -0.5460    -0.3312     0.0072
   15       -0.3564    -0.5351    -0.3317     0.0113
   16       -0.3333    -0.5002    -0.3333     3.2111
   17       -0.3334    -0.5002    -0.3333     0.0001
   18       -0.3334    -0.5001    -0.3333     0.0001
   19       -0.3334    -0.5001    -0.3333     0.1497
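
For reference, the short self-contained script below restates the objective of Example 4.1 and runs the same coordinate-plus-pattern cycles numerically, using SciPy's golden-section line search in place of the analytical step-size calculation. The cycle count and names are illustrative assumptions; the result agrees with the solution reported in Table 4.1.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def f(x):
    # Normalized potential energy of Example 4.1
    return 12 * x[0]**2 - 12 * x[0] * x[1] + 4 * x[1]**2 + 2 * x[0]

x = np.array([-1.0, -2.0])                          # initial guess x^0
for cycle in range(10):
    x_start = x.copy()                              # design at the start of the cycle
    for i in range(2):                              # two coordinate-direction iterations
        s = np.zeros(2)
        s[i] = 1.0
        a = minimize_scalar(lambda t: f(x + t * s), method='golden').x
        x = x + a * s
    s_p = x - x_start                               # pattern direction for this cycle
    a_p = minimize_scalar(lambda t: f(x_start + t * s_p), method='golden').x
    x = x_start + a_p * s_p                         # pattern iteration
print(x, f(x))                                      # approaches [-0.3333, -0.5000], f = -0.3333
```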

Powell’s Method

Powell's Method is similar to the Pattern Search Method, except that the first n search directions in a cycle are copied from the last n iterations of the previous cycle. This method is very popular due to its convergence characteristics: Powell's Method requires only n cycles to converge to the optimal solution when minimizing a quadratic function of n variables.
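
A minimal sketch of a basic variant of Powell's Method is shown below, assuming the same SciPy-based golden-section line search as in the earlier sketches. Here the oldest direction is dropped at the end of each cycle and the cycle's net (pattern) direction is appended, which is one common way to carry the previous directions forward; the names and limits are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def powell(f, x0, n_cycles=30, tol=1e-8):
    """Basic Powell's Method: the direction set starts as the coordinate axes;
    after each cycle the oldest direction is discarded and the cycle's net
    displacement is appended, so later cycles reuse earlier search directions."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    directions = list(np.eye(n))                    # initial search directions
    for _ in range(n_cycles):
        x_start = x.copy()
        for s in directions:                        # n line searches along the current set
            alpha = minimize_scalar(lambda a: f(x + a * s), method='golden').x
            x = x + alpha * s
        s_p = x - x_start                           # pattern direction for this cycle
        if np.linalg.norm(s_p) < tol:
            break
        alpha_p = minimize_scalar(lambda a: f(x + a * s_p), method='golden').x
        x = x + alpha_p * s_p
        directions = directions[1:] + [s_p]         # carry the newest direction forward
    return x

# e.g. powell(lambda x: 12*x[0]**2 - 12*x[0]*x[1] + 4*x[1]**2 + 2*x[0], [-1.0, -2.0])
```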

 