# Section III: Variable-Size Design Space Optimization

Systems architecture optimization problems usually involve categorical variables and sometimes span multiple disciplines. In this type of problem, the size of the design space may itself be variable. This part of the book focuses on that latter aspect of systems architecture optimization: problems in which the number of optimization variables is itself a variable.

Local optimization algorithms search for an optimal solution in the neighborhood of an initial guess, whereas global optimization methods search the whole design space. Genetic algorithms are one example of global optimization algorithms. The following chapters present recently developed variations on the genetic algorithm that make it more suitable for solving system architecture optimization problems.

## Hidden Genes Genetic Algorithms

### Introduction to Global Optimization

Optimization problems may be classified based on the existence of constraints, the nature of the design variables, and the nature of the equations involved. Local optimization methods find a local minimum given an initial guess in its neighborhood. Methods that need gradient information, such as those discussed in previous chapters, are local optimization methods. In reality, objective functions may have a large number of local minima, and finding the global minimum of a function is a more challenging problem than a local minimum search.

Consider, for example, the interplanetary trajectory optimization problem described in Section 1.2.1. The cost of a trajectory is usually quantified in terms of the fuel needed to achieve the mission using that trajectory, and it is usually a function of several variables. Assume we allow only two variables to change, e.g., the launch date and the flight time, and fix all other variables. A typical variation of the cost function with these two variables is shown in **Fig. 6.1**. Figure 6.1 is a contour plot that maps a three-dimensional function onto a two-dimensional plane using color coding; the color indicates the value of the cost function. As can be seen in **Fig. 6.1**, if we use a local optimization method and start with an initial guess for the time of flight near 140 days, the optimizer finds a local minimum that is not the global optimal solution: there is a local minimum in the neighborhood of that initial guess that acts as an attractor for local optimization methods. It is therefore of great value to have a tool that can search for the global optimal solution (identified in **Fig. 6.1** as the “Optimal Solution”) without getting trapped in a local domain.

**Figure 6.1:** The trajectory optimization cost function has multiple local minima.
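The trap described above can be reproduced on a small scale. The sketch below uses a hypothetical one-dimensional multimodal function as a stand-in for the trajectory cost (it is not the actual mission cost model) and runs a simple gradient-descent local optimizer from two initial guesses; each run converges to a different local minimum, so the answer depends entirely on where the search starts:

```python
import numpy as np

def cost(t):
    # Illustrative multimodal cost function (an assumption for demonstration,
    # not the trajectory cost of Fig. 6.1): oscillation plus a shallow bowl.
    return np.sin(3.0 * t) + 0.1 * (t - 2.0) ** 2

def gradient_descent(t0, lr=0.01, steps=2000, h=1e-6):
    """Fixed-step gradient descent using a central finite-difference gradient."""
    t = t0
    for _ in range(steps):
        grad = (cost(t + h) - cost(t - h)) / (2.0 * h)
        t -= lr * grad
    return t

# Two different initial guesses converge to two different local minima;
# neither run can "see" past the nearest attractor.
t_a = gradient_descent(0.0)
t_b = gradient_descent(3.0)
print(t_a, cost(t_a))
print(t_b, cost(t_b))
```

Neither solution is guaranteed to be the global one; a local method only reports the minimum of the basin its initial guess happens to fall in.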

Several techniques have been developed for the global optimization of multimodal functions. Some direct methods rely on rectangular bounds and work by exploring rectangular subregions; in the limit, as the number of iterations goes to infinity, the whole region is sampled. Such methods can be inefficient on some problems. Global optimization can also be achieved using local optimizers if multiple start points are used; the global optimal solution is then picked from among the local solutions. Clearly, several start points may lead to the same local solution, rendering the algorithm inefficient. For better computational efficiency, clustering methods replace each group of start points that lead to the same local minimum with a single start point. The branch and bound method is effective in handling optimization problems with mixed design variables (integer and real), yet it requires that the design space can be divided to create smaller subproblems (branching). The stochastic branch and bound method uses stochastic upper and lower estimates of the optimal value of the objective function in each subproblem.
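The multistart idea can be sketched in a few lines. The objective below is an illustrative multimodal function chosen for the demonstration (not a problem from the book); a simple gradient-descent local optimizer is run from a grid of start points, and the best of the resulting local solutions is kept:

```python
import numpy as np

def f(x):
    # Illustrative multimodal objective (assumed for this sketch):
    # global minimum at x = 0 with f(0) = -1, plus weaker local minima.
    return 0.05 * x * x - np.cos(2.0 * x)

def local_minimize(x0, lr=0.05, steps=1500, h=1e-6):
    """Fixed-step gradient descent with a central finite-difference gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * (f(x + h) - f(x - h)) / (2.0 * h)
    return x

# Multistart: launch the local optimizer from a grid of start points
# covering the bounds, then pick the best local solution found.
starts = np.linspace(-8.0, 8.0, 30)
local_solutions = [local_minimize(x0) for x0 in starts]
best = min(local_solutions, key=f)
print(best, f(best))
```

Note that many of the 30 starts converge to the same few minima; a clustering method would detect those duplicates and spend only one local search per basin.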

There are several global optimization concepts; many of them are biologically inspired. By the mid-twentieth century, metaheuristic algorithms had started to develop. Genetic algorithms (GAs), for example, were first introduced by John Holland in the 1960s. In his early work, Holland introduced the crossover and reproduction operations. The crossover operation is key to producing solutions in new regions of the design space. For aerospace engineering applications, the evolution strategy search method was also developed; it has no crossover operation, yet it introduced a mutation operation that is likewise a mechanism for generating offspring in new regions. An elite solution was always stored in each iteration.
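A minimal real-coded genetic algorithm illustrates these operations: crossover blends two parents into a child, mutation perturbs a child into a new region, and elitism stores the best solution of each generation. All parameter values here (population size, mutation rate, and so on) are assumptions chosen for the demonstration; the test function is the one-dimensional Rastrigin function, a standard multimodal benchmark:

```python
import math
import random

random.seed(42)

def cost(x):
    # 1-D Rastrigin function: many local minima, global minimum at x = 0.
    return x * x + 10.0 - 10.0 * math.cos(2.0 * math.pi * x)

POP, GENS, BOUND, MUT_RATE = 40, 100, 5.12, 0.2   # assumed settings

def crossover(a, b):
    # Blend crossover: the child lies between its parents, possibly in a
    # region of the design space neither parent occupies exactly.
    w = random.random()
    return w * a + (1.0 - w) * b

def mutate(x):
    # Gaussian mutation, clipped to the search bounds.
    if random.random() < MUT_RATE:
        x += random.gauss(0.0, 0.5)
    return max(-BOUND, min(BOUND, x))

pop = [random.uniform(-BOUND, BOUND) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=cost)
    elite = pop[0]                 # elitism: the best solution always survives
    parents = pop[: POP // 2]      # truncation selection (reproduction)
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - 1)]
    pop = [elite] + children

best = min(pop, key=cost)
print(best, cost(best))
```

Because the elite is carried over unchanged, the best cost is non-increasing from one generation to the next, while crossover and mutation keep exploring new regions.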

Inspired by the annealing process of metals, the simulated annealing (SA) search method was developed in the 1980s. In simulated annealing, a new, randomly generated solution replaces the current solution under the control of a probabilistic criterion, and the process then repeats. The same decade also witnessed the development of Tabu search methods. Several bio-inspired search methods were later invented. One is Particle Swarm Optimization (PSO), which is inspired by the swarming behavior of birds: each particle (individual) behaves based on both its own intelligence and the group intelligence. Another example is Ant Colony Optimization (ACO), which is based on the intelligence of a colony of ants. Specifically, the ACO technique mimics the cooperative behavior of ant colonies in finding the shortest path from a food source to their nest. In the ACO method, each layer represents a design variable, and each layer consists of nodes that represent the permissible values of the corresponding design variable; each layer has a fixed number of nodes. To move from the nest to the food source, each ant in the colony moves through all layers and leaves pheromone on its path. A global update rule is then used to update the globally best path. The ACO method has been implemented in several applications, among them the traveling salesman problem, the job-shop scheduling problem, and the resource-constrained project scheduling problem. Differential Evolution (DE) is another optimization algorithm that has proved more efficient than genetic algorithms in some applications. For scheduling optimization problems, the harmony search (HS) algorithm was proposed in 2001. Other optimization methods have proved efficient in particular types of problems; for example, the honey bee algorithm was found to be efficient in optimizing Internet hosting centers, and extensions of it include the virtual bee algorithm and the artificial bee colony algorithm. Other interesting examples of bio-inspired optimization algorithms include the firefly algorithm (FA), developed in 2007; the cuckoo search (CS) algorithm (2009); the bat algorithm (2010); and the flower pollination algorithm (2012).
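The probabilistic criterion at the heart of simulated annealing can be made concrete: improving moves are always accepted, while worsening moves are accepted with probability exp(-Δ/T), where Δ is the cost increase and T is a "temperature" that is gradually lowered. The sketch below is a minimal version under assumed choices (the objective function, step size, and geometric cooling schedule are all illustrative, not taken from the book):

```python
import math
import random

random.seed(0)

def cost(x):
    # Illustrative multimodal objective: global minimum at x = 0, cost(0) = -10.
    return x * x - 10.0 * math.cos(x)

x = 6.0            # initial guess, deliberately far from the global minimum
best = x
T = 2.0            # initial temperature (assumed)
for _ in range(5000):
    cand = x + random.uniform(-1.0, 1.0)   # randomly generated neighbor
    delta = cost(cand) - cost(x)
    # Probabilistic acceptance: always take improvements; occasionally take
    # worse solutions, with probability exp(-delta / T), to escape local minima.
    if delta < 0.0 or random.random() < math.exp(-delta / T):
        x = cand
    if cost(x) < cost(best):
        best = x                           # keep the best solution seen so far
    T *= 0.999                             # geometric cooling (assumed schedule)

print(best, cost(best))
```

Early on, the high temperature lets the search accept uphill moves and wander between basins; as T shrinks, the acceptance rule hardens into a pure descent that refines the best basin found.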

The methods presented in this book to handle VSDS optimization problems are based on global optimization methods. Specifically, genetic algorithms are adopted to implement most of the concepts presented in this book. Hence, a brief review of genetic algorithms and how they work is presented in the next section.