Genetic Algorithm and Direct Search Toolbox
User's Guide
Version 1
Contact The MathWorks for technical support, product enhancement suggestions, bug reports, documentation error reports, order status, license renewals, passcodes, and sales, pricing, and general information:

Phone: 508-647-7000
Fax: 508-647-7001
Mail: The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098
For contact information about worldwide offices, see the MathWorks Web site. Genetic Algorithm and Direct Search Toolbox User's Guide COPYRIGHT 2004 by The MathWorks, Inc.
The software described in this document is furnished under a license agreement. The software may be used or copied only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form without prior written consent from The MathWorks, Inc. FEDERAL ACQUISITION: This provision applies to all acquisitions of the Program and Documentation by or for the federal government of the United States. By accepting delivery of the Program, the government hereby agrees that this software qualifies as "commercial" computer software within the meaning of FAR Part 12.212, DFARS Part 227.7202-1, DFARS Part 227.7202-3, DFARS Part 252.227-7013, and DFARS Part 252.227-7014. The terms and conditions of The MathWorks, Inc. Software License Agreement shall pertain to the government's use and disclosure of the Program and Documentation, and shall supersede any conflicting contractual terms or conditions. If this license fails to meet the government's minimum needs or is inconsistent in any respect with federal procurement law, the government agrees to return the Program and Documentation, unused, to MathWorks. MATLAB, Simulink, Stateflow, Handle Graphics, and Real-Time Workshop are registered trademarks, and TargetBox is a trademark of The MathWorks, Inc. Other product or brand names are trademarks or registered trademarks of their respective holders.
Contents

1 Introducing the Genetic Algorithm and Direct Search Toolbox
  What Is the Genetic Algorithm and Direct Search Toolbox? 1-2
  Related Products 1-3
  Writing an M-File for the Function You Want to Optimize 1-5
    Example: Writing an M-File 1-5
    Maximizing Versus Minimizing 1-6

2 Getting Started with the Genetic Algorithm
  What Is the Genetic Algorithm? 2-2
  Using the Genetic Algorithm 2-3
    Calling the Function ga at the Command Line 2-3
    Using the Genetic Algorithm Tool 2-4
  Example: Rastrigin's Function 2-6
    Rastrigin's Function 2-6
    Finding the Minimum of Rastrigin's Function 2-8
    Finding the Minimum from the Command Line 2-10
    Displaying Plots 2-11
  Some Genetic Algorithm Terminology 2-15
    Fitness Functions 2-15
    Individuals 2-15
    Populations and Generations 2-15
    Diversity 2-16
    Fitness Values and Best Fitness Values 2-16
    Parents and Children 2-17
  How the Genetic Algorithm Works 2-18
    Outline of the Algorithm
    Initial Population
    Creating the Next Generation
    Plots of Later Generations
    Stopping Conditions for the Algorithm

3 Getting Started with Direct Search
  What Is Direct Search? 3-2
  Performing a Pattern Search 3-3
    Calling patternsearch at the Command Line 3-3
    Using the Pattern Search Tool 3-3
  Example: Finding the Minimum of a Function 3-6
    Objective Function 3-6
    Finding the Minimum of the Function 3-7
    Plotting the Objective Function Values and Mesh Sizes 3-8
  Pattern Search Terminology 3-10
    Patterns 3-10
    Meshes 3-11
    Polling 3-12
  How Pattern Search Works 3-13
    Iterations 1 and 2: Successful Polls 3-13
    Iteration 4: An Unsuccessful Poll 3-16
    Displaying the Results at Each Iteration 3-17
    More Iterations 3-17
    Stopping Conditions for the Pattern Search 3-18

4 Using the Genetic Algorithm
  Overview of the Genetic Algorithm Tool 4-2
    Opening the Genetic Algorithm Tool 4-2
    Defining a Problem in the Genetic Algorithm Tool 4-3
    Running the Genetic Algorithm 4-4
    Pausing and Stopping the Algorithm 4-5
    Displaying Plots 4-7
    Example: Creating a Custom Plot Function 4-8
    Reproducing Your Results 4-11
    Setting Options for the Genetic Algorithm 4-11
    Importing and Exporting Options and Problems 4-13
    Example: Resuming the Genetic Algorithm from the Final Population 4-16
    Generating an M-File 4-20
  Using the Genetic Algorithm from the Command Line 4-21
    Running the Genetic Algorithm with the Default Options 4-21
    Setting Options 4-22
    Using Options and Problems from the Genetic Algorithm Tool 4-24
    Reproducing Your Results 4-25
    Resuming ga from the Final Population of a Previous Run 4-26
    Running ga from an M-File 4-26
  Setting Options for the Genetic Algorithm 4-29
    Diversity 4-29
    Population Options 4-30
    Fitness Scaling Options 4-34
    Selection Options 4-38
    Reproduction Options 4-39
    Mutation and Crossover 4-39
    Mutation Options 4-40
    The Crossover Fraction 4-42
    Comparing Results for Varying Crossover Fractions 4-45
    Example: Global Versus Local Minima 4-47
    Setting the Maximum Number of Generations 4-51
    Using a Hybrid Function 4-53
    Vectorize Option 4-55

5 Using Direct Search
  Overview of the Pattern Search Tool 5-2
    Opening the Pattern Search Tool 5-2
    Defining a Problem in the Pattern Search Tool 5-3
    Running a Pattern Search 5-5
    Example: A Constrained Problem 5-6
    Pausing and Stopping the Algorithm 5-8
    Displaying Plots 5-8
    Setting Options 5-9
    Importing and Exporting Options and Problems 5-10
    Generate M-File 5-13
  Performing a Pattern Search from the Command Line 5-14
    Performing a Pattern Search with the Default Options 5-14
    Setting Options 5-16
    Using Options and Problems from the Pattern Search Tool 5-18
  Setting Pattern Search Options 5-19
    Poll Method 5-19
    Complete Poll 5-21
    Using a Search Method 5-25
    Mesh Expansion and Contraction 5-28
    Mesh Accelerator 5-33
    Cache Options 5-35
    Setting Tolerances for the Solver 5-37

6 Function Reference
  Functions Listed by Category 6-2
    Genetic Algorithm 6-2
    Direct Search 6-2
  Genetic Algorithm Options 6-3
    Plot Options 6-4
    Population Options 6-5
    Fitness Scaling Options 6-7
    Selection Options 6-8
    Reproduction Options 6-10
    Mutation Options 6-10
    Crossover Options 6-12
    Migration Options 6-15
    Output Function Options 6-16
    Stopping Criteria Options 6-17
    Hybrid Function Option 6-17
    Vectorize Option 6-17
    The State Structure 6-18
  Pattern Search Options 6-19
    Plot Options 6-20
    Poll Options 6-20
    Search Options 6-22
    Mesh Options 6-24
    Cache Options 6-25
    Stopping Criteria 6-25
    Output Function Options 6-26
    Display to Command Window Options 6-26
    Vectorize Option 6-27

Index
1
Introducing the Genetic Algorithm and Direct Search Toolbox
What Is the Genetic Algorithm and Direct Search Toolbox? (p. 1-2): Introduces the toolbox and its features.

Related Products (p. 1-3): Lists products that are relevant to the kinds of tasks you can perform with the Genetic Algorithm and Direct Search Toolbox.

Writing an M-File for the Function You Want to Optimize (p. 1-5): Explains how to solve maximization as well as minimization problems using the toolbox.
You can extend the capabilities of the Genetic Algorithm and Direct Search Toolbox by writing your own M-files, by using the toolbox in combination with other toolboxes, or by using it with MATLAB or Simulink.
Related Products
The MathWorks provides several products that are relevant to the kinds of tasks you can perform with the Genetic Algorithm and Direct Search Toolbox. For more information about any of these products, see either
- The online documentation for that product, if it is installed or if you are reading the documentation from the CD
- The MathWorks Web site, at http://www.mathworks.com; see the products section
Note The following toolboxes all include functions that extend the capabilities of MATLAB. The blocksets all include blocks that extend the capabilities of Simulink.
- Curve Fitting Toolbox: Perform model fitting and analysis
- Data Acquisition Toolbox: Acquire and send out data from plug-in data acquisition boards
- Database Toolbox: Exchange data with relational databases
- Financial Time Series Toolbox: Analyze and manage financial time-series data
- Financial Toolbox: Model financial data and develop financial analysis algorithms
- GARCH Toolbox: Analyze financial volatility using univariate GARCH models
- LMI Control Toolbox: Design robust controllers using convex optimization techniques
- Neural Network Toolbox: Design and simulate neural networks
- Nonlinear Control Design Blockset: Optimize design parameters in nonlinear control systems
- Optimization Toolbox: Solve standard and large-scale optimization problems
- Signal Processing Toolbox: Perform signal processing, analysis, and algorithm development
- Simulink: Design and simulate continuous- and discrete-time systems
- Spline Toolbox: Create and manipulate spline approximation models of data
- Statistics Toolbox: Apply statistical algorithms and probability models
- Symbolic/Extended Symbolic Math Toolbox: Perform computations using symbolic mathematics and variable-precision arithmetic
- System Identification Toolbox: Create linear dynamic models from measured input-output data
Note Do not use the Editor/Debugger to debug the M-file for the objective function while running the Genetic Algorithm Tool or the Pattern Search Tool. Doing so results in Java exception messages in the Command Window and makes debugging more difficult. See either Defining a Problem in the Genetic Algorithm Tool on page 4-3 or Defining a Problem in the Pattern Search Tool on page 5-3 for more information on debugging.
2
Getting Started with the Genetic Algorithm
What Is the Genetic Algorithm? (p. 2-2): Introduces the genetic algorithm.

Using the Genetic Algorithm (p. 2-3): Explains how to use the genetic algorithm tool.

Example: Rastrigin's Function (p. 2-6): Presents an example of solving an optimization problem using the genetic algorithm.

Some Genetic Algorithm Terminology (p. 2-15): Explains some basic terminology for the genetic algorithm.

How the Genetic Algorithm Works (p. 2-18): Presents an overview of how the genetic algorithm works.
A standard optimization algorithm:
- Generates a single point at each iteration. The sequence of points approaches an optimal solution.
- Selects the next point in the sequence by a deterministic computation.

The genetic algorithm:
- Generates a population of points at each iteration. The population approaches an optimal solution.
- Selects the next population by computations that involve random choices.
At the command line, you can call the genetic algorithm function ga with the syntax

[x fval] = ga(@fitnessfun, nvars, options)

where
- @fitnessfun is a handle to the fitness function.
- nvars is the number of independent variables for the fitness function.
- options is a structure containing options for the genetic algorithm. If you do not pass in this argument, ga uses its default options.

The results are given by
- fval, the final value of the fitness function
- x, the point at which the final value is attained

Using the function ga is convenient if you want to
- Return results directly to the MATLAB workspace
- Run the genetic algorithm multiple times with different options, by calling ga from an M-file

Using the Genetic Algorithm from the Command Line on page 4-21 provides a detailed description of using the function ga and creating the options structure.
[Screenshot of the Genetic Algorithm Tool, with fields for entering the fitness function and the number of variables for the fitness function]
To use the Genetic Algorithm Tool, you must first enter the following information:
- Fitness function: The objective function you want to minimize. Enter the fitness function in the form @fitnessfun, where fitnessfun.m is an M-file that computes the fitness function. Writing an M-File for the Function You Want to Optimize on page 1-5 explains how to write this M-file. The @ sign creates a function handle to fitnessfun.
- Number of variables: The length of the input vector to the fitness function. For the function my_fun described in Writing an M-File for the Function You Want to Optimize on page 1-5, you would enter 2.

To run the genetic algorithm, click the Start button. The tool displays the results of the optimization in the Status and results pane. You can change the options for the genetic algorithm in the Options pane. To view the options in one of the categories listed in the pane, click the + sign next to it.

For more information:
- See Overview of the Genetic Algorithm Tool on page 4-2 for a detailed description of the tool.
- See Example: Rastrigin's Function on page 2-6 for an example of using the tool.
Rastrigin's Function
For two independent variables, Rastrigin's function is defined as

Ras(x) = 20 + x1^2 + x2^2 - 10(cos 2*pi*x1 + cos 2*pi*x2)

The toolbox contains an M-file, rastriginsfcn.m, that computes the values of Rastrigin's function. The following figure shows a plot of Rastrigin's function.

[Surface plot of Rastrigin's function, with the global minimum at [0 0]]
As the plot shows, Rastrigin's function has many local minima, the valleys in the plot. However, the function has just one global minimum, which occurs at the point [0 0] in the x-y plane, as indicated by the vertical line in the plot, where the value of the function is 0. At any local minimum other than [0 0], the value of Rastrigin's function is greater than 0. The farther the local minimum is from the origin, the larger the value of the function is at that point. Rastrigin's function is often used to test the genetic algorithm, because its many local minima make it difficult for standard, gradient-based methods to find the global minimum. The following contour plot of Rastrigin's function shows the alternating maxima and minima.
[Contour plot of Rastrigin's function, marking the local maxima, the local minima, and the global minimum at [0 0]]
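For reference, the definition above is easy to evaluate directly. The toolbox supplies this function as the M-file rastriginsfcn.m; the following is only a hedged Python sketch of the same formula, not toolbox code:

```python
import math

def rastrigin(x1, x2):
    """Ras(x) = 20 + x1^2 + x2^2 - 10(cos 2*pi*x1 + cos 2*pi*x2)."""
    return (20 + x1**2 + x2**2
            - 10 * (math.cos(2 * math.pi * x1) + math.cos(2 * math.pi * x2)))

print(rastrigin(0.0, 0.0))   # the global minimum value, 0, at [0 0]
print(rastrigin(1.0, 1.0))   # a nearby local minimum; its value is greater than 0
```

Evaluating at [0 0] confirms the global minimum value of 0, and evaluating at a local minimum away from the origin gives a strictly larger value, as the plot discussion describes.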
Note Because the genetic algorithm uses random data to perform its search, the algorithm returns slightly different results each time you run it.
1 In the Fitness function field, enter @rastriginsfcn.
2 In the Number of variables field, enter 2, the number of independent variables for Rastrigin's function.
The Fitness function and Number of variables fields should appear as shown in the following figure.
3 Click the Start button in the Run solver pane, as shown in the following figure.
While the algorithm is running, the Current generation field displays the number of the current generation. You can temporarily pause the algorithm by clicking the Pause button. When you do so, the button name changes to Resume. To resume the algorithm from the point at which you paused it, click Resume. When the algorithm is finished, the Status and results pane appears as shown in the following figure.
[Screenshot of the Status and results pane, with the final point highlighted]
The Status and results pane displays the following information:

- The final value of the fitness function when the algorithm terminated:

  Function value: 0.0067749206244585025

  Note that the value shown is very close to the actual minimum value of Rastrigin's function, which is 0. Setting Options for the Genetic Algorithm on page 4-29 describes some ways to get a result that is closer to the actual minimum.

- The reason the algorithm terminated:

  Exit: Optimization terminated: maximum number of generations exceeded.

  In this example, the algorithm terminates after 100 generations, the default value of the option Generations, which specifies the maximum number of generations the algorithm computes.

- The final point, which in this example is [0.00274 -0.00516].
To find the minimum from the command line, enter

[x fval reason] = ga(@rastriginsfcn, 2)

This returns

x =
    0.0027   -0.0052

fval =
    0.0068
where
- x is the final point returned by the algorithm.
- fval is the fitness function value at the final point.
- reason is the reason that the algorithm terminated.
Displaying Plots
The Plots pane enables you to display various plots that provide information about the genetic algorithm while it is running. This information can help you change options to improve the performance of the algorithm. For example, to plot the best and mean values of the fitness function at each generation, select the box next to Best fitness value, as shown in the following figure.
When you click Start, the Genetic Algorithm Tool displays a plot of the best and mean values of the fitness function at each generation. When the algorithm stops, the plot appears as shown in the following figure.
[Plot of the best and mean fitness values at each of 100 generations]
The points at the bottom of the plot denote the best fitness values, while the points above them denote the averages of the fitness values in each generation. The plot also displays the best and mean values in the current generation numerically at the top. To get a better picture of how much the best fitness values are decreasing, you can change the scaling of the y-axis in the plot to logarithmic scaling. To do so,
1 Select Axes Properties from the Edit menu in the plot window.
2 Select Log.
[The same plot of best and mean fitness values over 100 generations, with logarithmic scaling on the y-axis]
Typically, the best fitness value improves rapidly in the early generations, when the individuals are farther from the optimum. The best fitness value improves more slowly in later generations, whose populations are closer to the optimal point.
Fitness Functions
The fitness function is the function you want to optimize. For standard optimization algorithms, this is known as the objective function. The toolbox tries to find the minimum of the fitness function. You can write the fitness function as an M-file and pass it as an input argument to the main genetic algorithm function.
Individuals
An individual is any point to which you can apply the fitness function. The value of the fitness function for an individual is its score. For example, if the fitness function is

f(x1, x2, x3) = (2x1 + 1)^2 + (3x2 + 4)^2 + (x3 - 2)^2

the vector (2, -3, 1), whose length is the number of variables in the problem, is an individual. The score of the individual (2, -3, 1) is f(2, -3, 1) = 51. An individual is sometimes referred to as a genome and the vector entries of an individual as genes.
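The score of (2, -3, 1) can be checked directly from the definition of f; this is a quick Python sketch, not toolbox code:

```python
def f(x1, x2, x3):
    # The example fitness function from the text:
    # f = (2*x1 + 1)^2 + (3*x2 + 4)^2 + (x3 - 2)^2
    return (2*x1 + 1)**2 + (3*x2 + 4)**2 + (x3 - 2)**2

print(f(2, -3, 1))  # the score of the individual (2, -3, 1): 51
```

The three squared terms evaluate to 25, 25, and 1, which sum to 51, matching the score quoted above.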
An individual can appear more than once in the population. For example, the individual (2, -3, 1) can appear in more than one row of the array. At each iteration, the genetic algorithm performs a series of computations on the current population to produce a new population. Each successive population is called a new generation.
Diversity
Diversity refers to the average distance between individuals in a population. A population has high diversity if the average distance is large; otherwise it has low diversity. In the following figure, the population on the left has high diversity, while the population on the right has low diversity.
Diversity is essential to the genetic algorithm because it enables the algorithm to search a larger region of the space.
The best fitness value for a population is the smallest fitness value for any individual in the population.
At each step, the algorithm uses the individuals in the current generation to create the next generation. To create the new generation, the algorithm performs the following steps:

a Scores each member of the current population by computing its fitness value.
b Scales the raw fitness scores to convert them into a more usable range of values.
c Selects parents based on their fitness.
d Produces children from the parents, either by making random changes to a single parent (mutation) or by combining the vector entries of a pair of parents (crossover).
e Replaces the current population with the children to form the next generation.

The algorithm stops when one of the stopping criteria is met. See Stopping Conditions for the Algorithm.
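The steps above can be sketched as a minimal genetic algorithm. This is a hedged Python illustration, not the toolbox's ga implementation: the tournament selection (standing in for fitness scaling plus selection), the two elite individuals, the Gaussian mutation scale, and the crossover fraction used here are all illustrative assumptions.

```python
import math
import random

def rastrigin(x):
    """Rastrigin's function for a point x given as a list of coordinates."""
    return 10 * len(x) + sum(xi**2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

def next_generation(pop, mutation_scale=0.1, crossover_fraction=0.8):
    """One generation: score, select parents, produce children, replace."""
    scored = sorted(pop, key=rastrigin)              # step a: score each member
    def select_parent():
        # Tournament selection on ranked individuals stands in for
        # fitness scaling plus selection (steps b and c).
        return min(random.sample(scored, 2), key=rastrigin)
    children = [scored[0], scored[1]]                # keep two elite individuals
    while len(children) < len(pop):                  # step d: produce children
        if random.random() < crossover_fraction:
            p1, p2 = select_parent(), select_parent()
            # Crossover: each gene comes from one of the two parents.
            children.append([random.choice(genes) for genes in zip(p1, p2)])
        else:
            # Mutation: add Gaussian noise to the genes of a single parent.
            parent = select_parent()
            children.append([g + random.gauss(0, mutation_scale) for g in parent])
    return children                                  # step e: replace population

random.seed(0)
pop = [[random.uniform(0, 1), random.uniform(0, 1)] for _ in range(20)]
for _ in range(100):
    pop = next_generation(pop)
best = min(pop, key=rastrigin)
print(best, rastrigin(best))
```

Running this with a population of 20 for 100 generations, the defaults used in the Rastrigin's function example, drives the best individual toward a minimum of the function; because of the random choices, repeated runs give slightly different results, just as the text notes for ga.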
Initial Population
The algorithm begins by creating a random initial population, as shown in the following figure.
[Scatter plot of the random initial population; all 20 individuals lie in the region 0 <= x1, x2 <= 1]
In this example, the initial population contains 20 individuals, which is the default value of Population size in the Population options. Note that all the individuals in the initial population lie in the upper-right quadrant of the picture, that is, their coordinates lie between 0 and 1, because the default value of Initial range in the Population options is [0;1]. If you know approximately where the minimal point for a function lies, you should set Initial range so that the point lies near the middle of that range. For example, if you believe that the minimal point for Rastrigin's function is near the point [0 0], you could set Initial range to be [-1;1]. However, as this example shows, the genetic algorithm can find the minimum even with a less than optimal choice for Initial range.
[Schematic of the three types of children: elite child, crossover child, and mutation child]
Mutation and Crossover on page 4-39 explains how to specify the number of children of each type that the algorithm generates and the functions it uses to perform crossover and mutation. The following sections explain how the algorithm creates crossover and mutation children.
Crossover Children
The algorithm creates crossover children by combining pairs of parents in the current population. At each coordinate of the child vector, the default crossover function randomly selects an entry, or gene, at the same coordinate from one of the two parents and assigns it to the child.
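This per-coordinate coin flip can be sketched in a few lines. The following Python fragment is only an illustration of the behavior described above (the function name is ours, not the toolbox's):

```python
import random

def scattered_crossover(parent1, parent2):
    """For each coordinate, pick the gene at that coordinate from one
    of the two parents at random and assign it to the child."""
    return [random.choice(genes) for genes in zip(parent1, parent2)]

random.seed(1)
p1, p2 = [1.6, 3.4, -0.5], [0.2, -1.1, 2.0]
child = scattered_crossover(p1, p2)
print(child)  # every entry of the child comes from one of the two parents
```

Whatever the random choices, each gene of the child always equals the corresponding gene of one parent or the other; crossover never introduces new gene values.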
Mutation Children
The algorithm creates mutation children by randomly changing the genes of individual parents. By default, the algorithm adds a random vector from a Gaussian distribution to the parent. The following figure shows the children of the initial population, that is, the population at the second generation, and indicates whether they are elite, crossover, or mutation children.
[Scatter plot of the second generation, distinguishing elite children, crossover children, and mutation children]
[Scatter plots of the population at iterations 60, 80, 95, and 100]
As the number of generations increases, the individuals in the population get closer together and approach the minimum point [0 0].
When you run the genetic algorithm, the Status panel displays the criterion that caused the algorithm to stop. The options Stall time limit and Time limit prevent the algorithm from running too long. If the algorithm stops due to one of these conditions, you might improve your results by increasing the values of Stall time limit and Time limit.
3
Getting Started with Direct Search
What Is Direct Search? (p. 3-2): Introduces direct search and pattern search.

Performing a Pattern Search (p. 3-3): Explains the main function in the toolbox for performing pattern search.

Example: Finding the Minimum of a Function (p. 3-6): Provides an example of solving an optimization problem using pattern search.

Plotting the Objective Function Values and Mesh Sizes (p. 3-8): Shows how to plot the objective function values and mesh sizes of the sequence of points generated by the pattern search.

Pattern Search Terminology (p. 3-10): Explains some basic pattern search terminology.

How Pattern Search Works (p. 3-13): Provides an overview of direct search algorithms.
You call patternsearch with the syntax

[x fval] = patternsearch(@objfun, x0)

where
- @objfun is a handle to the objective function.
- x0 is the starting point for the pattern search.

The results are given by
- fval, the final value of the objective function
- x, the point at which the final value is attained

Performing a Pattern Search from the Command Line on page 5-14 explains in detail how to use the function patternsearch.
To use the Pattern Search Tool, you must first enter the following information:
- Objective function: The objective function you want to minimize. You enter the objective function in the form @objfun, where objfun.m is an M-file that computes the objective function. The @ sign creates a function handle to objfun.
- Start point: The initial point at which the algorithm starts the optimization.

You can enter constraints for the problem in the Constraints pane. If the problem is unconstrained, leave these fields blank. Then, click the Start button. The tool displays the results of the optimization in the Status and results pane. You can also change the options for the pattern search in the Options pane. To view the options in a category, click the + sign next to it. Finding the Minimum of the Function on page 3-7 gives an example of using the Pattern Search Tool. Overview of the Pattern Search Tool on page 5-2 provides a detailed description of the Pattern Search Tool.
Objective Function
The example uses the objective function ps_example, which is included in the Genetic Algorithm and Direct Search Toolbox. You can view the code for the function by entering
type ps_example
1 Enter psearchtool at the command line to open the Pattern Search Tool.
2 In the Objective function field, type @ps_example.
3 In the Start point field, type [2.1 1.7].
You can leave the fields in the Constraints pane blank because the problem is unconstrained.
4 Click Start to run the pattern search.
The Status and results pane displays the results of the pattern search.
The minimum function value is approximately -2. The Final point pane displays the point at which the minimum occurs.
Then click Start to run the pattern search. This displays the following plots.
[Two plots versus iteration, over roughly 60 iterations: the best function value (top) and the mesh size (bottom)]
The upper plot shows the objective function value of the best point at each iteration. Typically, the objective function values improve rapidly at the early iterations and then level off as they approach the optimal value. The lower plot shows the mesh size at each iteration. The mesh size increases after each successful iteration and decreases after each unsuccessful one, as explained in How Pattern Search Works on page 3-13.
Patterns
A pattern is a collection of vectors that the algorithm uses to determine which points to search at each iteration. For example, if there are two independent variables in the optimization problem, the default pattern consists of the following vectors. v1 = [1 0] v2 = [0 1] v3 = [-1 0] v4 = [0 -1] The following figure shows these vectors.
[Figure: the pattern vectors v1, v2, v3, and v4 plotted in the plane]
Meshes
At each step, the pattern search algorithm searches a set of points, called a mesh, for a point that improves the objective function. The algorithm forms the mesh by
1 Multiplying the pattern vectors by a scalar, called the mesh size
2 Adding the resulting vectors to the current point (the point with the best objective function value found at the previous step)
For example, suppose that
The current point is [1.6 3.4].
The pattern consists of the vectors
v1 = [1 0]
v2 = [0 1]
v3 = [-1 0]
v4 = [0 -1]
The current mesh size is 4.
The algorithm multiplies the pattern vectors by 4 and adds them to the current point to obtain the following mesh.
[1.6 3.4] + 4*[1 0] = [5.6 3.4]
[1.6 3.4] + 4*[0 1] = [1.6 7.4]
[1.6 3.4] + 4*[-1 0] = [-2.4 3.4]
[1.6 3.4] + 4*[0 -1] = [1.6 -0.6]
The pattern vector that produces a mesh point is called its direction.
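The mesh computation above can be reproduced in a few lines of MATLAB (a sketch, not toolbox code):

```matlab
% Sketch: form the mesh from the pattern, the mesh size, and the current point
pattern = [1 0; 0 1; -1 0; 0 -1];   % default pattern for two variables
currentPoint = [1.6 3.4];
meshSize = 4;
mesh = repmat(currentPoint, size(pattern,1), 1) + meshSize*pattern
% mesh = [5.6 3.4; 1.6 7.4; -2.4 3.4; 1.6 -0.6]
```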
Polling
At each step, the algorithm polls the points in the current mesh by computing their objective function values. When option Complete poll has the default setting Off, the algorithm stops polling the mesh points as soon as it finds a point whose objective function value is less than that of the current point. If this occurs, the poll is called successful and the point it finds becomes the current point at the next iteration. Note that the algorithm only computes the mesh points and their objective function values up to the point at which it stops the poll. If the algorithm fails to find a point that improves the objective function, the poll is called unsuccessful and the current point stays the same at the next iteration. If you set Complete poll to On, the algorithm computes the objective function values at all mesh points. The algorithm then compares the mesh point with the smallest objective function value to the current point. If that mesh point has a smaller value than the current point, the poll is successful.
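The incomplete poll (Complete poll set to Off) can be sketched as follows; ps_example is the toolbox objective function used in this chapter, and the loop structure is illustrative rather than the toolbox implementation:

```matlab
% Sketch of an incomplete poll: stop at the first improving mesh point
currentPoint = [2.1 1.7];
meshSize = 1;
pattern = [1 0; 0 1; -1 0; 0 -1];
mesh = repmat(currentPoint, 4, 1) + meshSize*pattern;
fcurrent = ps_example(currentPoint);
success = false;
for k = 1:size(mesh,1)
    if ps_example(mesh(k,:)) < fcurrent   % improvement found
        currentPoint = mesh(k,:);          % becomes the next current point
        success = true;
        break                              % remaining mesh points are not polled
    end
end
```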
Iteration 1
At the first iteration, the mesh size is 1 and the pattern search algorithm adds the pattern vectors to the initial point x0 = [2.1 1.7] to compute the following mesh points.
[1 0] + x0 = [3.1 1.7]
[0 1] + x0 = [2.1 2.7]
[-1 0] + x0 = [1.1 1.7]
[0 -1] + x0 = [2.1 0.7]
The algorithm computes the objective function at the mesh points in the order shown above. The following figure shows the value of ps_example at the initial point and mesh points.
[Figure: Objective Function Values at Initial Point and Mesh Points]
The algorithm polls the mesh points by computing their objective function values until it finds one whose value is smaller than 4.6347, the value at x0. In this case, the first such point it finds is [1.1 1.7], at which the value of the objective function is 4.5146, so the poll is successful. The algorithm sets the next point in the sequence equal to
x1 = [1.1 1.7]
Note By default, the pattern search algorithm stops the current iteration as soon as it finds a mesh point whose objective function value is smaller than that of the current point. Consequently, the algorithm might not poll all the mesh points. You can make the algorithm poll all the mesh points by setting Complete poll to On.
Iteration 2
After a successful poll, the algorithm multiplies the current mesh size by 2, the default value of Expansion factor in the Mesh options pane. Because the initial mesh size is 1, at the second iteration the mesh size is 2. The mesh at iteration 2 contains the following points.
2*[1 0] + x1 = [3.1 1.7]
2*[0 1] + x1 = [1.1 3.7]
2*[-1 0] + x1 = [-0.9 1.7]
2*[0 -1] + x1 = [1.1 -0.3]
The following figure shows the point x1 and the mesh points, together with the corresponding values of ps_example.
[Figure: Objective Function Values at x1 and Mesh Points]
The algorithm polls the mesh points until it finds one whose value is smaller than 4.5146, the value at x1. The first such point it finds is [-0.9 1.7], at which the value of the objective function is 3.25, so the poll is successful. The algorithm sets the second point in the sequence equal to
x2 = [-0.9 1.7]
Because the poll is successful, the algorithm multiplies the current mesh size by 2 to get a mesh size of 4 at the third iteration.
The following figure shows the mesh points and their objective function values.
[Figure: Objective Function Values at x3 and Mesh Points]
At this iteration, none of the mesh points has a smaller objective function value than the value at x3, so the poll is unsuccessful. In this case, the algorithm does not change the current point at the next iteration. That is,
x4 = x3;
At the next iteration, the algorithm multiplies the current mesh size by 0.5, the default value of Contraction factor in the Mesh options pane, so that the mesh size at the next iteration is 4. The algorithm then polls with a smaller mesh size.
The entry Successful Poll below Method indicates that the current iteration was successful. For example, the poll at iteration 2 was successful. As a result, the objective function value of the point computed at iteration 2, displayed below f(x), is less than the value at iteration 1. At iteration 4, the entry Refine Mesh below Method tells you that the poll was unsuccessful. As a result, the function value at iteration 4 remains unchanged from iteration 3. Note that the pattern search doubles the mesh size after each successful poll and halves it after each unsuccessful poll.
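The doubling and halving of the mesh size amounts to this update rule (a sketch; 2 and 0.5 are the default values of the Expansion factor and Contraction factor options):

```matlab
% Sketch: mesh size update after a poll
meshSize = 1;             % example current mesh size
success = true;           % whether the poll found an improving point
expansionFactor = 2;      % default Expansion factor
contractionFactor = 0.5;  % default Contraction factor
if success
    meshSize = meshSize*expansionFactor;   % successful poll: expand
else
    meshSize = meshSize*contractionFactor; % unsuccessful poll: refine
end
```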
More Iterations
The pattern search performs 88 iterations before stopping. The following plot shows the points in the sequence computed in the first 13 iterations of the pattern search.
[Figure: the points in the sequence computed in the first 13 iterations, labeled with iteration numbers]
The numbers below the points indicate the first iteration at which the algorithm finds the point. The plot only shows iteration numbers corresponding to successful polls, because the best point doesn't change after an unsuccessful poll. For example, the best point at iterations 4 and 5 is the same as at iteration 3.
The algorithm stops when any of the following conditions occurs:
The mesh size is less than Mesh tolerance.
The number of iterations performed by the algorithm reaches the value of Max iteration.
The total number of objective function evaluations performed by the algorithm reaches the value of Max function evaluations.
The distance between the point found at one successful poll and the point found at the next successful poll is less than X tolerance.
The change in the objective function from one successful poll to the next successful poll is less than Function tolerance.
The Bind tolerance option, which is used to identify active constraints for constrained problems, is not used as a stopping criterion.
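At the command line, the corresponding tolerances are fields of the options structure created by psoptimset. The field names below (TolMesh, TolX, TolFun) are the command-line counterparts of the Mesh tolerance, X tolerance, and Function tolerance fields; treat the pairing as a sketch:

```matlab
% Sketch: tighten the command-line stopping tolerances for patternsearch
options = psoptimset('TolMesh', 1e-8, 'TolX', 1e-8, 'TolFun', 1e-8);
```

You then pass options as the last input argument to patternsearch.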
4 Using the Genetic Algorithm
Overview of the Genetic Algorithm Tool (p. 4-2) Provides an overview of the Genetic Algorithm Tool.
Using the Genetic Algorithm from the Command Line (p. 4-21) Describes how to use the genetic algorithm at the command line.
Setting Options for the Genetic Algorithm (p. 4-29) Explains how to set options for the genetic algorithm.
Enter gatool at the MATLAB prompt. This opens the Genetic Algorithm Tool, as shown in the following figure.
[Figure: the Genetic Algorithm Tool, with fields to enter the fitness function and the number of variables for the fitness function]
Number of variables The number of independent variables for the fitness function.
Note Do not use the Editor/Debugger to debug the M-file for the objective function while running the Genetic Algorithm Tool. Doing so results in Java exception messages in the Command Window and makes debugging more difficult. Instead, call the objective function directly from the command line or pass it to the genetic algorithm function ga. To facilitate debugging, you can export your problem from the Genetic Algorithm Tool to the MATLAB workspace, as described in Importing and Exporting Options and Problems on page 4-13.
The following figure shows these fields for the example described in Example: Rastrigins Function on page 2-6.
When the algorithm terminates, the Status and results pane displays:
The message GA terminated.
The fitness function value of the best individual in the final generation
The reason the algorithm terminated
The coordinates of the final point
The following figure shows this information displayed when you run the example in Example: Rastrigins Function on page 2-6.
You can change many of the settings in the Genetic Algorithm Tool while the algorithm is running. Your changes are applied at the start of the next generation. Until then, the Status and results pane displays the message Changes pending. Once the changes take effect, the pane displays the message Changes applied, as shown in the following figure.
Click Pause to temporarily suspend the algorithm. To resume the algorithm using the current population at the time you paused, click Resume. Click Stop to stop the algorithm. The Status and results pane displays the fitness function value of the best point in the current generation at the moment you clicked Stop.
Note If you click Stop and then run the genetic algorithm again by clicking Start, the algorithm begins with a new random initial population or with the population you specify in the Initial population field. If you want to restart the algorithm where it left off, use the Pause and Resume buttons.
Example Resuming the Genetic Algorithm from the Final Population on page 4-16 explains what to do if you click Stop and later decide to resume the genetic algorithm from the final population of the last run.
Set Stall time limit to Inf. The following figure shows these settings.
Note Do not use these settings when calling the genetic algorithm function ga at the command line, as the function will not terminate until you press Ctrl+C. Instead, set Generations or Time limit to a finite number.
Displaying Plots
The Plots pane, shown in the following figure, enables you to display various plots of the results of the genetic algorithm.
Select the check boxes next to the plots you want to display. For example, if you select Best fitness and Best individual, and run the example described in Example: Rastrigins Function on page 2-6, the tool displays the plots shown in the following figure.
The upper plot displays the best and mean fitness values in each generation. The lower plot displays the coordinates of the point with the best fitness value in the current generation.
Note When you display more than one plot, clicking on any plot opens a larger version of it in a separate window.
Plot Options on page 6-4 describes the types of plots you can create.
Creating the Plot Function on page 4-9 Using the Plot Function on page 4-9 How the Plot Function Works on page 4-10
[Figure: the best fitness plot (upper) and the custom plot of changes in the best fitness value, with a logarithmic y-axis (lower), versus generation]
Note that because the scale of the y-axis in the lower custom plot is logarithmic, the plot only shows changes that are greater than 0. The logarithmic scale enables you to see small changes in the fitness function that the upper plot does not reveal.
set(gca,'xlim',[1,options.Generations],'Yscale','log');
Sets up the plot before the algorithm starts. options.Generations is the maximum number of generations.
best = min(state.Score)
The field state.Score contains the scores of all individuals in the current population. The variable best is the minimum score. For a complete description of the fields of the structure state, see Structure of the Plot Functions on page 6-5.
change = last_best - best
The variable change is the best score at the previous generation minus the best score in the current generation.
plot(state.Generation, change, '.r')
Plots the change at the current generation, whose number is contained in state.Generation.
The code for gaplotchange contains many of the same elements as the code for gaplotbestf, the function that creates the best fitness plot.
Setting Options for the Genetic Algorithm on page 4-29 describes how options settings affect the performance of the genetic algorithm. For a detailed description of all the available options, see Genetic Algorithm Options on page 6-3.
workspace that contains the option values. For example, you can set the Initial point to [2.1 1.7] in either of the following ways:
Enter [2.1 1.7] directly in the Initial point field.
Enter
x0 = [2.1 1.7]
at the MATLAB prompt and then enter x0 in the Initial point field.
For options whose values are large matrices or vectors, it is often more convenient to define their values as variables in the MATLAB workspace. This way, it is easy to change the entries of the matrix or vector if necessary.
To export options or problems, click the Export button or select Export to Workspace from the File menu. This opens the dialog box shown in the following figure.
The dialog provides the following options:
To save both the problem definition and the current options settings, select Export problem and options to a MATLAB structure named and enter a name for the structure. Clicking OK saves this information to a structure in the MATLAB workspace. If you later import this structure into the Genetic Algorithm Tool, the settings for Fitness function, Number of variables, and all options settings are restored to the values they had when you exported the structure.
Note If you select Use random states from previous run in the Run solver pane before exporting a problem, the Genetic Algorithm Tool also saves the states of rand and randn at the beginning of the last run when you export. Then, when you import the problem and run the genetic algorithm with Use random states from previous run selected, the results of the run just before you exported the problem are reproduced exactly.
If you want the genetic algorithm to resume from the final population of the last run before you exported the problem, select Include information needed to resume this run. Then, when you import the problem structure and click Start, the algorithm resumes from the final population of the previous run. To restore the genetic algorithms default behavior of generating a random initial population, delete the population in the Initial population field and replace it with empty brackets, [].
Note If you select Include information needed to resume this run, then selecting Use random states from previous run has no effect on the initial population created when you import the problem and run the genetic algorithm on it. The latter option is only intended to reproduce results from the beginning of a new run, not from a resumed run.
To save only the options, select Export options to a MATLAB structure named and enter a name for the options structure. To save the results of the last run of the algorithm, select Export results to a MATLAB structure named and enter a name for the results structure.
input argument:
[x fval] = ga(my_gaproblem)
This returns
x = 0.0027 -0.0052
fval = 0.0068
See Using the Genetic Algorithm from the Command Line on page 4-21 for more information.
Importing Options
To import an options structure from the MATLAB workspace, select Import Options from the File menu. This opens a dialog box that displays a list of the genetic algorithm options structures in the MATLAB workspace. When you select an options structure and click Import, the options fields in the Genetic Algorithm Tool are updated to display the values of the imported options. You can create an options structure in either of the following ways: Calling gaoptimset with options as the output By saving the current options from the Export to Workspace dialog box in the Genetic Algorithm Tool
Importing Problems
To import a problem that you previously exported from the Genetic Algorithm Tool, select Import Problem from the File menu. This opens the dialog box that displays a list of the genetic algorithm problem structures in the MATLAB workspace. When you select a problem structure and click OK, the following fields are updated in the Genetic Algorithm Tool: Fitness function Number of variables The options fields
[Figure: best fitness value versus generation for the run]
Suppose you want to experiment by running the genetic algorithm with other options settings, and then later restart this run from its final population with its current options settings. You can do so by performing the following steps:
1 Click the Export to Workspace button.
2 In the dialog box that appears,
- Select Export problem and options to a MATLAB structure named.
- Enter a name for the problem and options, such as ackley_uniform, in the text field.
- Select Include information needed to resume this run.
The dialog box should now appear as in the following figure.
3 Click OK.
This exports the problem and options to a structure in the MATLAB workspace. You can view the structure in the MATLAB Command Window by entering
ackley_uniform

ackley_uniform = 
    fitnessfcn: @ackleyfcn
    genomelength: 10
    options: [1x1 struct]
After running the genetic algorithm with different options settings or even a different fitness function, you can restore the problem as follows:
1 Select Import Problem from the File menu. This opens the dialog box
This sets the Initial population field in Population options to the final population of the run before you exported the problem. All other options are restored to their settings during that run. When you click Start, the genetic algorithm resumes from the saved final population. The following figure shows the best fitness plots from the original run and the restarted run.
[Figure: best fitness plots (Best: 3.2232 Mean: 3.2232) versus generation for the first run and the restarted run]
Note If, after running the genetic algorithm with the imported problem, you want to restore the genetic algorithms default behavior of generating a random initial population, delete the population in the Initial population field and replace it with empty brackets, [].
Generating an M-File
To create an M-file that runs the genetic algorithm, using the fitness function and options you specify in the Genetic Algorithm Tool, select Generate M-File from the File menu and save the M-file in a directory on the MATLAB path. Calling this M-file at the command line returns the same results as the Genetic Algorithm Tool, using the fitness function and options settings that were in place when you generated the M-file.
The input arguments to ga are
@fitnessfun A function handle to the M-file that computes the fitness function. Writing an M-File for the Function You Want to Optimize on page 1-5 explains how to write this M-file.
nvars The number of independent variables for the fitness function.
The output arguments are
x The final point
fval The value of the fitness function at x
For a description of additional output arguments, see the reference page for ga.
As an example, you can run the example described in Example: Rastrigins Function on page 2-6 from the command line by entering
[x fval] = ga(@rastriginsfcn, 2)
This returns
x =
    0.0027   -0.0052

fval =
    0.0068
Besides x and fval, this returns the following additional output arguments:
reason Reason the algorithm terminated
output Structure containing information about the performance of the algorithm at each generation
population Final population
scores Final scores
See the reference page for ga for more information about these arguments.
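For example, you can request all of these outputs in a single call (a sketch; with default options the final population has 20 individuals):

```matlab
% Request all output arguments from ga
[x, fval, reason, output, population, scores] = ga(@rastriginsfcn, 2);
reason                 % message describing why the algorithm stopped
output.generations     % number of generations the algorithm ran
size(population)       % 20-by-2 with the default population size of 20
```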
Setting Options
You can specify any of the options that are available in the Genetic Algorithm Tool by passing an options structure as an input argument to ga using the syntax
[x fval] = ga(@fitnessfun, nvars, options)
To create the options structure, enter

options = gaoptimset

This returns the structure options with the default values for its fields.
options = 
    PopulationType: 'doubleVector'
    PopInitRange: [2x1 double]
    PopulationSize: 20
    EliteCount: 2
    CrossoverFraction: 0.8000
    MigrationDirection: 'forward'
    MigrationInterval: 20
    MigrationFraction: 0.2000
    Generations: 100
    TimeLimit: Inf
    FitnessLimit: -Inf
    StallLimitG: 50
    StallLimitS: 20
    InitialPopulation: []
    InitialScores: []
    PlotInterval: 1
    CreationFcn: @gacreationuniform
    FitnessScalingFcn: @fitscalingrank
    SelectionFcn: @selectionstochunif
    CrossoverFcn: @crossoverscattered
    MutationFcn: @mutationgaussian
    HybridFcn: []
    PlotFcns: []
    OutputFcns: []
    Vectorized: 'off'
The function ga uses these default values if you do not pass in options as an input argument. The value of each option is stored in a field of the options structure, such as options.PopulationSize. You can display any of these values by entering options followed by the name of the field. For example, to display the size of the population, enter

options.PopulationSize
To create an options structure with a field value that is different from the default, for example to set PopulationSize to 100 instead of its default value of 20, enter

options = gaoptimset('PopulationSize', 100)
This creates the options structure with all values set to their defaults except for PopulationSize, which is set to 100. If you now enter

ga(@fitnessfun, nvars, options)

ga runs the genetic algorithm with a population size of 100.
If you subsequently decide to change another field in the options structure, such as setting PlotFcns to @gaplotbestf, which plots the best fitness function value at each generation, call gaoptimset with the syntax

options = gaoptimset(options, 'PlotFcns', @gaplotbestf)

This preserves the current values of all fields of options except for PlotFcns, which is changed to @gaplotbestf. Note that if you omit the input argument options, gaoptimset resets PopulationSize to its default value 20. You can also set both PopulationSize and PlotFcns with the single command

options = gaoptimset('PopulationSize',100,'PlotFcns',@gaplotbestf)
If you export a problem structure, ga_problem, from the Genetic Algorithm Tool, you can apply ga to it using the syntax
[x fval] = ga(ga_problem)
The problem structure contains the following fields: fitnessfcn Fitness function
The states of rand and randn are stored in the first two fields of output.
output = 
    randstate: [35x1 double]
    randnstate: [2x1 double]
    generations: 100
    funccount: 2000
    message: [1x64 char]
If you now run ga a second time, you get the same results.
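Concretely, the reproduction can be sketched as follows (rand('state', s) and randn('state', s) are the legacy generator-control calls these fields are designed for):

```matlab
% First run: ga records the generator states it started from
[x, fval, reason, output] = ga(@rastriginsfcn, 2);

% Restore those states and rerun; the results are reproduced
rand('state', output.randstate);
randn('state', output.randnstate);
[x2, fval2] = ga(@rastriginsfcn, 2);
```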
Note If you do not need to reproduce your results, it is better not to set the states of rand and randn, so that you get the benefit of the randomness in the genetic algorithm.
The last output argument, final_pop, is the final population. To run ga using final_pop as the initial population, enter

options = gaoptimset('InitialPop', final_pop);
[x, fval, reason, output, final_pop2] = ga(@fitnessfcn, nvars, options);

If you want, you can then use final_pop2, the final population from the second run, as the initial population for a third run.
You can plot the values of fval against the crossover fraction with the following commands:
plot(0:.05:1, record); xlabel('Crossover Fraction'); ylabel('fval')
[Figure: fval versus crossover fraction, for values from 0 to 1 in increments of 0.05]
The plot indicates that you get the best results by setting options.CrossoverFraction to a value somewhere between 0.6 and 0.95. You can get a smoother plot of fval as a function of the crossover fraction by running ga 20 times and averaging the values of fval for each crossover fraction. The following figure shows the resulting plot.
[Figure: average fval over 20 runs versus crossover fraction]
The plot narrows the range of best choices for options.CrossoverFraction to values between 0.7 and 0.9.
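The excerpt does not show the loop that produced these plots; one plausible reconstruction of the averaging experiment, assuming Rastrigin's function with 10 variables as the test problem, is:

```matlab
% Hedged reconstruction: average fval over 20 runs per crossover fraction.
% The fitness function and number of variables are assumptions.
fractions = 0:0.05:1;
record = zeros(size(fractions));
for i = 1:length(fractions)
    options = gaoptimset('CrossoverFraction', fractions(i));
    total = 0;
    for run = 1:20
        [x, fval] = ga(@rastriginsfcn, 10, options);
        total = total + fval;
    end
    record(i) = total/20;   % mean fval for this crossover fraction
end
plot(fractions, record); xlabel('Crossover Fraction'); ylabel('Average fval')
```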
Diversity
One of the most important factors in determining how well the genetic algorithm performs is the diversity of the population. If the average distance between individuals is large, the diversity is high; if the average distance is small, the diversity is low. Diversity affects the performance of the genetic algorithm. If the diversity is too large or too small, the algorithm might not perform well. You can control the amount of diversity in the population by various options settings, including the Initial range and the amount of mutation in Mutation options. The examples in the following sections illustrate how these options affect the behavior of the genetic algorithm:
Example Setting the Initial Range on page 4-30
Population Options
Population options control the characteristics of the individuals in the population. This section describes the following options:
Example Setting the Initial Range on page 4-30
Setting the Population Size on page 4-33
Note The initial range only restricts the range of the points in the initial population. Subsequent generations can contain points whose entries do not lie in the initial range.
If you know approximately where the solution to a problem lies, you should specify the initial range so that it contains your guess for the solution. However, the genetic algorithm can find the solution even if it does not lie in the initial range, provided that the populations have enough diversity.
The following example shows how the initial range affects the performance of the genetic algorithm. The example uses Rastrigins function, described in Example: Rastrigins Function on page 2-6. The minimum value of the function is 0, which occurs at the origin.
To run the example, make the following settings in the Genetic Algorithm Tool:
Set Fitness function to @rastriginsfcn.
Set Number of variables to 2.
Select Best fitness and Distance in the Plots pane.
Set Initial range to [1; 1.1].
Then click Start. The genetic algorithm returns the best fitness function value of approximately 2 and displays the plots in the following figure.
[Figure: best fitness plot (Best: 1.9899 Mean: 1.9911) and average distance between individuals, versus generation]
The upper plot the best fitness values at each generation shows little progress in lowering the fitness function. The lower plot shows the average distance between individuals at each generation, which is a good measure of the diversity of a population. For this setting of initial range, there is too little diversity for the algorithm to make progress. Next, try setting Initial range to [1; 100] and running the algorithm. The genetic algorithm returns the best fitness value of approximately 3.9 and displays the following plots.
[Figure: best fitness plot and average distance between individuals, versus generation, for Initial range [1; 100]]
This time, the genetic algorithm makes progress, but because the average distance between individuals is so large, the best individuals are far from the optimal solution. Finally, set Initial range to [1; 2] and run the genetic algorithm. This returns a best fitness value of approximately 0.012 and displays the following plots.
[Figure: best fitness plot and average distance between individuals, versus generation, for Initial range [1; 2]]
The diversity in this case is better suited to the problem, so the genetic algorithm returns a much better result than in the previous two cases.
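The same experiment can be run at the command line; PopInitRange is the options field corresponding to Initial range (it appears in the default options structure listed earlier in this chapter):

```matlab
% Command-line version of the third experiment, Initial range [1; 2]
options = gaoptimset('PopInitRange', [1; 2]);
[x, fval] = ga(@rastriginsfcn, 2, options)
```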
Note You should set Size to be at least the value of Number of variables, so that the individuals in each population span the space being searched.
You can experiment with different settings for Population size that return good results without taking a prohibitive amount of time to run.
[Figure: raw scores of the 20 sorted individuals]
The following plot shows the scaled values of the raw scores using rank scaling.
[Figure: Scaled Values Using Rank Scaling]
Because the algorithm minimizes the fitness function, lower raw scores have higher scaled values. Also, because rank scaling assigns values that depend only on an individuals rank, the scaled values shown would be the same for any population of size 20 and number of parents equal to 32.
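The plotted curve is consistent with a scaled value proportional to 1/sqrt(rank), normalized so that the scaled values sum to the number of parents. Treating that form as an assumption (the excerpt does not state the formula), the curve can be reproduced with:

```matlab
% Sketch of rank scaling (assumed form: proportional to 1/sqrt(rank))
n = 20;                          % population size
nParents = 32;                   % number of parents
r = 1:n;                         % rank 1 = best (lowest) raw score
scaled = 1./sqrt(r);
scaled = scaled * nParents/sum(scaled);   % assumed normalization
plot(r, scaled, '.'); xlabel('Sorted individuals'); ylabel('Scaled value')
```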
[Figure: Comparison of Rank and Top Scaling]
Because top scaling restricts parents to the fittest individuals, it creates less diverse populations than rank scaling. The following plot compares the variances of distances between individuals at each generation using rank and top scaling.
[Figure: Variance of Distance Between Individuals Using Rank and Top Scaling, versus generation]
Selection Options
The selection function chooses parents for the next generation based on their scaled values from the fitness scaling function. An individual can be selected more than once as a parent, in which case it contributes its genes to more than one child. The default selection function, Stochastic uniform, lays out a line in which each parent corresponds to a section of the line of length proportional to its scaled value. The algorithm moves along the line in steps of equal size. At each step, the algorithm allocates a parent from the section it lands on. A more deterministic selection function is Remainder, which performs two steps: In the first step, the function selects parents deterministically according to the integer part of the scaled value for each individual. For example, if an individuals scaled value is 2.3, the function selects that individual twice as a parent.
In the second step, the selection function selects additional parents using the fractional parts of the scaled values, as in stochastic uniform selection. The function lays out a line in sections, whose lengths are proportional to the fractional part of the scaled value of the individuals, and moves along the line in equal steps to select the parents. Note that if the fractional parts of the scaled values all equal 0, as can occur using Top scaling, the selection is entirely deterministic.
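The line-and-equal-steps procedure described above can be sketched as an M-file (illustrative, not the toolbox implementation):

```matlab
function parents = susSketch(scaled, nParents)
% Sketch of stochastic uniform selection: lay the scaled values out on a
% line and step along it in equal increments from a random start.
step = sum(scaled)/nParents;     % equal step size along the line
position = rand*step;            % random offset within the first step
edges = cumsum(scaled);          % right edge of each parent's section
parents = zeros(1, nParents);
for k = 1:nParents
    parents(k) = find(position <= edges, 1);   % section the step lands on
    position = position + step;
end
```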
Reproduction Options
Reproduction options control how the genetic algorithm creates the next generation. The options are
Elite count The number of individuals with the best fitness values in the current generation that are guaranteed to survive to the next generation. These individuals are called elite children. The default value of Elite count is 2.
When Elite count is at least 1, the best fitness value can only decrease from one generation to the next. This is what you want to happen, since the genetic algorithm minimizes the fitness function. Setting Elite count to a high value causes the fittest individuals to dominate the population, which can make the search less effective.
Crossover fraction The fraction of individuals in the next generation, other than elite children, that are created by crossover. The Crossover Fraction on page 4-42 describes how the value of Crossover fraction affects the performance of the genetic algorithm.
Both processes are essential to the genetic algorithm. Crossover enables the algorithm to extract the best genes from different individuals and recombine them into potentially superior children. Mutation adds to the diversity of a population and thereby increases the likelihood that the algorithm will generate individuals with better fitness values. Without mutation, the algorithm could only produce individuals whose genes were a subset of the combined genes in the initial population. See Creating the Next Generation on page 2-20 for an example of how the genetic algorithm applies mutation and crossover.
You can specify how many of each type of children the algorithm creates as follows:
Elite count, in Reproduction options, specifies the number of elite children.
Crossover fraction, in Reproduction options, specifies the fraction of the population, other than elite children, that are crossover children.
For example, if the Population size is 20, the Elite count is 2, and the Crossover fraction is 0.8, the numbers of each type of children in the next generation are as follows:
There are 2 elite children.
There are 18 individuals other than elite children, so the algorithm rounds 0.8*18 = 14.4 to 14 to get the number of crossover children.
The remaining 4 individuals, other than elite children, are mutation children.
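The arithmetic above, as a quick check at the command line:

```matlab
% Child counts for one generation, following the rounding rule described above
popSize = 20; eliteCount = 2; crossoverFraction = 0.8;
nCrossover = round(crossoverFraction*(popSize - eliteCount))   % 14
nMutation  = popSize - eliteCount - nCrossover                 % 4
```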
Mutation Options
The genetic algorithm applies mutations using the function that you specify in the Mutation function field. The default mutation function, Gaussian, adds a random number, or mutation, chosen from a Gaussian distribution, to each entry of the parent vector. Typically, the amount of mutation, which is proportional to the standard deviation of the distribution, decreases at each new generation. You can control the average amount of mutation that the algorithm applies to a parent in each generation through the Scale and Shrink options:
Scale controls the standard deviation of the mutation at the first generation, which is Scale multiplied by the range of the initial population, which you specify in the Initial range option.

Shrink controls the rate at which the average amount of mutation decreases. The standard deviation decreases linearly so that its final value equals (1 - Shrink) times its initial value at the first generation. For example, if Shrink has the default value of 1, the amount of mutation decreases to 0 at the final step.

You can see the effect of mutation by selecting the plot functions Distance and Range, and then running the genetic algorithm on a problem such as the one described in Example: Rastrigins Function on page 2-6. The following figure shows the plot.
[Figure: Average Distance Between Individuals (upper plot) and the range and mean of the fitness values (lower plot) versus generation]
The upper plot displays the average distance between points in each generation. As the amount of mutation decreases, so does the average distance between individuals, which is approximately 0 at the final generation. The lower plot displays a vertical line at each generation, showing the range from the smallest to the largest fitness value, as well as the mean fitness value. As the amount of mutation decreases, so does the range. These plots show that reducing the amount of mutation decreases the diversity of subsequent generations.

For comparison, the following figure shows the plots for Distance and Range when you set Shrink to 0.5.
[Figure: Average Distance Between Individuals and fitness range plots with Shrink set to 0.5]
With Shrink set to 0.5, the average amount of mutation decreases by a factor of 1/2 by the final generation. As a result, the average distance between individuals decreases by approximately the same factor.
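The linear schedule described above can be sketched in a few lines. This is an illustration of the description, not the toolbox's internal code; the variable names are assumptions.

```matlab
% Sketch of how the Gaussian mutation standard deviation shrinks
% linearly over the run, per the Scale and Shrink descriptions above
scale = 1.0;       % Scale option
shrink = 0.5;      % Shrink option
numGen = 100;      % total number of generations
initRange = 2;     % width of the Initial range, e.g. [-1; 1]

sigma0 = scale * initRange;                    % std. dev. at the first generation
gen = 0:numGen;
sigma = sigma0 * (1 - shrink * gen / numGen);  % linear decrease
% final value: sigma0 * (1 - shrink); with shrink = 1 it reaches 0
```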
f(x1, x2, ..., xn) = |x1| + |x2| + ... + |xn|

You can define this function as an inline function by setting Fitness function to
inline('sum(abs(x))')
To run the example,

- Set Fitness function to inline('sum(abs(x))').
- Set Number of variables to 10.
- Set Initial range to [-1; 1].
- Select Best fitness and Distance in the Plots pane.

First, run the example with the default value of 0.8 for Crossover fraction. This returns a best fitness value of approximately 0.2 and displays the following plots.
[Figure: Best fitness plot (Best: 0.23492, Mean: 0.48445) and Average Distance Between Individuals plot for the run with Crossover fraction 0.8]
In this case, the algorithm selects genes from the individuals in the initial population and recombines them. The algorithm cannot create any new genes because there is no mutation. The algorithm generates the best individual that it can using these genes at generation number 8, where the best fitness plot becomes level. After this, it creates new copies of the best individual, which are then selected for the next generation. By generation number 17, all individuals in the population are the same, namely, the best individual. When this occurs, the average distance between individuals is 0. Since the algorithm cannot improve the best fitness value after generation 8, it stalls after 50 more generations, because Stall generations is set to 50.
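These crossover-fraction experiments can also be reproduced at the command line. The sketch below assumes the ga(fitnessfcn, nvars, options) syntax shown later in this chapter and the gaoptimset option names CrossoverFraction, PopulationSize, and PopInitRange.

```matlab
% Command-line sketch of the crossover-only experiment
fitness = inline('sum(abs(x))');
options = gaoptimset('CrossoverFraction', 1, ...  % 1: crossover only, no mutation
                     'PopulationSize', 20, ...
                     'PopInitRange', [-1; 1]);
[x, fval] = ga(fitness, 10, options);
```

Setting 'CrossoverFraction' to 0 instead reproduces the mutation-only run discussed below.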
[Figure: Best fitness and Distance plots for the run with Crossover fraction set to 0]
In this case, the random changes that the algorithm applies never improve the fitness value of the best individual at the first generation. While mutation improves the genes of other individuals, as you can see in the upper plot from the decrease in the mean value of the fitness function, the improved genes are never combined with the genes of the best individual because there is no crossover. As a result, the best fitness plot is level and the algorithm stalls at generation number 50.
deviations of the best fitness values in all the preceding generations, for each value of the Crossover fraction. To run the demo, enter
deterministicstudy
at the MATLAB prompt. When the demo is finished, the plots appear as in the following figure.
[Figure: after 10 iterations, the upper plot shows a color-coded display of the best fitness value at each iteration for each value of CrossoverFraction; the lower plot shows the mean and standard deviation of the score for each value of CrossoverFraction]
The lower plot shows the means and standard deviations of the best fitness values over 10 generations, for each of the values of the crossover fraction. The upper plot shows a color-coded display of the best fitness values in each generation. For this fitness function, setting Crossover fraction to 0.8 yields the best result. However, for another fitness function, a different setting for Crossover fraction might yield the best result.
[Figure: plot of the fitness function, showing its two local minima]
The function has two local minima, one at x = 0, where the function value is -1, and the other at x = 21, where the function value is -1 - 1/e. Since the latter value is smaller, the global minimum occurs at x = 21.
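The M-file behind @two_min is not reproduced at this point in the text. A function consistent with the stated minima can be sketched as follows; the exact toolbox example may differ in detail.

```matlab
function y = two_min(x)
% Sketch of a function with a local minimum of -1 at x = 0
% and a global minimum of -1 - 1/e at x = 21
if x <= 20
    y = -exp(-(x/20).^2);           % smooth well with minimum -1 at x = 0
else
    y = -exp(-1) + (x-20).*(x-22);  % parabola with minimum -1 - 1/e at x = 21
end
```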
- Set Fitness function to @two_min. - Set Number of variables to 1. - Click Start. The genetic algorithm returns a point very close to the local minimum at x = 0.
The following custom plot shows why the algorithm finds the local minimum rather than the global minimum. The plot shows the range of individuals in each generation and the best individual.
[Figure: custom plot showing the best individual and the range of individuals in each generation with the default Initial range]
Note that all individuals are between -2 and 2.5. While this range is larger than the default Initial range of [0;1] because of mutation, it is not large enough to explore points near the global minimum at x = 21. One way to make the genetic algorithm explore a wider range of points (that is, to increase the diversity of the populations) is to increase the Initial range. The Initial range does not have to include the point x = 21, but it must be large enough that the algorithm generates individuals near x = 21. Set Initial range to [0;15] as shown in the following figure.
Then click Start. The genetic algorithm returns a point very close to 21.
This time, the custom plot shows a much wider range of individuals. By the second generation there are individuals greater than 21, and by generation 12, the algorithm finds a best individual that is approximately equal to 21.
[Figure: custom plot (Best: 20.9876) showing the best individual and the range of individuals in each generation with Initial range set to [0;15]]
[Figure: Best fitness plot (Best: 5.0444, Mean: 48.7926) over 300 generations]
Note that the algorithm stalls at approximately generation number 170, that is, there is no immediate improvement in the fitness function after generation 170. If you restore Stall generations to its default value of 50, the algorithm terminates at approximately generation number 230. If the genetic algorithm stalls repeatedly with the current setting for Generations, you can try increasing both Generations and Stall generations to improve your results. However, changing other options might be more effective.
Note When Mutation function is set to Gaussian, increasing the value of Generations might actually worsen the final result. This can occur because the Gaussian mutation function decreases the average amount of mutation in each generation by a factor that depends on Generations. Consequently, the setting for Generations affects the behavior of the algorithm.
[Figure: plot of the objective function, with minimum at (1, 1)]
The toolbox provides an M-file, dejong2fcn.m, that computes the function. To see a demo of this example, enter
hybriddemo
at the MATLAB prompt. To explore the example, first enter gatool to open the Genetic Algorithm Tool and enter the following settings:

- Set Fitness function to @dejong2fcn.
- Set Number of variables to 2.
- Set Population size to 10.

Before adding a hybrid function, try running the genetic algorithm by itself by clicking Start. The genetic algorithm displays the following results in the Status and results pane.
The final point is close to the true minimum at (1, 1). You can improve this result by setting Hybrid function to fminunc in Hybrid function options.
When the genetic algorithm terminates, the function fminunc takes the final point of the genetic algorithm as its initial point and returns a more accurate result, as shown in the Status and results pane.
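The same hybrid run can be sketched at the command line, assuming the ga(fitnessfcn, nvars, options) syntax and the gaoptimset option name HybridFcn.

```matlab
% Command-line sketch of the hybrid run: ga followed by fminunc
options = gaoptimset('PopulationSize', 10, 'HybridFcn', @fminunc);
[x, fval] = ga(@dejong2fcn, 2, options);
```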
Vectorize Option
The genetic algorithm usually runs faster if you vectorize the fitness function. This means that the genetic algorithm calls the fitness function only once, but expects the fitness function to compute the fitness for all individuals in the current population at once. To vectorize the fitness function, write the M-file that computes the function so that it accepts a matrix with arbitrarily many rows, corresponding to the individuals in the population. For example, to vectorize the function

f(x1, x2) = x1^2 - 2*x1*x2 + 6*x1 + x2^2 - 6*x2

write the M-file using the following code:

z = x(:,1).^2 - 2*x(:,1).*x(:,2) + 6*x(:,1) + x(:,2).^2 - 6*x(:,2);
The colon in the first entry of x indicates all the rows of x, so that x(:, 1) is a vector. The .^ and .* operators perform element-wise operations on the vectors. Set the Vectorize option to On.
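Put together, a complete vectorized fitness M-file might look like this; the file name vecfitness is illustrative, not part of the toolbox.

```matlab
function z = vecfitness(x)
% Vectorized fitness function (hypothetical file name vecfitness.m)
% Each row of x is one individual; z returns one fitness value per row
z = x(:,1).^2 - 2*x(:,1).*x(:,2) + 6*x(:,1) + x(:,2).^2 - 6*x(:,2);
```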
Note The fitness function must accept an arbitrary number of rows to use the Vectorize option.
The following comparison, run at the command line, shows the improvement in speed with Vectorize set to On.
tic; ga(@rastriginsfcn, 20); toc
elapsed_time =
    4.3660

options = gaoptimset('Vectorize', 'on');
tic; ga(@rastriginsfcn, 20, options); toc
elapsed_time =
    0.5810
5
Using Direct Search
Overview of the Pattern Search Tool (p. 5-2) Provides an overview of the Pattern Search Tool.

Performing a Pattern Search from the Command Line (p. 5-14) Explains how to perform a pattern search from the command line.

Setting Pattern Search Options (p. 5-19) Explains how to set options for a pattern search.
at the MATLAB prompt. This opens the Pattern Search Tool, as shown in the following figure.
Objective function The function you want to minimize. Enter a handle to an M-file function that computes the objective function. Writing an M-File for the Function You Want to Optimize on page 1-5 describes how to write the M-file.

Start point The starting point for the pattern search algorithm.
Note Do not use the Editor/Debugger to debug the M-file for the objective function while running the Pattern Search Tool. Doing so results in Java exception messages in the Command Window and makes debugging more difficult. Instead, call the objective function directly from the command line or pass it to the function patternsearch. To facilitate debugging, you can export your problem from the Pattern Search Tool to the MATLAB workspace, as described in Importing and Exporting Options and Problems on page 5-10.
The following figure shows these fields for the example described in Example: Finding the Minimum of a Function on page 3-6.
Constrained Problems
You can enter any constraints for the problem in the following fields in the Constraints pane:

Linear inequalities Enter the following for inequality constraints of the form A*x ≤ b:
- Enter the matrix A in the A = field.
- Enter the vector b in the b = field.

Linear equalities Enter the following for equality constraints of the form Aeq*x = beq:
- Enter the matrix Aeq in the Aeq = field.
- Enter the vector beq in the beq = field.

Bounds Enter the following information for bound constraints of the form lb ≤ x and x ≤ ub:
- Enter the vector lb for the lower bound in the Lower = field.
- Enter the vector ub for the upper bound in the Upper = field.

Leave the fields corresponding to constraints that do not appear in the problem empty.
psearchtool
an M-file included in the toolbox that computes the objective function for the example. Because the matrices and vectors defining the starting point and constraints are large, it is more convenient to set their values as variables in the MATLAB workspace first and then enter the variable names in the Pattern Search Tool. To do so, enter
x0 = [2 1 0 9 1 0]; Aineq = [-8 7 3 -4 9 0]; bineq = [7]; Aeq = [7 1 8 3 3 3; 5 0 5 1 5 8; 2 6 7 1 1 8; 1 0 0 0 0 0]; beq = [84 62 65 1];
Then, enter the following in the Pattern Search Tool:

- Set Initial point to x0.
- Set the Linear inequalities fields: set A = to Aineq and b = to bineq.
- Set the Linear equalities fields: set Aeq = to Aeq and beq = to beq.

The following figure shows these settings in the Pattern Search Tool.
Then click Start to run the pattern search. When the search is finished, the results are displayed in Status and results pane, as shown in the following figure.
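The same constrained search can be sketched at the command line with the workspace variables defined above. Here @objfun is a placeholder for the example's objective function, which is not named in this excerpt.

```matlab
% Command-line sketch of the constrained pattern search
% (@objfun is a hypothetical handle standing in for the example's objective)
[x, fval] = patternsearch(@objfun, x0, Aineq, bineq, Aeq, beq);
```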
Displaying Plots
The Plots pane, shown in the following figure, enables you to display various plots of the results of a pattern search.
Select the check boxes next to the plots you want to display. For example, if you select Best function value and Mesh size, and run the example described in
Example: Finding the Minimum of a Function on page 3-6, the tool displays the plots shown in the following figure.
The upper plot displays the objective function value at each iteration. The lower plot displays the coordinates of the point with the best objective function value at the current iteration.
Note When you display more than one plot, clicking on any plot displays a larger version of it in a separate window.
Plot Options on page 6-4 describes the types of plots you can create.
Setting Options
You can set options for a pattern search in the Options pane, shown in the figure below.
For a detailed description of the available options, see Pattern Search Options on page 6-19.
Pattern Search Tool. The following sections describe how to import and export options and problem structures.
The dialog provides the following options:

To save the objective function and options in a MATLAB structure, select Export problem and options to a MATLAB structure named and enter a name for the structure. If you have run a pattern search in the current session and you select Include information needed to resume this run, the final point from the last search is saved in place of Start point. Use this option if you want to run the pattern search at a later time from the final point of the last search. See Importing a Problem on page 5-13.

To save only the options, select Export options to a MATLAB structure named and enter a name for the options structure.
To save the results of the last run of the algorithm, select Export results to a MATLAB structure named and enter a name for the results structure.
[x fval] = patternsearch(my_psproblem)
This returns
x = 1.0010 -2.3027 9.5131 -0.0474 -0.1977 1.3083
fval = 2.1890e+003
See Performing a Pattern Search from the Command Line on page 5-14 for more information.
Importing Options
To import an options structure for a pattern search from the MATLAB workspace, select Import Options from the File menu. This opens a dialog box that displays a list of the valid pattern search options structures in the MATLAB workspace. When you select an options structure and click Import, the Pattern Search Tool resets its options to the values in the imported structure.
Note You cannot import options structures that contain any invalid option fields. Structures with invalid fields are not displayed in the Import Pattern Search Options dialog box.
You can create an options structure in either of the following ways:

- Calling psoptimset with options as the output
- Saving the current options from the Export to Workspace dialog box in the Pattern Search Tool
Importing a Problem
To import a problem that you previously exported from the Pattern Search Tool, select Import Problem from the File menu. This opens the dialog box that displays a list of the pattern search problem structures in the MATLAB workspace. When you select a problem structure and click OK, the Pattern Search Tool resets the problem definition and the options to the values in the imported structure. In addition, if you selected Include information needed to resume this run when you created the problem structure, the tool resets Start point to the final point of the last run prior to exporting the structure. See Exporting Options, Problems, and Results on page 5-11.
Generate M-File
To create an M-file that runs a pattern search using the objective function and options you specify in the Pattern Search Tool, select Generate M-File from the File menu and save the M-file in a directory on the MATLAB path. Calling this M-file at the command line returns the same results as the Pattern Search Tool, using the objective function and options settings that were in place when you generated the M-file.
The output arguments are

x The final point
fval The value of the objective function at x

The required input arguments are

@objectfun A function handle to the objective function objectfun, which you can write as an M-file. See Writing an M-File for the Function You Want to Optimize on page 1-5 to learn how to do this.
x0 The initial point for the pattern search algorithm

As an example, you can run the example described in Example: Finding the Minimum of a Function on page 3-6 from the command line by entering
[x fval] = patternsearch(@ps_example, [2.1 1.7])
This returns
Optimization terminated: Current mesh size 9.5367e-007 is less than 'TolMesh'.
x = -4.7124 -0.0000
fval = -2.0000
where A is a matrix and b is a vector that represent inequality constraints of the form A*x ≤ b. Aeq is a matrix and beq is a vector that represent equality constraints of the form Aeq*x = beq. lb and ub are vectors representing bound constraints of the form lb ≤ x and x ≤ ub, respectively. You need to pass in only the constraints that are part of the problem. For example, if there are no bound constraints, use the syntax
[x fval] = patternsearch(@objfun, x0, A, b, Aeq, beq)
Use empty brackets [] for constraint arguments that are not needed for the problem. For example, if there are no inequality constraints, use the syntax
[x fval] = patternsearch(@objfun, x0, [], [], Aeq, beq, lb, ub)
Besides x and fval, this returns the following additional output arguments:
exitflag Integer indicating whether the algorithm was successful
output Structure containing information about the performance of the solver

See the reference page for patternsearch for more information about these arguments.
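Requesting all four outputs looks like this; @objfun and x0 are placeholders for a problem you define.

```matlab
% Sketch of requesting the additional outputs from patternsearch
% (@objfun and x0 are hypothetical placeholders)
[x, fval, exitflag, output] = patternsearch(@objfun, x0);
% output is a structure describing the run, for example the number of
% iterations and function evaluations the solver performed
```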
Setting Options
You can specify any of the options that are available in the Pattern Search Tool by passing an options structure as an input argument to patternsearch using the syntax

[x fval] = patternsearch(@objfun, x0, A, b, Aeq, beq, lb, ub, options)

If the problem is unconstrained, you must pass in empty brackets for the constraint arguments using the syntax

[x fval] = patternsearch(@objfun, x0, [], [], [], [], [], [], options)
Entering options = psoptimset with no input arguments returns the options structure with the default values for its fields.
options =
            TolMesh: 1.0000e-006
               TolX: 1.0000e-006
             TolFun: 1.0000e-006
            TolBind: 0.0010
       MaxIteration: '100*numberofvariables'
        MaxFunEvals: '2000*numberofvariables'
    MeshContraction: 0.5000
      MeshExpansion: 2
    MeshAccelerator: 'off'
         MeshRotate: 'on'
    InitialMeshSize: 1
          ScaleMesh: 'on'
        MaxMeshSize: Inf
         PollMethod: 'positivebasis2n'
       CompletePoll: 'off'
       PollingOrder: 'consecutive'
       SearchMethod: []
     CompleteSearch: 'off'
            Display: 'final'
         OutputFcns: []
           PlotFcns: []
       PlotInterval: 1
              Cache: 'off'
          CacheSize: 10000
           CacheTol: 2.2204e-016
         Vectorized: 'off'
The function patternsearch uses these default values if you do not pass in options as an input argument. The value of each option is stored in a field of the options structure, such as options.MeshExpansion. You can display any of these values by entering options followed by the name of the field. For example, to display the mesh expansion factor for the pattern search, enter
options.MeshExpansion ans = 2
To create an options structure with a field value that is different from the default, use the function psoptimset. For example, to change the mesh expansion factor to 3 instead of its default value 2, enter
options = psoptimset('MeshExpansion', 3)
This creates the options structure with all values set to their defaults except for MeshExpansion, which is set to 3. If you now call patternsearch with the argument options, the pattern search uses a mesh expansion factor of 3. If you subsequently decide to change another field in the options structure, such as setting PlotFcns to @psplotmeshsize, which plots the mesh size at each iteration, call psoptimset with the syntax
options = psoptimset(options, 'PlotFcns', @psplotmeshsize)

This preserves the current values of all fields of options except for PlotFcns, which is changed to @psplotmeshsize. Note that if you omit the options input argument, psoptimset resets MeshExpansion to its default value, which is 2.0. You can also set both MeshExpansion and PlotFcns with the single command
options = psoptimset('MeshExpansion',3,'PlotFcns',@psplotmeshsize)
except for the default value of 'Display', which is 'final' when created by psoptimset, but 'none' when created in the Pattern Search Tool. You can also export an entire problem from the Pattern Search Tool and run it from the command line. See Example Running patternsearch on an Exported Problem on page 5-12 for an example.
Poll Method
At each iteration, a pattern search polls the points in the current mesh, that is, it computes the objective function at the mesh points to see if there is one whose function value is less than the function value at the current point. How Pattern Search Works on page 3-13 provides an example of polling. You can specify the pattern that defines the mesh with the Poll method option. The default pattern, Positive basis 2N, consists of the following 2N directions, where N is the number of independent variables for the objective function.

[1 0 0 ... 0]
[0 1 0 ... 0]
...
[0 0 0 ... 1]
[-1 0 0 ... 0]
[0 -1 0 ... 0]
...
[0 0 0 ... -1]

For example, if the objective function has three independent variables, Positive basis 2N consists of the following six vectors.
[1 0 0]   [0 1 0]   [0 0 1]   [-1 0 0]   [0 -1 0]   [0 0 -1]

Alternatively, you can set Poll method to Positive basis Np1, the pattern consisting of the following N + 1 directions.

[1 0 0 ... 0]
[0 1 0 ... 0]
...
[0 0 0 ... 1]
[-1 -1 -1 ... -1]

For example, if the objective function has three independent variables, Positive basis Np1 consists of the following four vectors.

[1 0 0]   [0 1 0]   [0 0 1]   [-1 -1 -1]

A pattern search will sometimes run faster using Positive basis Np1 as the Poll method, because the algorithm searches fewer points at each iteration.
For example, if you run a pattern search on the example described in Example A Constrained Problem on page 5-6, the algorithm performs 2080 function evaluations with Positive basis 2N, the default Poll method, but only 1413 function evaluations using Positive basis Np1.
However, if the objective function has many local minima, using Positive basis 2N as the Poll method might avoid finding a local minimum that is not the global minimum, because the search explores more points around the current point at each iteration.
Complete Poll
By default, if the pattern search finds a mesh point that improves the value of the objective function, it stops the poll and sets that point as the current point for the next iteration. When this occurs, some mesh points might not get polled. Some of these unpolled points might have an objective function value that is even lower than the first one the pattern search finds. For problems in which there are several local minima, it is sometimes preferable to make the pattern search poll all the mesh points at each iteration and choose the one with the best objective function value. This enables the pattern search to explore more points at each iteration and thereby potentially avoid a local minimum that is not the global minimum. You can make the pattern search poll the entire mesh by setting Complete poll to On in Poll options.
[Figure: surface plot of the objective function, with global minimum -25 at (0, 0) and local minimum -16 at (0, 9)]
The global minimum of the function occurs at (0, 0), where its value is -25. However, the function also has a local minimum at (0, 9), where its value is -16. To create an M-file that computes the function, copy and paste the following code into a new M-file in the MATLAB Editor.
function z = poll_example(x) if x(1)^2 + x(2)^2 <= 25 z = x(1)^2 + x(2)^2 - 25; elseif x(1)^2 + (x(2) - 9)^2 <= 16 z = x(1)^2 + (x(2) - 9)^2 - 16; else z = 0; end
To run a pattern search on the function, enter the following in the Pattern Search Tool: Set Objective function to @poll_example. Set Start point to [0 5]. Set Level of display to Iterative in Display to command window options. Click Start to run the pattern search with Complete poll set to Off, its default value. The Pattern Search Tool displays the results in the Status and results pane, as shown in the following figure.
The pattern search returns the local minimum at (0, 9). At the initial point, (0, 5), the objective function value is 0. At the first iteration, the search polls the following mesh points.

f((0, 5) + (1, 0)) = f(1, 5) = 0
f((0, 5) + (0, 1)) = f(0, 6) = -7

As soon as the search polls the mesh point (0, 6), at which the objective function value is less than at the initial point, it stops polling the current mesh and sets the current point at the next iteration to (0, 6). Consequently, the search moves toward the local minimum at (0, 9) at the first iteration. You can see this by looking at the first two lines of the command-line display.
Iter   f-count   MeshSize   f(x)   Method
0      1         1          0      Start iterations
1      3         2          -7     Successful Poll
Note that the pattern search performs only two evaluations of the objective function at the first iteration, increasing the total function count from 1 to 3.
Next, set Complete poll to On and click Start. The Status and results pane displays the following results.
This time, the pattern search finds the global minimum at (0, 0). The difference between this run and the previous one is that with Complete poll set to On, at the first iteration the pattern search polls all four mesh points.

f((0, 5) + (1, 0)) = f(1, 5) = 0
f((0, 5) + (0, 1)) = f(0, 6) = -7
f((0, 5) + (-1, 0)) = f(-1, 5) = 0
f((0, 5) + (0, -1)) = f(0, 4) = -9

Because the last mesh point has the lowest objective function value, the pattern search selects it as the current point at the next iteration. The first two lines of the command-line display show this.
Iter   f-count   MeshSize   f(x)   Method
0      1         1          0      Start iterations
1      5         2          -9     Successful Poll
In this case, the objective function is evaluated four times at the first iteration. As a result, the pattern search moves toward the global minimum at (0, 0). The following figure compares the sequence of points returned when Complete poll is set to Off with the sequence when Complete poll is On.
[Figure: the sequence of points with Complete poll Off moves toward the local minimum, while the sequence with Complete poll On moves toward the global minimum]
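The complete-poll comparison can also be run at the command line, using the psoptimset field CompletePoll shown in the default options structure earlier. This sketch assumes poll_example.m is on the path.

```matlab
% Command-line sketch of the complete-poll run
options = psoptimset('CompletePoll', 'on');
[x, fval] = patternsearch(@poll_example, [0 5], ...
                          [], [], [], [], [], [], options);
```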
Then enter the settings shown in the following figure in the Pattern Search Tool.
For comparison, click Start to run the example without a search method. This displays the plots shown in the following figure.
[Figure: function value and mesh size versus iteration for the run without a search method]
To see the effect of using a search method, select Positive Basis Np1 in the Search method field in Search options. This sets the search method to be a pattern search using the pattern for Positive basis Np1. Then click Start to run the pattern search. This displays the following plots.
[Figure: function value and mesh size versus iteration for the run with the Positive basis Np1 search method]
Note that using the search method reduces the total function count (the number of times the objective function was evaluated) by almost 50 percent, and reduces the number of iterations from 270 to 120.
To also display the values of the mesh size and objective function at the command line, set Level of Display to Iterative in the Display to command window options.
When you run the example described in Example A Constrained Problem on page 5-6, the Pattern Search Tool displays the following plot.
[Figure: mesh size versus iteration, plotted on a linear scale]
To see the changes in mesh size more clearly, change the y-axis to logarithmic scaling as follows:
1 Select Axes Properties from the Edit menu in the plot window.
2 Select Log.
When you click OK, the plot appears as shown in the following figure.
[Figure: mesh size versus iteration, plotted on a logarithmic scale]
The first 37 iterations result in successful polls, so the mesh sizes increase steadily during this time. You can see that the first unsuccessful poll occurs at iteration 38 by looking at the command-line display for that iteration.
Iter   f-count   MeshSize     f(x)   Method
36     39        6.872e+010   3486   Successful Poll
37     40        1.374e+011   3486   Successful Poll
38     43        6.872e+010   3486   Refine Mesh
Note that at iteration 37, which is successful, the mesh size doubles for the next iteration. But at iteration 38, which is unsuccessful, the mesh size is multiplied by 0.5. To see how Mesh expansion and Mesh contraction affect the pattern search, make the following changes:

- Set Mesh expansion to 3.0.
- Set Mesh contraction to 0.75.
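The equivalent command-line settings use the MeshExpansion and MeshContraction fields shown in the default options structure earlier.

```matlab
% Equivalent option settings at the command line (sketch)
options = psoptimset('MeshExpansion', 3.0, 'MeshContraction', 0.75);
```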
Then click Start. The Status and results pane shows that the final point is approximately the same as with the default settings of Mesh expansion and Mesh contraction, but that the pattern search takes longer to reach that point.
The algorithm halts because it exceeds the maximum number of iterations, whose value you can set in the Max iteration field in the Stopping criteria options. The default value is 100 times the number of variables for the objective function, which is 6 in this example. When you change the scaling of the y-axis to logarithmic, the mesh size plot appears as shown in the following figure.
[Figure: mesh size versus iteration with Mesh expansion 3.0 and Mesh contraction 0.75, on a logarithmic scale]
Note that the mesh size increases faster with Mesh expansion set to 3.0, as compared with the default value of 2.0, and decreases more slowly with Mesh contraction set to 0.75, as compared with the default value of 0.5.
Mesh Accelerator
The mesh accelerator can make a pattern search converge faster to the optimal point by reducing the number of iterations required to reach the mesh tolerance. When the mesh size is below a certain value, the pattern search contracts the mesh size by a factor smaller than the Mesh contraction factor.
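At the command line, the accelerator corresponds to the MeshAccelerator field shown in the default options structure earlier.

```matlab
% Enabling the mesh accelerator at the command line (sketch)
options = psoptimset('MeshAccelerator', 'on');
```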
Note We recommend that you only use the mesh accelerator for problems in which the objective function is not too steep near the optimal point, or you might lose some accuracy. For differentiable problems, this means that the absolute value of the derivative is not too large near the solution.
To use the mesh accelerator, set Accelerator to On in Mesh options. When you run the example described in Example A Constrained Problem on page 5-6, the number of iterations required to reach the mesh tolerance is 246, as compared with 270 when Accelerator is set to Off. You can see the effect of the mesh accelerator by comparing the mesh sizes after iteration 200, as shown in the following figure:
[Figure: mesh size after iteration 200 with Accelerator off (upper plot) and Accelerator on (lower plot)]
In both cases, the mesh sizes are the same until iteration 226, but differ at iteration 227. The MATLAB Command Window displays the following lines for iterations 226 and 227 with Accelerator set to Off.
Note that the mesh size is multiplied by 0.5, the default value of the Mesh contraction factor.
For comparison, the Command Window displays the following lines for the same iteration numbers with Accelerator set to On.
Iter   f-count   MeshSize     f(x)   Method
226    1501      6.104e-005   2189   Refine Mesh
227    1516      1.526e-005   2189   Refine Mesh
Cache Options
Typically, at any given iteration of a pattern search, some of the mesh points might coincide with mesh points at previous iterations. By default, the pattern search recomputes the objective function at these mesh points even though it has already computed their values and found that they are not optimal. If computing the objective function takes a long time (say, several minutes), this can make the pattern search run significantly longer. You can eliminate these redundant computations by using a cache, that is, by storing a history of the points that the pattern search has already visited. To do so, set Cache to On in Cache options. At each poll, the pattern search checks to see whether the current mesh point is within a specified tolerance, Tolerance, of a point in the cache. If so, the search does not compute the objective function for that point, but uses the cached function value and moves on to the next point.
Note When Cache is set to On, the pattern search might fail to identify a point in the current mesh that improves the objective function because it is within the specified tolerance of a point in the cache. As a result, the pattern search might run for more iterations with Cache set to On than with Cache set to Off. It is generally a good idea to keep the value of Tolerance very small, especially for highly nonlinear objective functions.
To illustrate this, select Best function value and Function count in the Plots pane and run the example described in Example A Constrained Problem on page 5-6 with Cache set to Off. After the pattern search finishes, the plots appear as shown in the following figure.
(Figure: a plot titled Best Function Value: 2189.0301 and a Function count plot, both versus iterations 0 through 300, with Cache set to Off.)
Note that the total function count is 2080. Now, set Cache to On and run the example again. This time, the plots appear as shown in the following figure.
(Figure: the same Best Function Value: 2189.0301 and Function count plots, versus iterations 0 through 300, with Cache set to On.)
Bind tolerance specifies how close a point must be to a linear constraint boundary before the constraint is considered to be active. When a linear constraint is active, the pattern search polls points in directions parallel to the linear constraint boundary as well as the mesh points. Usually, you should set Bind tolerance to be at least as large as the maximum of Mesh tolerance, X tolerance, and Function tolerance.
subject to the constraints

    -11x1 + 10x2 ≤ 10
    10x1 - 10x2 ≤ 10

Note that you can compute the objective function using the function norm. The feasible region for the problem lies between the two lines in the following figure.
(Figure: the feasible region, lying between the two constraint boundary lines.)
Iter   f-count   MeshSize   f(x)
   3         1     0.125    1.487
   4         1     0.0625   1.487
The pattern search contracts the mesh at each iteration until one of the mesh points lies in the feasible region. The following figure shows a close-up of the initial point and mesh points at iteration 5.
(Figure: Initial Point and Mesh Points at Iteration 5, a close-up showing the initial point and the surrounding mesh points.)
The top mesh point, which is (-1.001, -1.0375), has a smaller objective function value than the initial point, so the poll is successful. Because the distance from the initial point to the lower boundary line is greater than the default value of Bind tolerance, which is 0.0001, the pattern search does not consider the linear constraint 10x1 - 10x2 ≤ 10 to be active, so it does not search points in a direction parallel to the boundary line.
This time, the display in the MATLAB Command Window shows that the first two iterations are successful.
Iter   f-count   MeshSize   f(x)     Method
   0         1          1   1.487    Start iterations
   1         2          2   0.7817   Successful Poll
   2         3          4   0.6395   Successful Poll
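The MeshSize column above doubles after each successful poll (1, 2, 4), while earlier displays showed contraction by 0.5 after unsuccessful polls. A minimal Python sketch of that update rule, with the expansion factor of 2 inferred from this display (the function name is hypothetical):

```python
def update_mesh(mesh_size, poll_succeeded, expansion=2.0, contraction=0.5):
    """Expand the mesh after a successful poll, contract it otherwise.
    Factors are inferred from the iteration displays in this section."""
    return mesh_size * (expansion if poll_succeeded else contraction)

sizes = [1.0]
for success in (True, True):  # the two successful polls shown above
    sizes.append(update_mesh(sizes[-1], success))
assert sizes == [1.0, 2.0, 4.0]
```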
Because the distance from the initial point to the boundary is less than Bind tolerance, the second linear constraint is active. In this case, the pattern search polls points in directions parallel to the boundary line 10x1 - 10x2 = 10, resulting in a successful poll. The following figure shows the initial point with two additional search points in directions parallel to the boundary.
The following figure compares the sequences of points during the first 20 iterations of the pattern search for both settings of Bind tolerance.
(Figure: First 20 Iterations for Two Settings of Bind Tolerance, comparing the sequences of points for Bind tolerance = .0001 and Bind tolerance = .01.)
Note that when Bind tolerance is set to .01, the points move toward the optimal point more quickly. The pattern search requires only 90 iterations. When Bind tolerance is set to .0001, the search requires 124 iterations. However, when the feasible region does not contain very acute angles, unlike the region in this example, increasing Bind tolerance can increase the number of iterations required, because the pattern search tends to poll more points.
6
Function Reference
Functions Listed by Category (p. 6-2)    Lists the functions in the toolbox by category.
Genetic Algorithm Options (p. 6-3)       Describes the options for the genetic algorithm.
Pattern Search Options (p. 6-19)         Describes the options for pattern search.
Functions Alphabetical List (p. 6-28)    Lists the functions in the toolbox alphabetically.
Genetic Algorithm

Function     Description
ga           Find the minimum of a function using the genetic algorithm
gaoptimget   Get values of a genetic algorithm options structure
gaoptimset   Create a genetic algorithm options structure
gatool       Open the Genetic Algorithm Tool
Direct Search
Function        Description
patternsearch   Find the minimum of a function using a pattern search
psoptimget      Get values of a pattern search options structure
psoptimset      Create a pattern search options structure
psearchtool     Open the Pattern Search Tool
You can identify an option in one of two ways:

- By its label, as it appears in the Genetic Algorithm Tool
- By its field name in the options structure

For example:

- Population type refers to the label of the option in the Genetic Algorithm Tool.
- PopulationType refers to the corresponding field of the options structure.

The genetic algorithm options are divided into the following categories:

- Plot Options on page 6-4
- Population Options on page 6-5
- Fitness Scaling Options on page 6-7
- Selection Options on page 6-8
- Reproduction Options on page 6-10
- Mutation Options on page 6-10
- Crossover Options on page 6-12
- Migration Options on page 6-15
- Output Function Options on page 6-16
- Stopping Criteria Options on page 6-17
- Hybrid Function Option on page 6-17
- Vectorize Option on page 6-17
Plot Options
Plot options enable you to plot data from the genetic algorithm as it is running. When you select plot functions and run the genetic algorithm, a plot window displays each plot in a separate axis. You can stop the algorithm at any time by clicking the Stop button on the display window.
Plot interval (PlotInterval) specifies the number of generations between
consecutive points in the plot.

You can choose from the following plot functions:

- Best fitness (@gaplotbestf) plots the best function value in each generation versus iteration number.
- Expectation (@gaplotexpectation) plots the expected number of children versus the raw scores at each generation.
- Score diversity (@gaplotscorediversity) plots a histogram of the scores at each generation.
- Stopping (@plotstopping) plots stopping criteria levels.
- Best individual (@gaplotbestindiv) plots the vector entries of the individual with the best fitness function value in each generation.
- Genealogy (@gaplotgenealogy) plots the genealogy of individuals. Lines from one generation to the next are color-coded as follows:
  - Red lines indicate mutation children.
  - Blue lines indicate crossover children.
  - Black lines indicate elite individuals.
- Scores (@gaplotscores) plots the scores of the individuals at each generation.
- Distance (@gaplotdistance) plots the average distance between individuals at each generation.
- Range (@gaplotrange) plots the minimum, maximum, and mean fitness function values in each generation.
- Selection (@gaplotselection) plots a histogram of the parents.
Custom function enables you to use plot functions of your own. In the Custom function field, enter a function handle to a custom plot function. Example
Creating a Custom Plot Function on page 4-8 gives an example. Structure of the Plot Functions on page 6-5 describes the structure of a plot function. To display a plot when calling ga from the command line, set the PlotFcns field of options to be a function handle to the plot function. For example, to display the best fitness plot, set options as follows.
options = gaoptimset('PlotFcns', @gaplotbestf);
To display multiple plots, use the syntax

options = gaoptimset('PlotFcns', {@plotfun1, @plotfun2, ...});

where @plotfun1, @plotfun2, and so on are command-line names of plot functions, which are in parentheses in the preceding list.
The input arguments to the function are

- options Structure containing all the current options settings
- state Structure containing information about the current generation. See The State Structure on page 6-18.
- flag String that tells what stage the algorithm is currently in

If you write a custom plot function, you can include additional input arguments after flag.
Population Options
Population options enable you to specify the parameters of the population that the genetic algorithm uses.
Population type (PopulationType) specifies the data type of the input to the fitness function. You can set Population type to be one of the following:
Double Vector ('doubleVector') Use this option if the individuals in the population have type double. This is the default.
Bit string ('bitstring') Use this option if the individuals in the population are bit strings. Custom ('custom') Use this option to create a population whose data type is neither of the preceding. If you use a custom population type, you must write your own creation, mutation, and crossover functions that accept inputs of that population type, and specify these functions in the following fields, respectively: - Creation function (CreationFcn) - Mutation function (MutationFcn) - Crossover function (CrossoverFcn)
Population size (PopulationSize) specifies how many individuals there are in each generation. With a large population size, the genetic algorithm searches the solution space more thoroughly, thereby reducing the chance that the algorithm will return a local minimum that is not a global minimum. However, a large population size also causes the algorithm to run more slowly.
If you set Population size to a vector, the genetic algorithm creates multiple subpopulations, the number of which is the length of the vector. The size of each subpopulation is the corresponding entry of the vector.
Creation function (CreationFcn) specifies the function that creates the initial
population for ga. You can choose from the following functions:

- Uniform (@gacreationuniform) creates a random initial population with a uniform distribution. This is the default.
- Custom enables you to write your own creation function, which must generate data of the type that you specify in Population type.
Initial population (InitialPopulation) specifies an initial population for the genetic algorithm. The default value is [], in which case ga uses the Creation function to create an initial population. If you enter a nonempty array in the Initial population field, the array must have Population size rows and Number of variables columns. In this case, the genetic algorithm does not call the Creation function.

Initial scores (InitialScores) specifies initial scores for the initial population.
Initial range (PopInitRange) specifies the range of the vectors in the initial population that is generated by the creation function. You can set Initial range to be a matrix with two rows and Number of variables columns, each column of which has the form [lb; ub], where lb is the lower bound and ub is the upper bound for the entries in that coordinate. If you specify Initial range to be a 2-by-1 vector, each entry is expanded to a constant row of length Number of variables.
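The 2-by-1 expansion rule can be sketched as follows (illustrative Python, hypothetical function name):

```python
def expand_initial_range(pop_init_range, n_vars):
    """Expand a 2-by-1 range [lb, ub] into a 2-by-n_vars range; pass a
    range that already has one column per variable through unchanged."""
    lb, ub = pop_init_range
    if not isinstance(lb, list):  # 2-by-1 vector: replicate per variable
        return [[lb] * n_vars, [ub] * n_vars]
    return [lb, ub]               # already 2-by-(Number of variables)
```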
Fitness Scaling Options

You can choose from the following scaling functions:

- Rank (@fitscalingrank) The default fitness scaling function, Rank, scales the raw scores based on the rank of each individual instead of its score. The rank of an individual is its position in the sorted scores. The rank of the most fit individual is 1, the next most fit is 2, and so on. Rank fitness scaling removes the effect of the spread of the raw scores.
- Proportional (@fitscalingprop) Proportional scaling makes the scaled value of an individual proportional to its raw fitness score.
- Top (@fitscalingtop) Top scaling scales the top individuals equally. Selecting Top displays an additional field, Quantity, which specifies the number of individuals that are assigned positive scaled values. Quantity can be an integer between 1 and the population size or a fraction between 0 and 1 specifying a fraction of the population size. The default value is 0.4. Each of the individuals that produce offspring is assigned an equal scaled value, while the rest are assigned the value 0. The scaled values have the form [0 1/n 1/n 0 0 1/n 0 0 1/n ...]. To override the default value for Quantity at the command line, use the following syntax:
options = gaoptimset('FitnessScalingFcn', {@fitscalingtop, quantity})
- Shift linear (@fitscalingshiftlinear) Shift linear scaling scales the raw scores so that the expectation of the fittest individual is equal to a constant multiplied by the average score. You specify the constant in the Max survival rate field, which is displayed when you select Shift linear. The default value is 2. To override the default value of Max survival rate at the command line, use the following syntax:
options = gaoptimset('FitnessScalingFcn', {@fitscalingshiftlinear, rate})
where rate is the Max survival rate.

- Custom enables you to write your own scaling function. If your function has no input arguments, enter it in the text box in the form @myfun. If your function does have input arguments, enter it as a cell array of the form {@myfun, P1, P2, ...}, where P1, P2, ... are the input arguments. Your scaling function must have the following calling syntax.
function expectation = myfun(scores, nParents, P1, P2, ... )
The input arguments to the function are - scores A vector of scalars, one for each member of the population - nParents The number of parents needed from this population - P1, P2, ... Additional input arguments, if any, that you want to pass to the function The function returns expectation, a row vector of scalars of the same length as scores, giving the scaled values of each member of the population. The sum of the entries of expectation must equal nParents.
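A custom scaling function must satisfy the contract just described: one expectation per score, with the entries summing to nParents. The Python sketch below illustrates a rank-based scaling in that spirit; the 1/rank weighting is an arbitrary illustrative choice, not the formula the built-in Rank option uses:

```python
def rank_scaling(scores, n_parents):
    """Scale raw scores by rank (lowest score = rank 1 = most fit for a
    minimization), normalized so the expectations sum to n_parents."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    raw = [0.0] * len(scores)
    for rank, i in enumerate(order, start=1):
        raw[i] = 1.0 / rank      # illustrative weighting, not the toolbox's
    total = sum(raw)
    return [n_parents * r / total for r in raw]
```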
Selection Options
Selection options specify how the genetic algorithm chooses parents for the next generation. You can specify the function the algorithm uses in the Selection function (SelectionFcn) field in the Selection options pane. The options are Stochastic uniform (@selectionstochunif) The default selection function, Stochastic uniform, lays out a line in which each parent corresponds to a section of the line of length proportional to its scaled value. The algorithm moves along the line in steps of equal size. At each step, the algorithm allocates a parent from the section it lands on. The first step is a uniform random number less than the step size.
Remainder (@selectionremainder) Remainder selection assigns parents deterministically from the integer part of each individuals scaled value and then uses roulette selection on the remaining fractional part. For example, if the scaled value of an individual is 2.3, that individual is listed twice as a parent because the integer part is 2. After parents have been assigned according to the integer parts of the scaled values, the rest of the parents are chosen stochastically. The probability that a parent is chosen in this step is proportional to the fractional part of its scaled value. Uniform (@selectionuniform) Uniform selection chooses parents using the expectations and number of parents. Uniform selection is useful for debugging and testing, but is not a very effective search strategy. Roulette (@selectionroulette) Roulette selection chooses parents by simulating a roulette wheel, in which the area of the section of the wheel corresponding to an individual is proportional to the individuals expectation. The algorithm uses a random number to select one of the sections with a probability equal to its area. Tournament (@selectiontournament) Tournament selection chooses each parent by choosing Tournament size players at random and then choosing the best individual out of that set to be a parent. Tournament size must be at least 2. The default value of Tournament size is 4. To override the default value of Tournament size at the command line, use the syntax
options = gaoptimset('SelectionFcn', {@selectiontournament, size})
where size is the Tournament size. Custom enables you to write your own selection function. If your function has no input arguments, enter it in the text box in the form @myfun. If your function does have input arguments, enter it as a cell array of the form {@myfun, P1, P2, ...}, where P1, P2, ... are the input arguments. Your selection function must have the following calling syntax:
function parents = myfun(expectation, nParents, options, P1, P2, ... )
The input arguments to the function are - expectation Expected number of children for each member of the population
- nParents Number of parents to select - options Genetic algorithm options structure - P1, P2,... Additional input arguments, if any, that you want to pass to the function The function returns parents, a row vector of length nParents containing the indices of the parents that you select
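Stochastic uniform selection can be sketched directly from its geometric description above: lay the expectations out on a line, then step along it in equal strides from a random start below the stride length. Illustrative Python, not toolbox code:

```python
import random

def stochastic_uniform(expectation, n_parents, rng=random.random):
    """Pick n_parents indices by stepping along the expectation 'line'."""
    stride = sum(expectation) / n_parents
    position = rng() * stride          # first step: uniform, below stride
    parents, cumulative, i = [], expectation[0], 0
    for _ in range(n_parents):
        while position > cumulative:   # walk to the section we land on
            i += 1
            cumulative += expectation[i]
        parents.append(i)
        position += stride
    return parents
```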
Reproduction Options
Reproduction options specify how the genetic algorithm creates children for the next generation.
Elite count (EliteCount) specifies the number of individuals that are guaranteed to survive to the next generation. Set Elite count to be a positive integer less than or equal to the population size. The default value is 2.
Crossover fraction (CrossoverFraction) specifies the fraction of the next generation, other than elite children, that are produced by crossover. Set Crossover fraction to be a fraction between 0 and 1, either by entering the fraction in the text box or moving the slider. The default value is 0.8.
Mutation Options
Mutation options specify how the genetic algorithm makes small random changes in the individuals in the population to create mutation children. Mutation provides genetic diversity and enables the genetic algorithm to search a broader space. You can specify the mutation function in the Mutation function (MutationFcn) field in the Mutation options pane. You can choose from the following functions: Gaussian (@mutationgaussian) The default mutation function, Gaussian, adds a random number taken from a Gaussian distribution with mean 0 to each entry of the parent vector. The variance of this distribution is determined by the parameters Scale and Shrink, which are displayed when you select Gaussian, and by the Initial range setting in the Population options. - The Scale parameter determines the variance at the first generation. If you set Initial range to be a 2-by-1 vector v, the initial variance is the
same at all coordinates of the parent vector, and is given by Scale*(v(2) - v(1)). If you set Initial range to be a vector v with two rows and Number of variables columns, the initial variance at coordinate i of the parent vector is given by Scale*(v(i,2) - v(i,1)). - The Shrink parameter controls how the variance shrinks as generations go by. If you set Initial range to be a 2-by-1 vector, the variance at the kth generation, var_k, is the same at all coordinates of the parent vector, and is given by the recursive formula

    var_k = var_(k-1) * (1 - Shrink * k/Generations)

If you set Initial range to be a vector with two rows and Number of variables columns, the variance at coordinate i of the parent vector at the kth generation, var_(i,k), is given by the recursive formula

    var_(i,k) = var_(i,k-1) * (1 - Shrink * k/Generations)

The default values of Scale and Shrink are 0.5 and 0.75, respectively. If you set Shrink to 1, the algorithm shrinks the variance in each coordinate linearly until it reaches 0 at the last generation. A negative value of Shrink causes the variance to grow. To override the default values of Scale and Shrink at the command line, use the syntax
options = gaoptimset('MutationFcn', {@mutationgaussian, scale, shrink})
Uniform (@mutationuniform) Uniform mutation is a two-step process. First, the algorithm selects a fraction of the vector entries of an individual for mutation, where each entry has a probability Rate of being mutated. The default value of Rate is 0.01. In the second step, the algorithm replaces each selected entry by a random number selected uniformly from the range for that entry. To override the default value of Rate at the command line, use the syntax
options = gaoptimset('MutationFcn', {@mutationuniform, rate})
Custom enables you to write your own mutation function. If your function has no input arguments, enter it in the text box in the form @myfun. If your function does have input arguments, enter it as a cell array of the form {@myfun, P1, P2, ...}, where P1, P2, ... are the input arguments. Your mutation function must have this calling syntax:
function mutationChildren = myfun(parents, options, nvars, FitnessFcn, state, thisScore, thisPopulation, P1, P2, ...)
The arguments to the function are
- parents Row vector of parents chosen by the selection function
- options Options structure
- nvars Number of variables
- FitnessFcn Fitness function
- state State structure. See The State Structure on page 6-18.
- thisScore Vector of scores of the current population
- thisPopulation Matrix of individuals in the current population
- P1, P2,... Additional input arguments, if any, that you want to pass to the function

The function returns mutationChildren, the mutated offspring, as a matrix whose rows correspond to the children. The number of columns of the matrix is Number of variables.
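The Gaussian variance schedule can be tabulated from the recursion given above (as reconstructed here: var_k = var_(k-1) * (1 - Shrink*k/Generations)). A Python sketch, with a check that Shrink = 1 drives the variance to 0 at the last generation, as the text states:

```python
def variance_schedule(initial_var, shrink, generations):
    """Evaluate the Gaussian-mutation variance recursion
    var_k = var_(k-1) * (1 - shrink * k / generations)."""
    variances = [initial_var]
    for k in range(1, generations + 1):
        variances.append(variances[-1] * (1 - shrink * k / generations))
    return variances

# With Shrink = 1 the variance reaches 0 at the last generation.
assert variance_schedule(0.5, 1.0, 4)[-1] == 0.0
```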
Crossover Options
Crossover options specify how the genetic algorithm combines two individuals, or parents, to form a crossover child for the next generation.
Crossover function (CrossoverFcn) specifies the function that performs the
crossover. You can choose from the following functions: Scattered (@crossoverscattered), the default crossover function, creates a random binary vector and selects the genes where the vector is a 1 from the first parent, and the genes where the vector is a 0 from the second parent, and combines the genes to form the child. For example, if p1 and p2 are the parents
p1 = [a b c d e f g h]
p2 = [1 2 3 4 5 6 7 8]
and the binary vector is [1 1 0 0 1 0 0 0], the function returns the following child:
child1 = [a b 3 4 e 6 7 8]
Single point (@crossoversinglepoint) chooses a random integer n between 1 and Number of variables and then - Selects vector entries numbered less than or equal to n from the first parent. - Selects vector entries numbered greater than n from the second parent. - Concatenates these entries to form a child vector. For example, if p1 and p2 are the parents
p1 = [a b c d e f g h] p2 = [1 2 3 4 5 6 7 8]
and the crossover point is 3, the function returns the following child.
child = [a b c 4 5 6 7 8]
Two point (@crossovertwopoint) selects two random integers m and n between 1 and Number of variables. The function selects - Vector entries numbered less than or equal to m from the first parent - Vector entries numbered from m+1 to n, inclusive, from the second parent - Vector entries numbered greater than n from the first parent The algorithm then concatenates these entries to form the child vector. For example, if p1 and p2 are the parents
p1 = [a b c d e f g h] p2 = [1 2 3 4 5 6 7 8]
and the crossover points are 3 and 6, the function returns the following child.
child = [a b c 4 5 6 g h]
Intermediate (@crossoverintermediate) creates children by taking a weighted average of the parents. You can specify the weights by a single parameter, Ratio, which can be a scalar or a row vector of length Number of variables. The default is a vector of all 1's. The function creates the child from parent1 and parent2 using the following formula.

child = parent1 + rand * Ratio * (parent2 - parent1)
If all the entries of Ratio lie in the range [0, 1], the children produced are within the hypercube defined by placing the parents at opposite vertices. If Ratio is not in that range, the children might lie outside the hypercube. If Ratio is a scalar, then all the children lie on the line between the parents. To override the default value of Ratio at the command line, use the syntax
options = gaoptimset('CrossoverFcn', {@crossoverintermediate, ratio});
Heuristic (@crossoverheuristic) returns a child that lies on the line containing the two parents, a small distance away from the parent with the better fitness value in the direction away from the parent with the worse fitness value. You can specify how far the child is from the better parent by the parameter Ratio, which appears when you select Heuristic. The default value of Ratio is 1.2. If parent1 and parent2 are the parents, and parent1 has the better fitness value, the function returns the child
child = parent2 + R * (parent1 - parent2);
To override the default value of Ratio at the command line, use the syntax
options=gaoptimset('CrossoverFcn',{@crossoverheuristic,ratio});
Custom enables you to write your own crossover function. If your function has no input arguments, enter it in the text box in the form @myfun. If your function does have input arguments, enter it as a cell array of the form {@myfun, P1, P2,...}, where P1, P2,... are the input arguments. Your crossover function must have this calling syntax.

xoverKids = myfun(parents, options, nvars, FitnessFcn, unused, thisPopulation)
The arguments to the function are
- parents Row vector of parents chosen by the selection function
- options Options structure
- nvars Number of variables
- FitnessFcn Fitness function
- unused Placeholder that is not used
- thisPopulation Matrix representing the current population. The number of rows of the matrix is Population size and the number of columns is Number of variables.
- P1, P2, ... Additional input arguments, if any, that you want to pass to the function

The function returns xoverKids, the crossover offspring, as a matrix whose rows correspond to the children. The number of columns of the matrix is Number of variables.
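The scattered and single-point behaviors described above are easy to reproduce. The Python sketch below (illustrative, hypothetical function names) regenerates the example children from the text:

```python
def scattered_crossover(p1, p2, mask):
    """Take each gene from p1 where the binary mask is 1, else from p2."""
    return [a if m == 1 else b for a, b, m in zip(p1, p2, mask)]

def single_point_crossover(p1, p2, n):
    """Take entries 1..n from p1 and the remaining entries from p2."""
    return list(p1[:n]) + list(p2[n:])

p1 = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
p2 = [1, 2, 3, 4, 5, 6, 7, 8]

# Matches the examples in the text:
assert scattered_crossover(p1, p2, [1, 1, 0, 0, 1, 0, 0, 0]) == ['a', 'b', 3, 4, 'e', 6, 7, 8]
assert single_point_crossover(p1, p2, 3) == ['a', 'b', 'c', 4, 5, 6, 7, 8]
```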
Migration Options
Migration options specify how individuals move between subpopulations. Migration occurs if you set Population size to be a vector of length greater than 1. When migration occurs, the best individuals from one subpopulation replace the worst individuals in another subpopulation. Individuals that migrate from one subpopulation to another are copied. They are not removed from the source subpopulation. You can control how migration occurs by the following three fields in the Migration options pane: Direction (MigrationDirection) Migration can take place in one or both directions. - If you set Direction to Forward ('forward'), migration takes place toward the last subpopulation. That is, the nth subpopulation migrates into the (n+1)th subpopulation. - If you set Direction to Both ('both'), the nth subpopulation migrates into both the (n-1)th and the (n+1)th subpopulations. Migration wraps at the ends of the subpopulations. That is, the last subpopulation migrates into the first, and the first may migrate into the last. To prevent wrapping, specify a subpopulation of size 0 by adding an entry of 0 at the end of the population size vector that you enter in Population size. Interval controls how many generations pass between migrations. For example, if you set Interval to 20, migration takes place every 20 generations. Fraction controls how many individuals move between subpopulations. Fraction specifies the fraction of the smaller of the two subpopulations that moves. For example, if individuals migrate from a subpopulation of 50
individuals into a subpopulation of 100 individuals and you set Fraction to 0.1, the number of individuals that migrate is 0.1 * 50 = 5.
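The Fraction arithmetic can be sketched in one line (illustrative Python, hypothetical function name):

```python
def migrants(source_size, dest_size, fraction):
    """Number of individuals that migrate: Fraction times the smaller of
    the two subpopulations."""
    return round(fraction * min(source_size, dest_size))

# The example in the text: 0.1 * min(50, 100) = 5 individuals migrate.
assert migrants(50, 100, 0.1) == 5
```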
Vectorize Option
The vectorize option specifies whether the computation of the fitness function is vectorized. When you set Fitness function is vectorized to Off, the genetic algorithm computes the fitness function values of the new generation in a loop. When you set Fitness function is vectorized to On, the algorithm computes the fitness function values of a new generation with one call to the fitness function, which is faster than computing the values in a loop. However, to use this option, your fitness function must be able to accept input matrices with an arbitrary number of rows.
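The loop-versus-vectorized distinction can be illustrated in Python (a sketch only; a vectorized MATLAB fitness function would instead operate on a matrix argument). The vectorized form makes one call for the whole population, so the fitness function must accept a matrix with one individual per row:

```python
def fitness_loop(fitness, population):
    """Vectorize off: one fitness call per individual."""
    return [fitness(ind) for ind in population]

def fitness_vectorized(fitness_matrix, population):
    """Vectorize on: a single call on the whole population matrix."""
    return fitness_matrix(population)

def sum_of_squares(ind):
    return sum(x * x for x in ind)

def sum_of_squares_matrix(pop):
    # Must return one score per row for an arbitrary number of rows.
    return [sum(x * x for x in row) for row in pop]
```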
You can identify an option in one of two ways:

- By its label, as it appears in the Pattern Search Tool
- By its field name in the options structure

For example:

- Poll method refers to the label of the option in the Pattern Search Tool.
- PollMethod refers to the corresponding field of the options structure.

The options are divided into the following categories:

- Plot Options on page 6-4
- Poll Options on page 6-20
- Search Options on page 6-22
- Mesh Options on page 6-24
- Cache Options on page 6-25
- Stopping Criteria on page 6-25
- Output Function Options on page 6-26
- Display to Command Window Options on page 6-26
- Vectorize Option on page 6-27
Plot Options
Plot functions plot output from the pattern search at each iteration. The following plots are available: Best function value (@psplotbestf) plots the best objective function value at each multiple of Interval iterations. Function count (@psplotfuncount) plots the number of function evaluations at each multiple of Interval iterations. Mesh size (@psplotmeshsize) plots the mesh size at each multiple of Interval iterations. Best point (@psplotbestx) plots the current best point. Custom enables you to use your own plot function.
Poll Options
Poll options control how the pattern search polls the mesh points at each iteration.
Poll method (PollMethod) specifies the pattern the algorithm uses to create
the mesh. There are two patterns:

The default pattern, Positive basis 2N, consists of the following 2N vectors, where N is the number of independent variables for the objective function.

    [1 0 0 ... 0]
    [0 1 0 ... 0]
    ...
    [0 0 0 ... 1]
    [-1 0 0 ... 0]
    [0 -1 0 ... 0]
    ...
    [0 0 0 ... -1]

For example, if the objective function has three independent variables, the pattern consists of the following six vectors.
    [1 0 0]   [0 1 0]   [0 0 1]   [-1 0 0]   [0 -1 0]   [0 0 -1]

The Positive basis NP1 pattern consists of the following N + 1 vectors.

    [1 0 0 ... 0]
    [0 1 0 ... 0]
    ...
    [0 0 0 ... 1]
    [-1 -1 -1 ... -1]

For example, if the objective function has three independent variables, the pattern consists of the following four vectors.

    [1 0 0]   [0 1 0]   [0 0 1]   [-1 -1 -1]
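Both patterns can be generated for any N. An illustrative Python sketch (hypothetical function names):

```python
def positive_basis_2n(n):
    """The 2N pattern: plus and minus each coordinate direction."""
    identity = [[1 if j == i else 0 for j in range(n)] for i in range(n)]
    return identity + [[-v for v in row] for row in identity]

def positive_basis_np1(n):
    """The N+1 pattern: each coordinate direction plus the all-(-1) vector."""
    identity = [[1 if j == i else 0 for j in range(n)] for i in range(n)]
    return identity + [[-1] * n]
```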
Complete poll (CompletePoll) specifies whether all the points in the current mesh must be polled at each iteration. Complete Poll can have the values On or Off.
If you set Complete poll to On, the algorithm polls all the points in the mesh at each iteration and chooses the point with the smallest objective function value as the current point at the next iteration.
If you set Complete poll to Off, the default value, the algorithm stops the poll as soon as it finds a point whose objective function value is less than that of the current point and chooses that point as the current point at the next iteration.
Polling order (PollingOrder) specifies the order in which the algorithm searches the points in the current mesh. The options are
Random The polling order is random. Success The first search direction at each iteration is the direction in which the algorithm found the best point at the previous iteration. After the first point, the algorithm polls the mesh points in the same order as Consecutive. Consecutive The algorithm polls the mesh points in consecutive order, that is, the order of the pattern vectors as described in Poll method.
Search Options
Search options specify an optional search that the algorithm can perform at each iteration prior to the polling. If the search returns a point that improves the objective function, the algorithm uses that point at the next iteration and omits the polling.
Complete search (CompleteSearch) only applies when you set Search method to Positive basis Np1, Positive basis 2N, or Latin hypercube. Complete search can have the values On or Off.
For Positive basis Np1 or Positive basis 2N, Complete search has the same meaning as the poll option Complete poll.
Search method (SearchMethod) specifies the method of the search. The
options are None ([]) specifies no search (the default). Positive basis Np1 ('PositiveBasisNp1') specifies a pattern search using the Positive Basis Np1 option for Poll method. Positive basis 2N ('PositiveBasis2N') specifies a pattern search using the Positive Basis 2N option for Poll method. Genetic Algorithm (@searchga) specifies a search using the genetic algorithm. If you select Genetic Algorithm, two other options appear:
6-22
  - Iteration limit: Positive integer specifying the number of iterations of the pattern search for which the genetic algorithm search is performed.
  - Options: Options structure for the genetic algorithm.
- Latin hypercube (@searchlhs) specifies a Latin hypercube search. The way the search is performed depends on the setting for Complete search:
  - If you set Complete search to On, the algorithm polls all the points that are randomly generated at each iteration by the Latin hypercube search and chooses the one with the smallest objective function value.
  - If you set Complete search to Off, the algorithm stops the poll as soon as it finds one of the randomly generated points whose objective function value is less than that of the current point, and chooses that point for the next iteration.
  If you select Latin hypercube, two other options appear:
  - Iteration limit: Positive integer specifying the number of iterations of the pattern search for which the Latin hypercube search is performed.
  - Design level: A positive integer specifying the design level. The number of points searched equals the Design level multiplied by the number of independent variables for the objective function.
- Nelder-Mead (@searchneldermead) specifies a search using fminsearch, which uses the Nelder-Mead algorithm. If you select Nelder-Mead, two other options appear:
  - Iteration limit: Positive integer specifying the number of iterations of the pattern search for which the Nelder-Mead search is performed.
  - Options: Options structure for the function fminsearch. You can create the options structure using optimset.
- Custom enables you to write your own search function.
Note If you set Search method to Genetic algorithm or Nelder-Mead, we recommend that you leave Iteration limit set to the default value 1, as performing these searches more than once is not likely to improve results.
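A sketch of enabling a search method from the command line, leaving Iteration limit at its default of 1 as the note recommends (the objective function and starting point are hypothetical):

```matlab
% Illustrative sketch: run a genetic algorithm search step before polling
options = psoptimset('SearchMethod', @searchga);
[x, fval] = patternsearch(@(x) sum(x.^2), [1 2], ...
    [], [], [], [], [], [], options);
```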
6-23
Mesh Options
Mesh options control the mesh that the pattern search uses. The following options are available.
Initial size (InitialMeshSize) specifies the size of the initial mesh, which is the length of the shortest vector from the initial point to a mesh point. Initial size should be a positive scalar. The default is 1.0.

Max size (MaxMeshSize) specifies a maximum size for the mesh. When the maximum size is reached, the mesh size does not increase after a successful iteration. Max size must be a positive scalar. The default value is Inf.

Accelerator (MeshAccelerator) specifies whether the Contraction factor is multiplied by 0.5 after each unsuccessful iteration. Accelerator can have the values On or Off, the default.

Rotate (MeshRotate) specifies whether the mesh vectors are flipped (multiplied by -1) when the mesh size is less than a small value. Rotate is applied only when Poll method is set to Positive basis Np1 and Rotate is set to On, the default.

Note Changing the setting of Rotate has no effect on the poll when Poll method is set to Positive basis 2N.

Scale (ScaleMesh) specifies whether the algorithm scales the mesh points by multiplying the pattern vectors by constants. Scale can have the values Off or On, the default.

Expansion factor (MeshExpansion) specifies the factor by which the mesh size is increased after a successful poll. The default value is 2.0, which means that the size of the mesh is multiplied by 2.0 after a successful poll. Expansion factor must be a positive scalar.

Contraction factor (MeshContraction) specifies the factor by which the mesh size is decreased after an unsuccessful poll. The default value is 0.5, which means that the size of the mesh is multiplied by 0.5 after an unsuccessful poll. Contraction factor must be a positive scalar.
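These mesh options can be set with psoptimset. As a sketch, the following writes out the default mesh behavior explicitly (doubling after successful polls, halving after unsuccessful ones):

```matlab
options = psoptimset('InitialMeshSize', 1.0, ...
                     'MeshExpansion',   2.0, ...  % after a successful poll
                     'MeshContraction', 0.5, ...  % after an unsuccessful poll
                     'MaxMeshSize',     Inf);
```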
6-24
Cache Options
The pattern search algorithm can keep a record of the points it has already polled so that it does not have to poll the same point more than once. If the objective function requires a relatively long time to compute, the cache option can speed up the algorithm. The memory allocated for recording the points is called the cache. Use the cache only with deterministic objective functions, not with stochastic ones.
Cache (Cache) specifies whether a cache is used. The options are On and Off, the default. When you set Cache to On, the algorithm does not evaluate the objective function at any mesh points that are within Tolerance of a point in the cache.

Tolerance (CacheTol) specifies how close a mesh point must be to a point in the cache for the algorithm to omit polling it. Tolerance must be a positive scalar. The default value is eps.

Size (CacheSize) specifies the size of the cache. Size must be a positive scalar.
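A sketch of turning on the cache from the command line (the CacheSize value here is purely illustrative):

```matlab
options = psoptimset('Cache',     'on', ...
                     'CacheTol',  eps, ...  % default tolerance
                     'CacheSize', 1e4);     % illustrative size
```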
Stopping Criteria
Stopping criteria determine what causes the pattern search algorithm to stop. Pattern search uses the following criteria:
Mesh tolerance (TolMesh) specifies the minimum tolerance for mesh size. The algorithm stops if the mesh size becomes smaller than Mesh tolerance. The default value is 1e-6.

Max iteration (MaxIteration) specifies the maximum number of iterations the algorithm performs. The algorithm stops if the number of iterations reaches Max iteration. You can select either:

- 100*numberofvariables: Maximum number of iterations is 100 times the number of independent variables (the default).
- Specify: A positive integer for the maximum number of iterations.

Max function evaluations (MaxFunEvals) specifies the maximum number of evaluations of the objective function. The algorithm stops if the number of function evaluations reaches Max function evaluations. You can select either:

- 2000*numberofvariables: Maximum number of function evaluations is 2000 times the number of independent variables (the default).
- Specify: A positive integer for the maximum number of function evaluations.
6-25
X tolerance (TolX) specifies the minimum distance between the current points at two consecutive iterations. The algorithm stops if the distance between two consecutive points is less than X tolerance. The default value is 1e-6.
Function tolerance (TolFun) specifies the minimum tolerance for the objective function. The algorithm stops when the value of the objective function at the current point is less than Function tolerance. The default value is 1e-6.
Level of display ('Display') specifies how much information is displayed at the command line while the pattern search is running. The available options are:

- Off ('off'): Only the final answer is displayed.
- Iterative ('iter'): Information is displayed for each iteration.
- Diagnose ('diagnose'): Information is displayed if the algorithm fails to converge.
6-26
- Final ('final'): The outcome of the pattern search (successful or unsuccessful), the reason for stopping, and the final point.

Both Iterative and Diagnose display the following information:

- Iter: Iteration number
- FunEval: Cumulative number of function evaluations
- MeshSize: Current mesh size
- FunVal: Objective function value of the current point
- Method: Outcome of the current poll

The default value of Level of display is Off in the Pattern Search Tool and 'final' in an options structure created using psoptimset.
Vectorize Option
The vectorize option specifies whether the computation of the objective function is vectorized. When you set Objective function is vectorized ('Vectorized') to Off ('off'), the algorithm computes the objective function values of the mesh points in a loop, calling the objective function with exactly one point each time through the loop. When you set Objective function is vectorized to On ('on'), the pattern search algorithm computes the objective function values of all mesh points with a single call to the objective function, which is faster than computing them in a loop. To use this option, however, your objective function must be able to accept input matrices with an arbitrary number of rows.
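As a sketch, an objective function written to support the vectorized option accepts a matrix whose rows are points and returns a column vector of values (the function name is hypothetical):

```matlab
% Vectorized objective: each row of x is one point
function f = vecobjective(x)          % hypothetical function name
f = x(:,1).^2 + x(:,2).^2;            % operates on all rows at once

% Enable with:
%   options = psoptimset('Vectorized', 'on');
```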
6-27
6-28
ga
Purpose Syntax
Description
ga implements the genetic algorithm at the command line to minimize an objective function.
x = ga(fitnessfun, nvars) applies the genetic algorithm to an optimization problem, where fitnessfun is the objective function to minimize and nvars is the length of the solution vector x, the best individual found.

x = ga(fitnessfun, nvars, options) applies the genetic algorithm to an optimization problem, using the parameters in the options structure.

x = ga(problem) finds the minimum for problem, a structure that has three fields:

- fitnessfcn: Fitness function
- nvars: Number of independent variables for the fitness function
- options: Options structure created with gaoptimset
[x, fval] = ga(...) returns fval, the value of the fitness function at x.

[x, fval, reason] = ga(...) returns reason, a string containing the reason the algorithm stopped.

[x, fval, reason, output] = ga(...) returns output, a structure that contains output from each generation and other information about the performance of the algorithm. The output structure contains the following fields:

- randstate: The state of rand, the MATLAB random number generator, just before the algorithm started.
6-29
- randnstate: The state of randn, the MATLAB normal random number generator, just before the algorithm started. You can use the values of randstate and randnstate to reproduce the output of ga. See Reproducing Your Results on page 4-25.
- generations: The number of generations computed
- funccount: The number of evaluations of the fitness function
- message: The reason the algorithm terminated. This message is the same as the output argument reason.
[x, fval, reason, output, population] = ga(...) returns matrix population, whose rows are the final population. [x, fval, reason, output, population, scores] = ga(...) returns scores, the scores of the final population.
Example
[x, fval, reason] = ga(@rastriginsFcn, 10)

x =
  Columns 1 through 7
    0.9977    0.9598    0.0085    0.0097   -0.0274   -0.0173    0.9650
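A sketch of a call that retrieves all the output arguments (the objective function shown is hypothetical):

```matlab
[x, fval, reason, output, population, scores] = ...
    ga(@(x) sum(x.^2), 4);
% output.generations, output.funccount, and output.message
% summarize the run; population and scores describe the
% final generation.
```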
See Also
gaoptimset, gatool
6-30
gaoptimget
val = gaoptimget(options, 'name') returns the value of the parameter name from the genetic algorithm options structure options. You need to type only enough leading characters of name to uniquely identify it. Case is ignored for parameter names.
See Also
6-31
gaoptimset
Purpose Syntax
Description
Options
The following table lists the options you can set with gaoptimset. See Genetic Algorithm Options on page 6-3 for a complete description of these options and their values. Values in {} denote the default value. You can also view the optimization parameters and defaults by typing gaoptimset at the command line.
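For example, a sketch of creating an options structure and passing it to ga (parameter values and the objective function are illustrative):

```matlab
options = gaoptimset('PopulationSize', 50, ...
                     'Generations',    150);
[x, fval] = ga(@(x) sum(x.^2), 6, options);
```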
6-32
CreationFcn: Handle to the function that creates the initial population. Values: {@gacreationuniform}

CrossoverFraction: The fraction of the population at the next generation, not including elite children, that is created by the crossover function.

CrossoverFcn: Handle to the function that the algorithm uses to create crossover children.

EliteCount: Positive integer specifying how many individuals in the current generation are guaranteed to survive to the next generation.

FitnessLimit: Scalar. If the fitness function attains the value of FitnessLimit, the algorithm halts. Values: Scalar | {-Inf}

FitnessScalingFcn: Handle to the function that scales the values of the fitness function.
6-33
Generations: Positive integer specifying the maximum number of iterations before the algorithm halts.

PopInitRange: Matrix or vector specifying the range of the individuals in the initial population.

PopulationType: String describing the data type of the population.

HybridFcn: Handle to a function that continues the optimization after ga terminates.

InitialPopulation: Initial population.

InitialScores: Initial scores.

MigrationDirection: Direction of migration.

MigrationFraction: Scalar between 0 and 1 specifying the fraction of individuals in each subpopulation that migrates to a different subpopulation. Values: Scalar | {0.2}
6-34
MigrationInterval: Positive integer specifying the number of generations that take place between migrations of individuals between subpopulations.

MutationFcn: Handle to the function that produces mutation children. Values: @mutationuniform | {@mutationgaussian}

OutputFcns: Array of handles to functions that ga calls at each iteration. Values: Array | {[]}

OutputInterval: Positive integer specifying the number of generations between consecutive calls to the output functions.

PlotFcns: Array of handles to functions that plot data computed by the algorithm. Values: @gaplotbestf | @gaplotbestgenome | @gaplotdistance | @gaplotexpectation | @gaplotgeneology | @gaplotselection | @gaplotrange | @gaplotscorediversity | @gaplotscores | @gaplotstopping | {[]}

PlotInterval: Positive integer specifying the number of generations between consecutive calls to the plot functions.
6-35
PopulationSize: Size of the population.

SelectionFcn: Handle to the function that selects parents of crossover and mutation children.

StallLimitG: Positive integer. The algorithm stops if there is no improvement in the objective function for StallLimitG consecutive generations.

StallLimitS: Positive scalar. The algorithm stops if there is no improvement in the objective function for StallLimitS seconds.

TimeLimit: Positive scalar. The algorithm stops after running for TimeLimit seconds.

Vectorized: String specifying whether the computation of the fitness function is vectorized. Values: 'on' | {'off'}
See Also
gaoptimget, gatool
6-36
gatool
6-37
You can use the Genetic Algorithm Tool to run the genetic algorithm on optimization problems and display the results. See Using the Genetic Algorithm Tool on page 2-4 for a complete description of the tool.
See Also
ga, gaoptimset
6-38
patternsearch
Purpose Syntax
Description
patternsearch finds the minimum of a function using a pattern search. x = patternsearch(@fun, x0) solves unconstrained problems of the form
minimize f(x) with respect to x

where fun is a MATLAB function that computes the values of the objective function f(x), and x0 is an initial point for the pattern search algorithm. The function patternsearch accepts the objective function as a function handle of the form @fun or as an inline function. patternsearch returns a local minimum x to the objective function. The function fun accepts a vector input and returns a scalar function value.
x = patternsearch(@fun, x0, A, b) finds a local minimum x to the function fun, subject to the linear inequality constraints represented in matrix form by
A*x <= b

If the problem has m linear inequality constraints and n variables, then A is a matrix of size m-by-n and b is a vector of length m.
x = patternsearch(@fun, x0, A, b, Aeq, beq) finds a local minimum x to the function fun, subject to the constraints
6-39
A*x <= b
Aeq*x = beq

where Aeq*x = beq represents the linear equality constraints in matrix form. If the problem has r linear equality constraints and n variables, then Aeq is a matrix of size r-by-n and beq is a vector of length r. If there are no inequality constraints, pass empty matrices, [], for A and b.
x = patternsearch(@fun, x0, A, b, Aeq, beq, lb, ub) finds a local minimum x to the function fun subject to the constraints
A*x <= b
Aeq*x = beq
lb <= x <= ub

where lb <= x <= ub represents lower and upper bounds on the variables. If the problem has n variables, lb and ub are vectors of length n. If lb or ub is empty (or not provided), it is automatically expanded to -Inf or Inf, respectively. If there are no inequality or equality constraints, pass empty matrices for A, b, Aeq, and beq.
x = patternsearch(@fun, x0, A, b, Aeq, beq, lb, ub, options) finds a local minimum x to the function fun, replacing the default optimization parameters by values in the structure options. You can create options with the function psoptimset. Pass empty matrices for A, b, Aeq, beq, lb, ub, and options to use the default values. x = patternsearch(problem) finds the minimum for problem, a structure that
has the following fields:

- objective: Objective function
- X0: Starting point
- Aineq: Matrix for the inequality constraints
- Bineq: Vector for the inequality constraints
- Aeq: Matrix for the equality constraints
- Beq: Vector for the equality constraints
6-40
- LB: Lower bound for x
- UB: Upper bound for x
- options: Options structure created with psoptimset
- randstate: Optional field to reset the state of rand
- randnstate: Optional field to reset the state of randn

You can create the structure problem by exporting a problem from the Pattern Search Tool, as described in Importing and Exporting Options and Problems on page 5-10.
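As a sketch, the problem structure can also be assembled manually; the objective function and starting point below are hypothetical, and empty fields fall back to the defaults:

```matlab
problem.objective = @(x) x(1)^2 + x(2)^2;  % hypothetical objective
problem.X0 = [1 1];
problem.Aineq = [];  problem.Bineq = [];
problem.Aeq = [];    problem.Beq = [];
problem.LB = [];     problem.UB = [];
problem.options = psoptimset;              % default options
x = patternsearch(problem);
```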
[x, fval] = patternsearch(@fun, x0, ...) returns the value of the objective function fun at the solution x. [x, fval, exitflag] = patternsearch(@fun, x0, ...) returns exitflag,
which describes the exit condition of patternsearch:

- If exitflag > 0, patternsearch converged to a solution x.
- If exitflag = 0, patternsearch reached the maximum number of function evaluations or iterations.
- If exitflag < 0, patternsearch did not converge to a solution.
[x, fval, exitflag, output] = patternsearch(@fun, x0, ...) returns a structure output containing information about the search. The output structure contains the following fields:
- function: Objective function
- problemtype: Type of problem: unconstrained, bound constrained, or linear constrained
- pollmethod: Polling method
- searchmethod: Search method used, if any
- iterations: Total number of iterations
- funccount: Total number of function evaluations
- meshsize: Mesh size at x
6-41
Example
Given the following constraints

   x1 +   x2 <= 2
  -x1 + 2*x2 <= 2
  2*x1 +   x2 <= 3
  0 <= x1, 0 <= x2

the following code finds the minimum of the function lincontest6, which is provided with the toolbox:
A = [1 1; -1 2; 2 1];
b = [2; 2; 3];
lb = zeros(2,1);
[x, fval, exitflag] = patternsearch(@lincontest6,...
    [0 0],A,b,[],[],lb)

Optimization terminated: Next Mesh size (9.5367e-007) less than 'TolMesh'.

x =
    0.6667    1.3333
fval = -8.2222
exitflag = 1
See Also
6-42
psearchtool
6-43
You can use the Pattern Search Tool to run a pattern search on optimization problems and display the results. See Using the Pattern Search Tool on page 3-3 for a complete description of the tool.
See Also
6-44
psoptimget
val = psoptimget(options, 'name') returns the value of the parameter name from the pattern search options structure options. You need to type only enough leading characters of name to uniquely identify it. Case is ignored for parameter names.
See Also
psoptimset, patternsearch
6-45
psoptimset
Purpose Syntax
Description
Options
The following table lists the options you can set with psoptimset. See Pattern Search Options on page 6-19 for a complete description of the options and their values. Values in {} denote the default value. You can also view the optimization parameters and defaults by typing psoptimset at the command line.
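For example, a sketch of creating an options structure and passing it to patternsearch (the option values, objective function, and starting point are illustrative):

```matlab
options = psoptimset('TolMesh', 1e-8, 'Display', 'iter');
[x, fval] = patternsearch(@(x) sum(x.^2), [1 2], ...
    [], [], [], [], [], [], options);
```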
6-46
Cache: With Cache set to 'on', patternsearch keeps a history of the mesh points it polls and does not poll points close to them again at subsequent iterations. Use this option if patternsearch runs slowly because it is taking a long time to compute the objective function. Values: 'on' | {'off'}

CacheSize: Size of the history.

CacheTol: Positive scalar specifying how close the current mesh point must be to a point in the history in order for patternsearch to avoid polling it.

CompletePoll: Complete poll around current iterate. Values: 'on' | {'off'}

CompleteSearch: Complete search around current iterate. Values: 'on' | {'off'}

Display: Level of display.

InitialMeshSize: Initial mesh size for pattern algorithm.
6-47
MaxFunEvals: Maximum number of objective function evaluations.

MaxIteration: Maximum number of iterations.

MaxMeshSize: Maximum mesh size.

MeshAccelerator: Accelerate convergence near a minimum.

MeshContraction: Mesh contraction factor. Used when iteration is unsuccessful.

MeshExpansion: Mesh expansion factor. Expands mesh when iteration is successful.

OutputFcn: Specifies a user-defined function that an optimization function calls at each iteration. Values: @psoutputhistory | {none}

PlotFcn: Specifies plots of output from the pattern search.

PlotInterval: Specifies the number of iterations between consecutive calls to the plot functions. Values: Positive integer

PollingOrder: Order of poll directions in pattern search.

PollMethod: Polling strategy used in pattern search.

ScaleMesh: Automatic scaling of variables.
6-48
SearchMethod: Type of search used in pattern search.

TolBind: Binding tolerance. Values: Positive scalar | {1e-3}

TolCon: Tolerance on constraints. Values: Positive scalar | {1e-6}

TolFun: Tolerance on function. Values: Positive scalar | {1e-6}

TolMesh: Tolerance on mesh size. Values: Positive scalar | {1e-6}

TolX: Tolerance on variable. Values: Positive scalar | {1e-6}

Vectorized: Specifies whether functions are vectorized. Values: 'on' | {'off'}
For a detailed description of these options, see Pattern Search Options on page 6-19.
See Also
patternsearch, psoptimget
6-49
6-50
Index
A
accelerator, mesh 5-33
algorithm
  genetic 2-18
  pattern search 3-13
G
ga 6-29
gaoptimget 6-31
gaoptimset 6-32
gatool 6-37
C
cache 5-35
children 2-17
  crossover 2-20
  elite 2-20
  mutation 2-20
crossover 4-39
  children 2-20
  fraction 4-42
D
direct search 3-2
diversity 2-16, 4-29
E
elite children 2-20
expansion, mesh 5-28
exporting problems
  from Genetic Algorithm Tool 4-13
  from Pattern Search Tool 5-11
generation 2-15
genetic algorithm
  description of 2-18
  options 6-3
  overview 2-2
  setting options at command line 4-22
  stopping criteria for 2-24
  using from command line 4-21
Genetic Algorithm Tool 4-2
  defining a problem in 4-3
  displaying plots 4-7
  exporting options and problems from 4-13
  importing problems to 4-16
  opening 4-2
  pausing and stopping 4-5
  running 4-4
  setting options in 4-11
global and local minima 4-47
H
hybrid function 4-53
F
fitness function 2-15
  vectorizing 4-55
  writing M-files for 1-5
fitness scaling 4-34
I
importing problems
  to Genetic Algorithm Tool 4-16
  to Pattern Search Tool 5-13
individual 2-15
initial population 2-19
I-1
M
maximizing 1-6
mesh 3-11
  accelerator 5-33
  expansion and contraction 5-28
M-files, writing 1-5
minima, global and local 4-47
minimizing 1-6
mutation 4-39
  options 4-40
patternsearch 6-39
plots
  genetic algorithm 4-7
  pattern search 5-8
poll 3-12
  complete 5-21
  method 5-19
population 2-15
  initial 2-19
  initial range 4-30
  options 4-30
  size 4-33
psearchtool 6-43
psoptimget 6-45
psoptimset 6-46
O
objective function
  writing M-file for 1-5
options
  genetic algorithm 6-3
P

parent 2-17
pattern 3-10
pattern search
  description 3-13
  options 6-19
  overview 3-2
  setting options at command line 5-16
  using from command line 5-14
Pattern Search Tool 5-2
  defining problem in 5-3
  displaying plots 5-8
  exporting options and problems from 5-11
  importing problems to 5-13
  opening 5-2
  pausing and stopping 5-8
  running 5-5
  setting options in 5-9

R

Rastrigin's function 2-6
reproduction 4-39
S
scaling, fitness 4-34
search method 5-25
selection 4-38
setting options
  genetic algorithm 4-29
  pattern search 5-19
stopping criteria
  pattern search 3-18
V
vectorizing 4-55
I-2