Path: blob/main/Lessons/Lesson 05 - Local Optimization/extras/Lesson_05_General_Supplement.ipynb
Lesson 5 Supplemental Materials
There are a few concepts in lesson 5 that routinely raise questions for students. Let's address some of them.
Dimensions/Dimensionality
The dimension of a problem is the number of decision variables it has. For continuous-variable problems (these are the f(x) type problems, where a decision variable can take on any real number) we talk about the number of dimensions as the number of variables (x, y, z, ...), but that's just a convention. We could use any letter.
Let's look at a quick example.
One Dimension
This is a 1-dimensional function - we only have 1 variable, x.
In case some of this notation is new to you, the | | means "the absolute value of" whatever is inside the pipe symbols. Mathematically we wouldn't need it, but numpy returns nan when you raise a negative number to a fractional exponent, and taking the absolute value first "fixes" that. Just go with it.
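The actual 1-D function is defined in a code cell that isn't reproduced in this text, so the formula below is only a stand-in with the same flavor: a fractional exponent wrapped in np.abs() so negative inputs don't produce nan.

```python
import numpy as np

# Hypothetical stand-in for the notebook's 1-D function (the real formula
# lives in a code cell not shown here).  np.abs() keeps the base of the
# fractional exponent non-negative, so numpy doesn't return nan for x < 0.
def oneD(x):
    return np.abs(x) ** (2 / 3) + np.sin(3 * x)

print(oneD(-1.5))
```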
Two Dimensions
This is a 2-dimensional problem, because we have 2 decision variables (x and y):
Note: we're doing all the same math with both x and y and adding them together. That's going to come in handy in a little bit here.
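The notebook's twoD definition isn't shown in this text either, so here's a hedged sketch of the pattern being described: pack both decision variables into one array, run the same math on each element, and sum the results into a single value.

```python
import numpy as np

# Hypothetical sketch of the 2-D objective described above: the same
# "math bits" applied to each element of the input array, then summed
# into one number (scipy.optimize wants the variables packed into a
# single array and a scalar returned).
def twoD(v):
    v = np.asarray(v)
    return np.sum(np.abs(v) ** (2 / 3) + np.sin(3 * v))

print(twoD([0.5, -1.0]))   # x = 0.5, y = -1.0
```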
We can easily plot our 1-dimensional function on a simple x, y graph.
Functions with more than 2 dimensions become difficult to display visually, but we can still display 2-dimensional functions. Here's what our 2-dimensional function looks like.
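A contour plot is one way to draw a 2-dimensional function. This sketch plots the hypothetical twoD formula from above with matplotlib; the original notebook may well use a different (possibly interactive) plotting library.

```python
import numpy as np
import matplotlib.pyplot as plt

# Evaluate the (hypothetical) 2-D formula on a grid and draw contours.
xs = np.linspace(-2, 2, 200)
ys = np.linspace(-2, 2, 200)
X, Y = np.meshgrid(xs, ys)
Z = (np.abs(X) ** (2 / 3) + np.sin(3 * X)) + (np.abs(Y) ** (2 / 3) + np.sin(3 * Y))

plt.contourf(X, Y, Z, levels=30)
plt.colorbar(label="f(x, y)")
plt.xlabel("x")
plt.ylabel("y")
plt.title("Hypothetical 2-D function")
plt.show()
```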
Bounds and 1-Dimensional Functions
Now let's find all the minima of our 1-dimensional function within the bounds of [-2, 2]. Remember that scipy.optimize will find the local minimum (or maximum) closest to the start point, so to find all of them, we'll have to pick start points that let the algorithm "fall into" the correct trough, or "climb" the correct hill. How many minima do you see? How many starting points will we need?
Note: the points at the bounds are neither minima nor maxima. We don't know where the function "goes" beyond the bounds, so we can't make any assumptions about them.
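One easy way to count the troughs is to plot the function over the bounds and eyeball it. This uses the stand-in formula from earlier, so your plot will look different from the one in the lesson.

```python
import numpy as np
import matplotlib.pyplot as plt

# Plot the (stand-in) 1-D function over the bounds to eyeball the troughs
# before picking start points.
xs = np.linspace(-2, 2, 400)
plt.plot(xs, np.abs(xs) ** (2 / 3) + np.sin(3 * xs))
plt.xlabel("x")
plt.ylabel("f(x)")
plt.show()
```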
Bounds with scipy minimize
To set bounds when you're calling scipy's minimize, you can use the 'TNC' method and the bounds parameter. For a 1-dimensional problem, you'll need a list (or array) containing a single (minimum, maximum) tuple.
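Here's a minimal sketch of that call, using the stand-in oneD formula from above. The bounds argument gets one (min, max) tuple per decision variable, so for 1 dimension it's a list with a single tuple.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in 1-D objective; np.sum collapses the length-1 array scipy
# passes in back down to a scalar.
def oneD(x):
    x = np.asarray(x)
    return np.sum(np.abs(x) ** (2 / 3) + np.sin(3 * x))

# One (min, max) tuple per decision variable -- just one for a 1-D problem.
result = minimize(oneD, x0=[1.0], method='TNC', bounds=[(-2, 2)])
print(result.x, result.fun)
```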
Note: when you set bounds, scipy sometimes gives you back an array instead of a single value. If you're ever getting errors with your string formatting, print out the result and double-check what scipy has given you.
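If you do hit that, pulling the values out explicitly before formatting them is a safe pattern. Continuing from the cell above (result is the standard scipy OptimizeResult):

```python
# result.x is always an array; result.fun can be a plain float or a
# size-1 array depending on how the objective was written.  .item()
# handles both without string-formatting surprises.
best_x = result.x[0]
best_f = np.asarray(result.fun).item()
print(f"minimum of {best_f:.4f} found at x = {best_x:.4f}")
```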
Bounds with Multi-dimensional Problems
To use bounds with multi-dimensional problems, we need to build a list (or array) with one (min, max) tuple for each dimension.
Let's get one of the minima of our 2-dimensional problem. (If you roll your mouse over that visual, you can get some idea of possible starting points, but we're just going to randomly generate one.)
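A sketch of that, again with the hypothetical twoD formula; the random start point and the [-2, 2] bounds on each dimension are assumptions carried over from the 1-D example.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 2-D objective (placeholder formula).
def twoD(v):
    v = np.asarray(v)
    return np.sum(np.abs(v) ** (2 / 3) + np.sin(3 * v))

# One (min, max) tuple per dimension -- list repetition builds it quickly.
bounds = [(-2, 2)] * 2

# Randomly generated starting point inside the bounds.
rng = np.random.default_rng()
x0 = rng.uniform(-2, 2, size=2)

result = minimize(twoD, x0, method='TNC', bounds=bounds)
print("started at", x0, "-> found", result.fun, "at", result.x)
```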
What's up with that function?
Did you look at that function and wonder what the heck was happening there? Let's break it down. Remember when I said that our 2-dimensional function did all the same "math" with both variables? Well, if we pass a numpy array into our twoD function, it will do the math for each variable. Let's see what that looks like with a simpler function. First we'll create a super simple function. It's just going to multiply whatever is passed in by 10.
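Something like this (the name times_ten is made up for this supplement):

```python
# A super simple function: multiply whatever gets passed in by 10.
def times_ten(value):
    return value * 10

print(times_ten(3))   # a plain number in, a plain number out: 30
```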
Note that if we pass in a number, it returns a number. We're familiar with that. All good.
What happens if we pass in a regular list?
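Continuing with the made-up times_ten from the cell above:

```python
# Multiplying a Python list doesn't multiply its elements...
print(times_ten([1, 2]))
# [1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2]
```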
Well, that didn't do what we wanted, did it? Multiplying a plain Python list repeats it, so we got our original list 10 times over - nothing actually got multiplied by 10. (Note: repeating a list like this is exactly how we got our bounds for the 2-D problem above. Cool!)
Now what happens if we pass in a numpy array?
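Same made-up function, numpy input this time:

```python
import numpy as np

# A numpy array multiplies element-wise.
print(times_ten(np.array([1, 2])))
# [10 20]
```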
Hey now - that's more like it. We multiplied each of our variables by 10.
We're still not quite where we want to be though, right? Remember our original function added our 2 results together. Easy peasy. We can just wrap the "math bits" with sum in our function.
Let's try it out.
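A sketch of that change, still with the made-up times_ten example:

```python
import numpy as np

# Same "math bits", but np.sum collapses the element-wise results into
# a single number -- the shape an objective function needs to return.
def times_ten_summed(value):
    return np.sum(np.asarray(value) * 10)

print(times_ten_summed(np.array([1, 2])))   # 10 + 20 = 30
```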
Voila! Our function is doing the exact same thing, just with more "math bits."
Multi-start Problems
A multi-start approach just means that we solve the problem multiple times from different starting locations. We did a hard-coded multi-start above when we found all of the minima of our 1-dimensional function.
When we look at our 2-dimensional graph, it's pretty hard to figure out where to start by hand, so we can code a multi-start routine that runs the optimizer multiple times from random locations. In your lesson, you did this to solve the Rastrigin problem, but you can do it for any problem. Let's find the lowest minimum we can for our 2-dimensional problem by using a multi-start process.
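Here's a minimal multi-start sketch under the same assumptions as before: the hypothetical twoD formula, bounds of [-2, 2] in each dimension, and 25 random restarts chosen arbitrarily.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 2-D objective (placeholder formula).
def twoD(v):
    v = np.asarray(v)
    return np.sum(np.abs(v) ** (2 / 3) + np.sin(3 * v))

rng = np.random.default_rng(42)   # seeded so reruns match
bounds = [(-2, 2)] * 2

best = None
for _ in range(25):                       # 25 random restarts
    x0 = rng.uniform(-2, 2, size=2)       # random start inside the bounds
    res = minimize(twoD, x0, method='TNC', bounds=bounds)
    if best is None or res.fun < best.fun:
        best = res

print("best minimum found:", best.fun, "at", best.x)
```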