The Glucose Minimal Model
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
In the previous chapter we implemented the glucose minimal model using given parameters, but I didn't say where those parameters came from. This notebook solves the mystery.
We'll use a SciPy function called leastsq, which stands for "least squares"; that is, it finds the parameters that minimize the sum of squared differences between the results of the model and the data.
You can consider this material optional. We won't use leastsq in the book itself, but it appears in a few of the case studies.
You can read more about leastsq in the SciPy documentation. It uses the Levenberg-Marquardt algorithm, which you can read about on Wikipedia.
The following cells download and read the data.
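Something like the following should work; the URL is an assumption based on the ModSimPy repository, and the download helper follows the pattern the book's notebooks use.

```python
from os.path import basename, exists

def download(url):
    """Download a file to the current directory, if we don't have it."""
    filename = basename(url)
    if not exists(filename):
        from urllib.request import urlretrieve
        local, _ = urlretrieve(url, filename)
        print('Downloaded ' + local)

download('https://github.com/AllenDowney/ModSimPy/raw/master/' +
         'data/glucose_insulin.csv')

from pandas import read_csv

# time in min; glucose in mg/dL; insulin in uU/mL
data = read_csv('glucose_insulin.csv', index_col='time')
```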
We'll use make_system and slope_func as defined in Chapter 18.
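For reference, here's a sketch of those two functions, based on Chapter 18; the details might differ slightly from the book's versions.

```python
from modsim import *

def make_system(params, data):
    """Make a System object from the parameters and the data."""
    G0, k1, k2, k3 = params
    t_0 = data.index[0]
    t_end = data.index[-1]
    # basal levels of glucose and insulin
    Gb = data.glucose[t_0]
    Ib = data.insulin[t_0]
    # interpolate insulin so we can evaluate it at any time
    I = interpolate(data.insulin)
    init = State(G=G0, X=0)
    return System(init=init, params=params,
                  Gb=Gb, Ib=Ib, I=I,
                  t_0=t_0, t_end=t_end)

def slope_func(t, state, system):
    """Compute the derivatives of G and X."""
    G, X = state
    G0, k1, k2, k3 = system.params
    I, Ib, Gb = system.I, system.Ib, system.Gb
    dGdt = -k1 * (G - Gb) - X * G
    dXdt = k3 * (I(t) - Ib) - k2 * X
    return dGdt, dXdt
```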
Computing errors
In this context, the "errors" are the differences between the results from the model and the data.
To compute the errors, I'll start again with the parameters we used in Chapter 18.
make_system takes the parameters and actual data and returns a System object.
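For example (the numbers here are starting values along the lines of Chapter 18; treat them as illustrative):

```python
params = Params(G0=290, k1=0.03, k2=0.02, k3=1e-05)
system = make_system(params, data)
```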
Here's how we run the ODE solver.
Because we specify t_eval=data.index, the results are evaluated at the same time stamps as the data.
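A sketch, using run_solve_ivp from the modsim library:

```python
results, details = run_solve_ivp(system, slope_func,
                                 t_eval=data.index)
details.message
```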
We can plot the results like this.
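For example, assuming decorate from the modsim library:

```python
data.glucose.plot(style='o', alpha=0.5, label='glucose data')
results.G.plot(color='C1', label='simulation')
decorate(xlabel='Time (min)',
         ylabel='Concentration (mg/dL)')
```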
During the first three time steps, the model does not fit the data. That's because it takes some time for the injected glucose to disperse.
We can compute the errors like this.
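Because the results were evaluated at the data's time stamps, we can subtract elementwise:

```python
errors = results.G - data.glucose
errors.head()
```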
The first few errors are substantially larger than the rest.
In the next section, we'll use leastsq to search for the parameters that minimize the sum of the squared errors.
Optimization
To use leastsq, we need an "error function" that takes a sequence of parameters and returns an array of errors.
Here's a function that does what we need.
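A sketch, built from the pieces above:

```python
def error_func(params, data):
    """Return the errors between the model and the data."""
    # print the parameters so we can watch the search
    print(params)
    system = make_system(params, data)
    results, details = run_solve_ivp(system, slope_func,
                                     t_eval=data.index)
    errors = results.G - data.glucose
    # leave out the first three elements, where the model
    # doesn't fit the data
    return errors.iloc[3:]
```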
error_func uses the given parameters to make a System object, runs the simulation, and returns the errors.
But notice that it does not return all of the errors; rather, it uses iloc to select only the elements with index 3 or more. In other words, it omits the elements with index 0, 1, and 2. Note: You can read more about this use of iloc in the Pandas documentation.
Since we don't expect the model to fit the data in this regime, we'll leave it out.
We can call error_func like this:
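With the starting parameters from above:

```python
error_func(params, data)
```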
Now we're ready to call leastsq. As arguments, we pass error_func, the parameters where we want to start the search, and the data, which will be passed as an argument to error_func.
Each time error_func is called, it prints the parameters, so we can get a sense of how leastsq works.
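Here's a sketch of the call, assuming the leastsq wrapper provided by the modsim library (it is based on scipy.optimize.leastsq and passes data along to error_func):

```python
best_params, details = leastsq(error_func, params, data)
```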
leastsq has two return values. The first is an array with the best parameters:
The second is an object with information about the results, including a success flag and a message.
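For example (the attribute names here are assumptions based on modsim's wrapper):

```python
details.success, details.mesg
```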
Now that we have best_params, we can use it to make a System object and run it.
Here are the results, along with the data.
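A sketch, reusing the plotting pattern from above:

```python
system2 = make_system(best_params, data)
results2, details = run_solve_ivp(system2, slope_func,
                                  t_eval=data.index)

data.glucose.plot(style='o', alpha=0.5, label='glucose data')
results2.G.plot(color='C1', label='simulation')
decorate(xlabel='Time (min)',
         ylabel='Concentration (mg/dL)')
```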
We can compute the errors like this.
And the sum of the squared errors like this:
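A sketch of both steps, again leaving out the first three elements:

```python
errors = results2.G - data.glucose
(errors.iloc[3:]**2).sum()
```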
If things have gone according to plan, the sum of squared errors should be smaller now than it was with the parameters we started with.
Interpreting parameters
So we found the parameters that best match the data. You might wonder why this is useful.
The parameters themselves don't mean very much, but we can use them to compute two quantities we care about:
"Glucose effectiveness", , which is the tendency of elevated glucose to cause depletion of glucose.
"Insulin sensitivity", , which is the ability of elevated blood insulin to enhance glucose effectiveness.
Glucose effectiveness is defined as the change in as we vary :
where is shorthand for . Taking the derivative of with respect to , we get
The glucose effectiveness index, , is the value of when blood insulin is near its basal level, . In that case, approaches 0 and approaches . So we can use the best-fit value of as an estimate of .
Insulin sensitivity is defined as the change in as we vary :
The insulin sensitivity index, , is the value of when and are at steady state:
and are at steady state when and are 0, but we don't actually have to solve those equations to find .
If we set and solve for , we find the relation:
And since , we have:
Taking the derivative of with respect to , we have:
So if we find parameters that make the model fit the data, we can use as an estimate of .
For the parameters we found, here are the estimated values of and :
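A sketch of that computation:

```python
G0, k1, k2, k3 = best_params
S_G = k1        # glucose effectiveness index
S_I = k3 / k2   # insulin sensitivity index
S_G, S_I
```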
According to Boston et al., normal ranges for these values are...
The estimated quantities are within the normal intervals.
Exercises
Exercise: How sensitive are the results to the starting guess for the parameters? If you try different values for the starting guess, do you get the same values for the best parameters?