
GitHub Repository: AllenDowney/ModSimPy
Path: blob/master/examples/glucose.ipynb
Kernel: Python 3 (ipykernel)

The Glucose Minimal Model

Modeling and Simulation in Python

Copyright 2021 Allen Downey

License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International

# download modsim.py if necessary

from os.path import basename, exists

def download(url):
    filename = basename(url)
    if not exists(filename):
        from urllib.request import urlretrieve
        local, _ = urlretrieve(url, filename)
        print('Downloaded ' + local)

download('https://github.com/AllenDowney/ModSimPy/raw/master/modsim.py')
# import functions from modsim
from modsim import *

In the previous chapter we implemented the glucose minimal model using given parameters, but I didn't say where those parameters came from. This notebook solves the mystery.

We'll use a SciPy function called leastsq, which stands for "least squares"; that is, it finds the parameters that minimize the sum of squared differences between the results of the model and the data.

You can think of leastsq as optional material. We won't use it in the book itself, but it appears in a few of the case studies.

You can read more about leastsq in the SciPy documentation. It uses the Levenberg-Marquardt algorithm, which you can read about on Wikipedia.
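As a warm-up, here is a minimal sketch of how scipy.optimize.leastsq works on a toy problem. The straight-line data below is made up for illustration; it is not related to the glucose data.

```python
import numpy as np
from scipy.optimize import leastsq

# synthetic data generated from a known line, y = 2x + 1
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0

def residuals(params, x, y):
    # leastsq minimizes the sum of squares of this return value
    a, b = params
    return a * x + b - y

# start the search from a deliberately wrong guess
best, ier = leastsq(residuals, x0=[1.0, 0.0], args=(x, y))
```

Because the data here is noiseless, the search recovers the true slope and intercept almost exactly.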

The following cells download and read the data.

download('https://github.com/AllenDowney/ModSim/raw/main/data/' + 'glucose_insulin.csv')
data = pd.read_csv('glucose_insulin.csv', index_col='time');

We'll use make_system and slope_func as defined in Chapter 18.

download('https://github.com/AllenDowney/ModSimPy/raw/master/' + 'chap18.py')
from chap18 import make_system
from chap18 import slope_func

Computing errors

In this context, the "errors" are the differences between the results from the model and the data.

To compute the errors, I'll start again with the parameters we used in Chapter 18.

G0 = 270
k1 = 0.02
k2 = 0.02
k3 = 1.5e-05
params = G0, k1, k2, k3

make_system takes the parameters and actual data and returns a System object.

system = make_system(params, data)

Here's how we run the ODE solver.

results, details = run_solve_ivp(system, slope_func,
                                 t_eval=data.index)
details.message

Because we specify t_eval=data.index, the results are evaluated at the same time stamps as the data.
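To see what t_eval does in isolation, here's a small sketch that calls SciPy's solve_ivp directly (modsim's run_solve_ivp is a wrapper around it); the decay equation and times are made up for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# request output only at these times
t_eval = np.array([0.0, 2.0, 5.0, 8.0])

# solve dy/dt = -0.5 y on the interval [0, 8]
sol = solve_ivp(lambda t, y: -0.5 * y, t_span=(0, 8),
                y0=[1.0], t_eval=t_eval)

# sol.t comes back exactly at the requested times,
# regardless of the internal steps the solver takes
```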

results.tail()

We can plot the results like this.

data.glucose.plot(style='o', alpha=0.5, label='data')
results.G.plot(style='-', color='C0', label='model')

decorate(xlabel='Time (min)',
         ylabel='Concentration (mg/dL)')

During the first three time steps, the model does not fit the data. That's because it takes some time for the injected glucose to disperse.

We can compute the errors like this.

errors = results.G - data.glucose
errors.head()

The first few errors are substantially larger than the rest.

In the next section, we'll use leastsq to search for the parameters that minimize the sum of the squared errors.

Optimization

To use leastsq, we need an "error function" that takes a sequence of parameters and returns an array of errors.

Here's a function that does what we need.

def error_func(params, data):
    """Computes an array of errors to be minimized.

    params: sequence of parameters
    data: DataFrame of values to be matched

    returns: array of errors
    """
    print(params)

    # make a System with the given parameters
    system = make_system(params, data)

    # solve the ODE
    results, details = run_solve_ivp(system, slope_func,
                                     t_eval=data.index)

    # compute the difference between the model
    # results and actual data
    errors = results.G - data.glucose
    return errors.iloc[3:]

error_func uses the given parameters to make a System object, runs the simulation, and returns the errors.

But notice that it does not return all of the errors; rather, it uses iloc to select only the elements with index 3 or more. In other words, it omits the elements with index 0, 1, and 2. Note: You can read more about this use of iloc in the Pandas documentation.

Since we don't expect the model to fit the data in this regime, we'll leave it out.
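Here's a standalone sketch of that iloc behavior on a made-up Series: iloc selects by position, so iloc[3:] drops the first three elements no matter what the index labels are.

```python
import pandas as pd

# a Series whose index labels (think time stamps)
# differ from the integer positions
s = pd.Series([10.0, 20.0, 30.0, 40.0, 50.0],
              index=[0, 2, 4, 6, 8])

# positional slice: keeps the elements at positions 3 and 4,
# which carry index labels 6 and 8
trimmed = s.iloc[3:]
```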

We can call error_func like this:

error_func(params, data)

Now we're ready to call leastsq. As arguments, we pass error_func, the parameters where we want to start the search, and the data, which will be passed as an argument to error_func.

best_params, fit_details = leastsq(error_func, params, data)

Each time error_func is called, it prints the parameters, so we can get a sense of how leastsq works.

leastsq has two return values. The first is an array with the best parameters:

best_params

The second is an object with information about the results, including a success flag and a message.

fit_details.success
fit_details.mesg

Now that we have best_params, we can use it to make a System object and run it.

system2 = make_system(best_params, data)
results2, details = run_solve_ivp(system2, slope_func,
                                  t_eval=data.index)
details.message

Here are the results, along with the data.

data.glucose.plot(style='o', alpha=0.5, label='data')
results2.G.plot(style='-', color='C0', label='model')

decorate(xlabel='Time (min)',
         ylabel='Concentration (mg/dL)')

We can compute the errors like this.

errors2 = results2.G - data.glucose
errors2.head()

And the sum of the squared errors like this:

from numpy import sum

sum(errors2.iloc[3:]**2)

If things have gone according to plan, the sum of squared errors should be smaller now, compared to the parameters we started with.

sum(errors.iloc[3:]**2)

Interpreting parameters

So we found the parameters that best match the data. You might wonder why this is useful.

The parameters themselves don't mean very much, but we can use them to compute two quantities we care about:

  • "Glucose effectiveness", $E$, which is the tendency of elevated glucose to cause depletion of glucose.

  • "Insulin sensitivity", $S$, which is the ability of elevated blood insulin to enhance glucose effectiveness.

Glucose effectiveness is defined as the change in $dG/dt$ as we vary $G$:

$$E \equiv - \frac{\delta \dot{G}}{\delta G}$$

where $\dot{G}$ is shorthand for $dG/dt$. Taking the derivative of $dG/dt$ with respect to $G$, we get

$$E = k_1 + X$$

The glucose effectiveness index, $S_G$, is the value of $E$ when blood insulin is near its basal level, $I_b$. In that case, $X$ approaches 0 and $E$ approaches $k_1$. So we can use the best-fit value of $k_1$ as an estimate of $S_G$.

Insulin sensitivity is defined as the change in $E$ as we vary $I$:

$$S \equiv \frac{\delta E}{\delta I}$$

The insulin sensitivity index, $S_I$, is the value of $S$ when $E$ and $I$ are at steady state:

$$S_I \equiv \frac{\delta E_{SS}}{\delta I_{SS}}$$

$E$ and $I$ are at steady state when $dG/dt$ and $dX/dt$ are 0, but we don't actually have to solve those equations to find $S_I$.

If we set $dX/dt = 0$ and solve for $X$, we find the relation:

$$X_{SS} = \frac{k_3}{k_2} I_{SS}$$

And since $E = k_1 + X$, we have:

$$S_I = \frac{\delta E_{SS}}{\delta I_{SS}} = \frac{\delta X_{SS}}{\delta I_{SS}}$$

Taking the derivative of $X_{SS}$ with respect to $I_{SS}$, we have:

$$S_I = k_3 / k_2$$

So if we find parameters that make the model fit the data, we can use $k_3 / k_2$ as an estimate of $S_I$.

For the parameters we found, here are the estimated values of $S_G$ and $S_I$:
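As a quick numeric check of that ratio, using the starting-guess values of $k_2$ and $k_3$ from the "Computing errors" section (not the best-fit values):

```python
# starting-guess parameters from earlier in the notebook
k2 = 0.02
k3 = 1.5e-05

# insulin sensitivity index is just the ratio k3 / k2
S_I = k3 / k2
```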

G0, k1, k2, k3 = best_params
indices = SimpleNamespace(S_G=k1,
                          S_I=k3/k2)
show(indices)

According to Boston et al., normal ranges for these values are as follows.

S_G_interval = 1.2e-3, 4.5e-2
S_G_interval

S_I_interval = 5.0e-5, 2.2e-3
S_I_interval

The estimated quantities are within the normal intervals.

Exercises

Exercise: How sensitive are the results to the starting guess for the parameters? If you try different values for the starting guess, do you get the same values for the best parameters?