Path: blob/master/deprecated/notebooks/gp_poisson_1d.ipynb
Kernel: Python 3 (ipykernel)
GP with a Poisson Likelihood
https://tinygp.readthedocs.io/en/latest/tutorials/likelihoods.html
We use the tinygp library to define the model, and the numpyro library to do inference, using either MCMC or SVI.
In [2]:
Out[2]:
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
/usr/local/lib/python3.7/dist-packages/jax/experimental/optimizers.py:30: FutureWarning: jax.experimental.optimizers is deprecated, import jax.example_libraries.optimizers instead
FutureWarning)
/usr/local/lib/python3.7/dist-packages/jax/experimental/stax.py:30: FutureWarning: jax.experimental.stax is deprecated, import jax.example_libraries.stax instead
FutureWarning)
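The setup cell's contents were not preserved in this export. A minimal sketch of a typical setup for this notebook follows; the exact imports and options in the original cell are assumptions (the warnings above indicate it ran on CPU under an older JAX that still had `jax.experimental.optimizers`).

```python
# Plausible setup cell (a sketch; the original was not preserved).
import numpy as np
import jax
import jax.numpy as jnp

# tinygp recommends double precision for numerically stable GP algebra.
jax.config.update("jax_enable_x64", True)
```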
Data
In [3]:
Out[3]:
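The data cell was not preserved here. Following the linked tinygp likelihoods tutorial, a plausible reconstruction draws Poisson counts from a smooth sinusoidal log-rate; treat the specific seed and rate function as illustrative.

```python
import numpy as np

# Synthetic 1-D count data: a smooth latent log-rate pushed through a
# Poisson observation model (values mirror the linked tinygp tutorial).
rng = np.random.default_rng(203618)
x = np.linspace(-3, 3, 20)
true_log_rate = 2 * np.cos(2 * x + 2)
y = rng.poisson(np.exp(true_log_rate))
```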
Markov chain Monte Carlo (MCMC)
We set up the model in numpyro and run MCMC. Note that the log_rate parameter doesn't have the obs=... argument set, since it is latent.
In [4]:
Out[4]:
/usr/local/lib/python3.7/dist-packages/numpyro/infer/mcmc.py:280: UserWarning: There are not enough devices to run parallel chains: expected 2 but got 1. Chains will be drawn sequentially. If you are running MCMC in CPU, consider using `numpyro.set_host_device_count(2)` at the beginning of your program. You can double-check how many devices are available in your system using `jax.local_device_count()`.
self.num_chains, local_device_count(), self.num_chains
We can summarize the MCMC results by plotting our inferred model (here we're showing the 1- and 2-sigma credible regions), and compare it to the known ground truth:
In [5]:
Out[5]:
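The plotting cell is also missing. A sketch of the described figure, computing 1- and 2-sigma credible bands from percentiles of the `log_rate` posterior draws; the placeholder samples below stand in for `samples["log_rate"]` from the MCMC run so the snippet is self-contained.

```python
import numpy as np
import matplotlib

matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

# Stand-ins for x, true_log_rate, and the MCMC draws of log_rate
# (shape: num_samples x num_points).
x = np.linspace(-3, 3, 20)
true_log_rate = 2 * np.cos(2 * x + 2)
log_rate_samples = true_log_rate + 0.3 * np.random.default_rng(0).normal(
    size=(400, x.size)
)

# Percentiles give the median plus 1- and 2-sigma credible bands.
q = np.percentile(log_rate_samples, [2.5, 16, 50, 84, 97.5], axis=0)
plt.fill_between(x, q[0], q[4], color="C0", alpha=0.2, label="2-sigma")
plt.fill_between(x, q[1], q[3], color="C0", alpha=0.4, label="1-sigma")
plt.plot(x, q[2], color="C0", label="inferred median")
plt.plot(x, true_log_rate, "--k", label="ground truth")
plt.xlabel("x")
plt.ylabel("log(rate)")
plt.legend(loc="upper right")
```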
Stochastic variational inference (SVI)
For larger datasets, it is faster to use stochastic variational inference (SVI) instead of MCMC.
In [6]:
As above, we can plot our inferred conditional model and compare it to the ground truth:
In [7]:
Out[7]: