Brief introduction to PyMC3
[PyMC3](https://docs.pymc.io/) is a library that lets the user specify certain kinds of joint probability models using a Python API whose look and feel is similar to the standard way of presenting hierarchical Bayesian models. Once the (log) joint density is defined, it can be used for posterior inference with various algorithms, including Hamiltonian Monte Carlo (HMC) and automatic differentiation variational inference (ADVI). More details can be found on the PyMC3 web page, and in the book Bayesian Analysis with Python (2nd edn) by Osvaldo Martin.
Example: 1d Gaussian with unknown mean.
We use the simple example from the Pyro intro. The goal is to infer the weight of an object, given a noisy measurement of it. We assume the following model:

weight ~ Normal(guess, 1.0), measurement ~ Normal(weight, 0.75),

where the second argument of each Normal is the standard deviation, and guess is our initial guess of the weight.
By Bayes' rule for Gaussians, we know that the exact posterior, given a single observation measurement, is

weight | measurement ~ Normal(mu_post, sigma_post), where sigma_post^2 = (1/1.0^2 + 1/0.75^2)^(-1) and mu_post = sigma_post^2 * (guess/1.0^2 + measurement/0.75^2).
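The conjugate Gaussian update above can be computed in closed form. A minimal NumPy sketch, assuming the example values from the Pyro intro (guess = 8.5, measurement = 9.5, prior scale 1.0, noise scale 0.75):

```python
import numpy as np

# Example values (from the Pyro intro): prior weight ~ N(guess, sigma0^2),
# likelihood measurement ~ N(weight, sigma^2).
guess, sigma0 = 8.5, 1.0    # prior mean and standard deviation
meas, sigma = 9.5, 0.75     # observed measurement and noise standard deviation

# Precision-weighted conjugate update
post_var = 1.0 / (1.0 / sigma0**2 + 1.0 / sigma**2)
post_mean = post_var * (guess / sigma0**2 + meas / sigma**2)
post_std = np.sqrt(post_var)

print(post_mean, post_std)  # exact posterior mean and std
```

This gives the exact answer (posterior mean 9.14, posterior std 0.6) that the sampling-based methods below should approximately recover.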
MCMC inference
With PyMC3 version >= 3.9, the return_inferencedata=True kwarg makes the sample function return an arviz.InferenceData object instead of a MultiTrace.
Variational inference
We use automatic differentiation VI. Details can be found at https://docs.pymc.io/notebooks/variational_api_quickstart.html
PyMC3 libraries
There are various libraries that extend PyMC3, or use it in various ways; we list some of them below.
The arviz library can be used to visualize (and diagnose problems with) posterior samples drawn from many libraries, including PyMC3.
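A small sketch of what this looks like. Here we build an arviz.InferenceData directly from raw samples (a stand-in for what pm.sample(return_inferencedata=True) returns); the sample values 9.14 and 0.6 are the exact posterior moments from the example above:

```python
import numpy as np
import arviz as az

# Fake posterior draws with shape (chains, draws), standing in for MCMC output
rng = np.random.default_rng(0)
idata = az.from_dict(posterior={"weight": rng.normal(9.14, 0.6, size=(2, 1000))})

# Tabular diagnostics: mean, HDI, effective sample size, r_hat
print(az.summary(idata, var_names=["weight"]))

# Density plot of the marginal posterior
az.plot_posterior(idata, var_names=["weight"])
```

Because all of these functions consume InferenceData, the same calls work unchanged on the output of PyMC3 and of other libraries that emit this format.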
The bambi library lets the user specify linear models using a "formula syntax" similar to R's.
The pymc-learn library offers an sklearn-style API to specify models, but uses PyMC3 under the hood to compute posteriors over model parameters, instead of just point estimates.