Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Factorial Mixture
In this notebook we show how to use TensorFlow Probability (TFP) to sample from a factorial Mixture of Gaussians distribution defined as:

$$p(x_1, \dots, x_d) = \prod_{i=1}^{d} p_i(x_i),$$

where:

$$p_i(x_i) = \sum_{k=1}^{K} \pi_{ik}\, \mathcal{N}(x_i \mid \mu_{ik}, \sigma_{ik}).$$
Each variable is modeled as a mixture of Gaussians, and the joint distribution over all variables is a product of these densities.
Given a dataset $X = \{x^{(n)}\}_{n=1}^{N}$, we model each datapoint $x^{(n)}$ as a factorial mixture of Gaussians:

$$p\left(x^{(n)}\right) = \prod_{i=1}^{d} p_i\left(x^{(n)}_i\right).$$
Factorial mixtures are a simple way of creating distributions with a small number of parameters and a large number of modes.
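To make the "few parameters, many modes" point concrete, here is a minimal NumPy sketch (the dimensions, component locations, and scales are illustrative assumptions, not values from this notebook). Each variable draws its own mixture component independently, so $d \cdot K$ location parameters yield up to $K^d$ joint modes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative setup: d = 2 variables, K = 3 equally weighted
# Gaussian components per variable.
d, K = 2, 3
locs = np.array([[-3.0, 0.0, 3.0],      # component means for x_1
                 [-3.0, 0.0, 3.0]])     # component means for x_2
scales = np.full((d, K), 0.5)

def sample(n):
    # Each variable is sampled independently from its own 1-D mixture:
    # pick a component index per variable, then draw a Gaussian sample.
    ks = rng.integers(K, size=(n, d))                  # component choices
    idx = np.arange(d)
    return rng.normal(locs[idx, ks], scales[idx, ks])  # shape (n, d)

x = sample(5)
# d * K = 6 location parameters produce up to K ** d = 9 joint modes.
print(x.shape, d * K, K ** d)
```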
Build the Factorial Mixture of Gaussians using TFP
Notice our use of `tfd.Independent`. This "meta-distribution" applies a `reduce_sum` in the `log_prob` calculation over the rightmost `reinterpreted_batch_ndims` batch dimensions. In our case, this sums out the variables dimension, leaving only the batch dimension when we compute `log_prob`. Note that this does not affect sampling.
Plot the Density
Compute the density on a grid of points, and show the locations of the modes with red stars. Each mode in the factorial mixture corresponds to a pair of modes from the underlying individual-variable mixtures of Gaussians. We can see 9 modes in the plot below, but we only needed 6 parameters (3 to specify the locations of the modes in $x_1$, and 3 to specify the locations of the modes in $x_2$). In contrast, a mixture of Gaussians distribution in the 2-d space would require 2 * 9 = 18 parameters to specify the 9 modes.
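The grid computation can be sketched in plain NumPy (the means, scale, and grid here are assumptions for illustration). Because the joint density factorizes, it is an outer product of the two 1-D mixture densities, and counting strict local maxima on the grid recovers the 3 * 3 = 9 modes:

```python
import numpy as np

# Assumed illustrative parameters: 3 well-separated components per
# variable, shared by x_1 and x_2.
means = np.array([-3.0, 0.0, 3.0])
scale = 0.5

def mixture_pdf(x):
    # Equally weighted 1-D mixture of Gaussians.
    z = (x[..., None] - means) / scale
    return np.exp(-0.5 * z**2).sum(-1) / (3 * scale * np.sqrt(2 * np.pi))

xs = np.linspace(-5.0, 5.0, 201)
# Factorial structure: the joint density on the grid is an outer
# product of the per-variable 1-D densities.
density = np.outer(mixture_pdf(xs), mixture_pdf(xs))

# Count interior grid points exceeding all 4 axis-aligned neighbours.
c = density[1:-1, 1:-1]
is_mode = ((c > density[:-2, 1:-1]) & (c > density[2:, 1:-1]) &
           (c > density[1:-1, :-2]) & (c > density[1:-1, 2:]))
print(is_mode.sum())  # 9 modes from 6 location parameters
```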