Contributed Code

Nested Sampling

class NestedSampler(model, *, constructor_kwargs=None, termination_kwargs=None)[source]

Bases: object

(EXPERIMENTAL) A wrapper for jaxns, a nested sampling package based on JAX.

See reference [1] for details on the meaning of each parameter. Please consider citing this reference if you use the nested sampler in your research.

Note

To enumerate over a discrete latent variable, add the keyword argument infer={"enumerate": "parallel"} to the corresponding sample statement.
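
For instance, a mixture model might mark its discrete assignment site as follows (a minimal sketch; the model structure and site names are purely illustrative):

>>> def mixture_model(data):
...     probs = numpyro.sample("probs", dist.Dirichlet(jnp.ones(2)))
...     locs = numpyro.sample("locs", dist.Normal(0., 10.).expand([2]))
...     with numpyro.plate("data", data.shape[0]):
...         # discrete site marked for parallel enumeration
...         z = numpyro.sample("z", dist.Categorical(probs),
...                            infer={"enumerate": "parallel"})
...         numpyro.sample("obs", dist.Normal(locs[z], 1.), obs=data)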

Note

To improve performance, consider enabling x64 mode at the beginning of your NumPyro program with numpyro.enable_x64().

References

  1. JAXNS: a high-performance nested sampling package based on JAX, Joshua G. Albert (https://arxiv.org/abs/2012.15286)

Parameters
  • model (callable) – a callable with NumPyro primitives

  • constructor_kwargs (dict) – additional keyword arguments to construct an upstream jaxns.NestedSampler instance.

  • termination_kwargs (dict) – keyword arguments to terminate the sampler. Please refer to the upstream jaxns.NestedSampler.__call__() method.
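
For instance, both dictionaries can be supplied at construction time (a sketch only; the key names shown are assumptions that depend on the installed jaxns version, so check the upstream documentation):

>>> ns = NestedSampler(
...     model,
...     # forwarded to jaxns.NestedSampler; key name assumed
...     constructor_kwargs={"num_live_points": 1000},
...     # forwarded to jaxns.NestedSampler.__call__(); key name assumed
...     termination_kwargs={"live_evidence_frac": 1e-4})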

Example

>>> from jax import random
>>> import jax.numpy as jnp
>>> import numpyro
>>> import numpyro.distributions as dist
>>> from numpyro.contrib.nested_sampling import NestedSampler

>>> true_coefs = jnp.array([1., 2., 3.])
>>> data = random.normal(random.PRNGKey(0), (2000, 3))
>>> labels = dist.Bernoulli(logits=(true_coefs * data).sum(-1)).sample(random.PRNGKey(1))
>>>
>>> def model(data, labels):
...     coefs = numpyro.sample('coefs', dist.Normal(0, 1).expand([3]))
...     intercept = numpyro.sample('intercept', dist.Normal(0., 10.))
...     return numpyro.sample('y', dist.Bernoulli(logits=(coefs * data + intercept).sum(-1)),
...                           obs=labels)
>>>
>>> ns = NestedSampler(model)
>>> ns.run(random.PRNGKey(2), data, labels)
>>> samples = ns.get_samples(random.PRNGKey(3), num_samples=1000)
>>> assert jnp.mean(jnp.abs(samples['intercept'])) < 0.05
>>> print(jnp.mean(samples['coefs'], axis=0))  
[0.93661342 1.95034876 2.86123884]
run(rng_key, *args, **kwargs)[source]

Run the nested sampler and collect weighted samples.

Parameters
  • rng_key (random.PRNGKey) – Random number generator key to be used for the sampling.

  • args – The arguments needed by the model.

  • kwargs – The keyword arguments needed by the model.

get_samples(rng_key, num_samples)[source]

Draws posterior samples by resampling the weighted samples collected during the run.

Parameters
  • rng_key (random.PRNGKey) – Random number generator key to be used to draw samples.

  • num_samples (int) – The number of samples.

Returns

a dict of posterior samples

get_weighted_samples()[source]

Gets weighted samples and their corresponding log weights.

print_summary()[source]

Print summary of the result. This is a wrapper of jaxns.utils.summary().

diagnostics()[source]

Plot diagnostics of the result. This is a wrapper of jaxns.plotting.plot_diagnostics() and jaxns.plotting.plot_cornerplot().
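
Continuing the example above, the remaining methods are called on the same instance after run() (a sketch; the summary statistics and plots are produced by jaxns):

>>> ns.print_summary()
>>> weighted_samples, log_weights = ns.get_weighted_samples()
>>> ns.diagnostics()  # produces matplotlib figures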

Stein Variational Inference

Stein Variational Inference (SteinVI) is a family of VI techniques for approximate Bayesian inference based on Stein’s method (see [1] for an overview). It is gaining popularity as it combines the scalability of traditional VI with the flexibility of non-parametric particle-based methods.

Stein variational gradient descent (SVGD) [2] is a recent SteinVI technique that iteratively moves a set of particles \(\{z_i\}_{i=1}^N\) to approximate a distribution \(p(z)\). As a particle-based method, SVGD is well suited for capturing correlations between latent variables. The technique preserves the scalability of traditional VI approaches while offering the flexibility and modeling scope of methods such as Markov chain Monte Carlo (MCMC). SVGD is good at capturing multi-modality [3][4].
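
Concretely, the SVGD update from [2] moves each particle by a step \(z_i \leftarrow z_i + \epsilon \hat{\phi}^*(z_i)\) along the direction

\(\hat{\phi}^*(z) = \frac{1}{N}\sum_{j=1}^{N}\left[k(z_j, z)\nabla_{z_j}\log p(z_j) + \nabla_{z_j}k(z_j, z)\right],\)

where the first term pulls particles toward high-density regions of \(p\) and the second acts as a repulsive force that keeps the particles from collapsing onto a single mode.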

numpyro.contrib.einstein is a framework for particle-based inference using the ELBO-within-Stein algorithm. The framework works on Stein mixtures, a restricted mixture of guide programs parameterized by Stein particles. Similarly to SVGD, Stein mixtures approximate model posteriors by moving the Stein particles according to the Stein forces. Because the Stein particles parameterize a guide, each particle captures a neighborhood of the posterior rather than a single point. This property means Stein mixtures significantly reduce the number of particles needed to represent high-dimensional models.

numpyro.contrib.einstein mimics the interface of numpyro.infer.svi, so trying SteinVI requires minimal changes to the code of existing models inferred with SVI. For primary usage, see the Bayesian neural network example.
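
As an illustration of that interface, a minimal setup might look like the following (a sketch only: the guide, optimizer, kernel choice, and number of steps are assumptions, the model, data, and labels are reused from the nested-sampling example above, and the exact run() signature may differ between versions):

>>> from numpyro import optim
>>> from numpyro.infer import Trace_ELBO
>>> from numpyro.infer.autoguide import AutoNormal
>>> from numpyro.contrib.einstein import SteinVI
>>> from numpyro.contrib.einstein.kernels import RBFKernel
>>>
>>> guide = AutoNormal(model)
>>> stein = SteinVI(model, guide, optim.Adagrad(step_size=0.05),
...                 Trace_ELBO(), RBFKernel(), num_particles=10)
>>> result = stein.run(random.PRNGKey(0), 1000, data, labels)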

The framework currently supports several kernels, including:

  • RBFKernel

  • LinearKernel

  • RandomFeatureKernel

  • MixtureKernel

  • PrecondMatrixKernel

  • HessianPrecondMatrix

  • GraphicalKernel


References

1. Stein’s Method Meets Statistics: A Review of Some Recent Developments (2021) Andreas Anastasiou, Alessandro Barp, François-Xavier Briol, Bruno Ebner, Robert E. Gaunt, Fatemeh Ghaderinezhad, Jackson Gorham, Arthur Gretton, Christophe Ley, Qiang Liu, Lester Mackey, Chris. J. Oates, Gesine Reinert, Yvik Swan. https://arxiv.org/abs/2105.03481

2. Stein variational gradient descent: A general-purpose Bayesian inference algorithm (2016) Qiang Liu, Dilin Wang. NeurIPS

3. Nonlinear Stein Variational Gradient Descent for Learning Diversified Mixture Models (2019) Dilin Wang, Qiang Liu. PMLR

4. Stein Variational Message Passing for Continuous Graphical Models (2018) Dilin Wang, Zhe Zeng, Qiang Liu. PMLR

SteinVI Interface

class SteinVI(model, guide, optim, loss, kernel_fn: numpyro.contrib.einstein.kernels.SteinKernel, num_particles: int = 10, loss_temperature: float = 1.0, repulsion_temperature: float = 1.0, classic_guide_params_fn: Callable[[str], bool] = <function SteinVI.<lambda>>, enum=True, **static_kwargs)[source]

Stein variational inference for Stein mixtures.

Parameters
  • model – Python callable with NumPyro primitives for the model.

  • guide – Python callable with NumPyro primitives for the guide (recognition network).

  • optim – an instance of _NumPyroOptim.

  • loss – ELBO loss, i.e. negative Evidence Lower Bound, to minimize.

  • kernel_fn – Function that produces a logarithm of the statistical kernel to use with Stein inference

  • num_particles – number of particles for Stein inference. (More particles capture more of the posterior distribution)

  • loss_temperature – scaling of loss factor

  • repulsion_temperature – scaling of repulsive forces (Non-linear Stein)

  • enum – whether to apply automatic marginalization of discrete variables

  • classic_guide_params_fn – predicate on names of parameters in guide which should be optimized classically without Stein (e.g., parameters for large neural networks or other transformations)

  • static_kwargs – Static keyword arguments for the model / guide, i.e. arguments that remain constant during fitting.

SteinVI Kernels

class RBFKernel(mode='norm', matrix_mode='norm_diag', bandwidth_factor: Callable[[float], float] = <function RBFKernel.<lambda>>)[source]

Calculates the Gaussian RBF kernel function from [1], \(k(x,y) = \exp(-\frac{1}{h} \|x-y\|^2)\), where the bandwidth h is computed using the median heuristic \(h = \frac{1}{\log(n)} \text{med}(\|x-y\|)\).

References:

  1. Stein Variational Gradient Descent by Liu and Wang

Parameters
  • mode (str) – Either ‘norm’ (default) specifying to take the norm of each particle, ‘vector’ to return a component-wise kernel or ‘matrix’ to return a matrix-valued kernel

  • matrix_mode (str) – Either ‘norm_diag’ (default) for diagonal filled with the norm kernel or ‘vector_diag’ for diagonal of vector-valued kernel

  • bandwidth_factor – A multiplier to the bandwidth based on data size n (default 1/log(n))
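
For example, the default bandwidth factor can be replaced when constructing the kernel (a sketch; the constant 0.5 is arbitrary):

>>> from numpyro.contrib.einstein.kernels import RBFKernel
>>> kernel = RBFKernel(mode="norm",
...                    bandwidth_factor=lambda n: 0.5 / jnp.log(n))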

class LinearKernel(mode='norm')[source]

Calculates the linear kernel \(k(x,y) = x \cdot y + 1\) from [1].

References:

  1. Stein Variational Gradient Descent as Moment Matching by Liu and Wang

class RandomFeatureKernel(mode='norm', bandwidth_subset=None, bandwidth_factor: Callable[[float], float] = <function RandomFeatureKernel.<lambda>>)[source]

Calculates the random feature kernel \(k(x,y) = \frac{1}{m}\sum_{l=1}^{m}\phi(x,w_l)\phi(y,w_l)\) from [1].

References:

  1. Stein Variational Gradient Descent as Moment Matching by Liu and Wang

Parameters
  • bandwidth_subset – Number of particles used to calculate the bandwidth (default None, meaning all particles)

  • random_indices – The set of indices which to do random feature expansion on. (default None, meaning all indices)

  • bandwidth_factor – A multiplier to the bandwidth based on data size n (default 1/log(n))
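
For example, the bandwidth computation can be restricted to a subset of particles (a sketch; the value 100 is arbitrary):

>>> from numpyro.contrib.einstein.kernels import RandomFeatureKernel
>>> kernel = RandomFeatureKernel(bandwidth_subset=100)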

class MixtureKernel(ws: List[float], kernel_fns: List[numpyro.contrib.einstein.kernels.SteinKernel], mode='norm')[source]

Calculates a mixture of multiple kernels, \(k(x,y) = \sum_i w_i k_i(x,y)\).

References:

  1. Stein Variational Gradient Descent as Moment Matching by Liu and Wang

Parameters
  • ws – Weight of each kernel in the mixture

  • kernel_fns – Different kernel functions to mix together
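
For example, an RBF and a linear kernel can be mixed with fixed weights (a sketch; the weights are arbitrary, and both component kernels are left in their default 'norm' mode here):

>>> from numpyro.contrib.einstein.kernels import LinearKernel, MixtureKernel, RBFKernel
>>> kernel = MixtureKernel(ws=[0.7, 0.3],
...                        kernel_fns=[RBFKernel(), LinearKernel()])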

class PrecondMatrixKernel(precond_matrix_fn: numpyro.contrib.einstein.kernels.PrecondMatrix, inner_kernel_fn: numpyro.contrib.einstein.kernels.SteinKernel, precond_mode='anchor_points')[source]

Calculates the constant preconditioned kernel \(k(x,y) = Q^{-\frac{1}{2}}k(Q^{\frac{1}{2}}x, Q^{\frac{1}{2}}y)Q^{-\frac{1}{2}}\), or the anchor point preconditioned kernel \(k(x,y) = \sum_{l=1}^m k_{Q_l}(x,y)w_l(x)w_l(y)\), both from [1].

References:

  1. Stein Variational Gradient Descent with Matrix-Valued Kernels by Wang, Tang, Bajaj, and Liu

Parameters
  • precond_matrix_fn – The constant preconditioning matrix

  • inner_kernel_fn – The inner kernel function

  • precond_mode – How to use the precondition matrix, either constant (‘const’) or as mixture with anchor points (‘anchor_points’)
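
For example, the Hessian-based preconditioning matrix listed above can wrap a matrix-valued RBF kernel (a sketch; the mode and precond_mode choices are assumptions):

>>> from numpyro.contrib.einstein.kernels import (HessianPrecondMatrix,
...                                               PrecondMatrixKernel, RBFKernel)
>>> kernel = PrecondMatrixKernel(HessianPrecondMatrix(),
...                              RBFKernel(mode="matrix"),
...                              precond_mode="anchor_points")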

class GraphicalKernel(mode='matrix', local_kernel_fns: Optional[Dict[str, numpyro.contrib.einstein.kernels.SteinKernel]] = None, default_kernel_fn: numpyro.contrib.einstein.kernels.SteinKernel = <numpyro.contrib.einstein.kernels.RBFKernel object>)[source]

Calculates the graphical kernel \(k(x,y) = \mathrm{diag}(\{K_l(x_l,y_l)\})\) for local kernels \(K_l\) from [1][2].

References:

  1. Stein Variational Message Passing for Continuous Graphical Models by Wang, Zeng, and Liu

  2. Stein Variational Gradient Descent with Matrix-Valued Kernels by Wang, Tang, Bajaj, and Liu

Parameters
  • local_kernel_fns – A mapping between parameters and a choice of kernel function for that parameter (defaults to default_kernel_fn for each parameter)

  • default_kernel_fn – The default choice of kernel function when none is specified for a particular parameter
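
For example, per-parameter kernels can be combined with a default (a sketch; the site names 'coefs' and 'intercept' are borrowed from the logistic regression model above and purely illustrative):

>>> from numpyro.contrib.einstein.kernels import GraphicalKernel, LinearKernel, RBFKernel
>>> kernel = GraphicalKernel(local_kernel_fns={"coefs": RBFKernel(),
...                                            "intercept": LinearKernel()},
...                          default_kernel_fn=RBFKernel())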