GPy.inference.latent_function_inference package

Submodules

GPy.inference.latent_function_inference.dtc module

class DTC[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

An object for inference when the likelihood is Gaussian, but we want to do sparse inference.

The function self.inference returns a Posterior object, which summarizes the posterior.

NB. It’s not recommended to use this function! It’s here for historical purposes.

inference(kern, X, Z, likelihood, Y, mean_function=None, Y_metadata=None)[source]
class vDTC[source]

Bases: object

inference(kern, X, Z, likelihood, Y, mean_function=None, Y_metadata=None)[source]

GPy.inference.latent_function_inference.exact_gaussian_inference module

class ExactGaussianInference[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

An object for inference when the likelihood is Gaussian.

The function self.inference returns a Posterior object, which summarizes the posterior.

For efficiency, we sometimes work with the Cholesky factor of Y*Y.T. To avoid recomputing it repeatedly, we cache it.

LOO(kern, X, Y, likelihood, posterior, Y_metadata=None, K=None)[source]

Leave-one-out error, as described in “Bayesian leave-one-out cross-validation approximations for Gaussian latent variable models”, Vehtari et al., 2014.

inference(kern, X, likelihood, Y, mean_function=None, Y_metadata=None, K=None, precision=None, Z_tilde=None)[source]

Returns a Posterior class containing essential quantities of the posterior
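A minimal usage sketch (the data below are illustrative placeholders): GPRegression pairs a Gaussian likelihood with ExactGaussianInference by default, and the resulting Posterior is stored on the fitted model.

    import numpy as np
    import GPy

    # Toy data (placeholder); GPRegression with a Gaussian likelihood uses
    # ExactGaussianInference as its default inference method.
    X = np.random.rand(20, 1)
    Y = np.sin(6 * X) + 0.1 * np.random.randn(20, 1)

    m = GPy.models.GPRegression(X, Y, GPy.kern.RBF(input_dim=1))
    m.optimize()

    print(type(m.inference_method))   # expected: ExactGaussianInference
    mu, var = m.predict(np.linspace(0, 1, 50)[:, None])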

GPy.inference.latent_function_inference.expectation_propagation module

class EP(epsilon=1e-06, eta=1.0, delta=1.0, always_reset=False)[source]

Bases: GPy.inference.latent_function_inference.expectation_propagation.EPBase, GPy.inference.latent_function_inference.exact_gaussian_inference.ExactGaussianInference

expectation_propagation(K, Y, likelihood, Y_metadata)[source]
inference(kern, X, likelihood, Y, mean_function=None, Y_metadata=None, precision=None, K=None)[source]
class EPBase(epsilon=1e-06, eta=1.0, delta=1.0, always_reset=False)[source]

Bases: object

on_optimization_end()[source]
on_optimization_start()[source]
reset()[source]
class EPDTC(epsilon=1e-06, eta=1.0, delta=1.0, always_reset=False)[source]

Bases: GPy.inference.latent_function_inference.expectation_propagation.EPBase, GPy.inference.latent_function_inference.var_dtc.VarDTC

expectation_propagation(Kmm, Kmn, Y, likelihood, Y_metadata)[source]
inference(kern, X, Z, likelihood, Y, mean_function=None, Y_metadata=None, Lm=None, dL_dKmm=None, psi0=None, psi1=None, psi2=None)[source]
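A usage sketch for EP (placeholder data): GPClassification pairs a Bernoulli likelihood with EP by default; alternatively, an EP instance can be passed explicitly as the inference_method of a GPy.core.GP.

    import numpy as np
    import GPy

    # Toy binary classification data (placeholder).
    X = np.random.rand(30, 1)
    Y = (X > 0.5).astype(float)

    # GPClassification uses a Bernoulli likelihood with EP by default.
    m = GPy.models.GPClassification(X, Y, kernel=GPy.kern.RBF(1))
    m.optimize()
    p, _ = m.predict(X)   # predictive class probabilities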

GPy.inference.latent_function_inference.fitc module

class FITC[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

An object for inference when the likelihood is Gaussian, but we want to do sparse inference.

The function self.inference returns a Posterior object, which summarizes the posterior.

inference(kern, X, Z, likelihood, Y, mean_function=None, Y_metadata=None)[source]
const_jitter = 1e-06
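A sketch of using FITC, under the assumption that GPy.core.SparseGP accepts the inference method explicitly via its inference_method argument; data and inducing inputs are placeholders.

    import numpy as np
    import GPy
    from GPy.inference.latent_function_inference.fitc import FITC

    # Placeholder data and inducing inputs Z.
    X = np.random.rand(100, 1)
    Y = np.sin(6 * X) + 0.1 * np.random.randn(100, 1)
    Z = np.random.rand(10, 1)

    # Assumption: SparseGP takes the inference method as a keyword argument.
    m = GPy.core.SparseGP(X, Y, Z, GPy.kern.RBF(1),
                          GPy.likelihoods.Gaussian(),
                          inference_method=FITC())
    m.optimize()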

GPy.inference.latent_function_inference.inferenceX module

class InferenceX(model, Y, name='inferenceX', init='L2')[source]

Bases: GPy.core.model.Model

The model class for inference of new X given new observed Y (replacing “do_test_latent” in Bayesian GPLVM). It is a tiny inference model created from the original GP model. The kernel, likelihood (only Gaussian is supported at the moment) and posterior distribution are taken from the original model. For regression models and the GPLVM, a point estimate of the latent variable X is inferred; for the Bayesian GPLVM, the variational posterior of X is inferred. X is inferred through a gradient-based optimization of the inference model.

Parameters:
  • model (GPy.core.Model) – the GPy model used in inference
  • Y (numpy.ndarray) – the new observed data for inference
  • init ('L2', 'NCC' or 'rand') – the distance metric on Y used for initializing X from the nearest neighbour.
compute_dL()[source]
log_likelihood()[source]
parameters_changed()[source]
infer_newX(model, Y_new, optimize=True, init='L2')[source]

Infer the distribution of X for the new observed data Y_new.

Parameters:
  • model (GPy.core.Model) – the GPy model used in inference
  • Y_new (numpy.ndarray) – the new observed data for inference
  • optimize (boolean) – whether to optimize the location of new X (True by default)
Returns:

a tuple containing the estimated posterior distribution of X and the inference model that optimizes X

Return type:

(GPy.core.parameterization.variational.VariationalPosterior, GPy.core.Model)
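A sketch of inferring latent positions for held-out observations (placeholder data; a Bayesian GPLVM as the original model):

    import numpy as np
    import GPy
    from GPy.inference.latent_function_inference.inferenceX import infer_newX

    # Placeholder training data: 40 observations with 5 output dimensions.
    Y = np.random.randn(40, 5)
    m = GPy.models.BayesianGPLVM(Y, input_dim=2)
    m.optimize(max_iters=50)

    # Infer the (variational) posterior over X for 5 new observations.
    Y_new = np.random.randn(5, 5)
    X_new, inf_model = infer_newX(m, Y_new)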

GPy.inference.latent_function_inference.laplace module

class Laplace[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

LOO(kern, X, Y, likelihood, posterior, Y_metadata=None, K=None, f_hat=None, W=None, Ki_W_i=None)[source]

Leave-one-out log predictive density, as described in “Bayesian leave-one-out cross-validation approximations for Gaussian latent variable models”, Vehtari et al., 2014.

inference(kern, X, likelihood, Y, mean_function=None, Y_metadata=None)[source]

Returns a Posterior class containing essential quantities of the posterior

mode_computations(f_hat, Ki_f, K, Y, likelihood, kern, Y_metadata)[source]

At the mode, compute the Hessian and the effective covariance matrix.

Returns:
  • logZ – approximation to the marginal likelihood
  • woodbury_inv – variable required for calculating the approximation to the covariance matrix
  • dL_dthetaL – array of derivatives (1 x num_kernel_params)
  • dL_dthetaL – array of derivatives (1 x num_likelihood_params)
rasm_mode(K, Y, likelihood, Ki_f_init, Y_metadata=None, *args, **kwargs)[source]

Rasmussen’s numerically stable mode finding. For nomenclature see Rasmussen & Williams (2006). Influenced by the GPML (BSD) code; all errors are our own.

Parameters:
  • K (NxN matrix) – covariance matrix evaluated at the locations X
  • Y (np.ndarray) – The data
  • likelihood (a GPy.likelihood object) – the likelihood of the latent function value for the given data
  • Ki_f_init (np.ndarray) – the initial guess at the mode
  • Y_metadata (np.ndarray | None) – information about the data, e.g. which likelihood to take from a multi-likelihood object
Returns:

f_hat, the mode at which to make the Laplace approximation

Return type:

np.ndarray
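A usage sketch for Laplace (placeholder data): pair a non-Gaussian likelihood such as Student-t with a Laplace instance passed as the inference_method of GPy.core.GP.

    import numpy as np
    import GPy
    from GPy.inference.latent_function_inference.laplace import Laplace

    # Placeholder data with heavy-tailed noise.
    X = np.linspace(0, 1, 30)[:, None]
    Y = np.sin(6 * X) + 0.3 * np.random.standard_t(3, size=(30, 1))

    lik = GPy.likelihoods.StudentT(deg_free=3.0)
    m = GPy.core.GP(X, Y, kernel=GPy.kern.RBF(1), likelihood=lik,
                    inference_method=Laplace())
    m.optimize()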

class LaplaceBlock[source]

Bases: GPy.inference.latent_function_inference.laplace.Laplace

mode_computations(f_hat, Ki_f, K, Y, likelihood, kern, Y_metadata)[source]
rasm_mode(K, Y, likelihood, Ki_f_init, Y_metadata=None, *args, **kwargs)[source]
warning_on_one_line(message, category, filename, lineno, file=None, line=None)[source]

GPy.inference.latent_function_inference.posterior module

class Posterior(woodbury_chol=None, woodbury_vector=None, K=None, mean=None, cov=None, K_chol=None, woodbury_inv=None, prior_mean=0)[source]

Bases: object

An object to represent a Gaussian posterior over latent function values, p(f|D). This may be computed exactly for Gaussian likelihoods, or approximated for non-Gaussian likelihoods.

The purpose of this class is to serve as an interface between the inference schemes and the model classes. The model class can make predictions for the function at any new point x_* by integrating over this posterior.

K_chol

Cholesky of the prior covariance K

covariance

Posterior covariance: $$ K_{xx} - K_{xx} W_{xx}^{-1} K_{xx}, \qquad W_{xx} := \texttt{Woodbury inv} $$

mean

Posterior mean: $$ K_{xx} v, \qquad v := \texttt{Woodbury vector} $$

precision

Inverse of posterior covariance

woodbury_chol

Return $L_{W}$, the lower-triangular Cholesky decomposition of the Woodbury matrix: $$ L_{W} L_{W}^{\top} = W^{-1}, \qquad W^{-1} := \texttt{Woodbury inv} $$

woodbury_inv

The inverse of the Woodbury matrix. In the Gaussian likelihood case it is defined as $$ (K_{xx} + \Sigma_{xx})^{-1}, \qquad \Sigma_{xx} := \texttt{Likelihood.variance / Approximate likelihood covariance} $$

woodbury_vector

The Woodbury vector. In the Gaussian likelihood case it is defined as $$ (K_{xx} + \Sigma)^{-1} Y, \qquad \Sigma := \texttt{Likelihood.variance / Approximate likelihood covariance} $$

class PosteriorExact(woodbury_chol=None, woodbury_vector=None, K=None, mean=None, cov=None, K_chol=None, woodbury_inv=None, prior_mean=0)[source]

Bases: GPy.inference.latent_function_inference.posterior.Posterior
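A sketch of inspecting the Posterior object attached to a fitted model (placeholder data; attribute names as documented above):

    import numpy as np
    import GPy

    X = np.random.rand(25, 1)
    Y = np.sin(6 * X) + 0.05 * np.random.randn(25, 1)
    m = GPy.models.GPRegression(X, Y)
    m.optimize()

    post = m.posterior                 # a PosteriorExact for a Gaussian likelihood
    print(post.mean.shape)             # posterior mean at the training inputs
    print(post.covariance.shape)       # posterior covariance at the training inputs
    print(post.woodbury_vector.shape)  # v such that the posterior mean is K_{xx} v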

GPy.inference.latent_function_inference.svgp module

class SVGP[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

inference(q_u_mean, q_u_chol, kern, X, Z, likelihood, Y, mean_function=None, Y_metadata=None, KL_scale=1.0, batch_scale=1.0)[source]
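A usage sketch, assuming the GPy.core.SVGP model class (which uses this inference class internally); the data, inducing inputs and batch size below are placeholders.

    import numpy as np
    import GPy

    # Placeholder data, inducing inputs and minibatch size.
    X = np.random.rand(500, 1)
    Y = np.sin(6 * X) + 0.1 * np.random.randn(500, 1)
    Z = np.random.rand(20, 1)

    # Assumption: SVGP takes (X, Y, Z, kernel, likelihood) plus a batchsize.
    m = GPy.core.SVGP(X, Y, Z, GPy.kern.RBF(1),
                      GPy.likelihoods.Gaussian(), batchsize=50)
    m.optimize()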

GPy.inference.latent_function_inference.var_dtc module

class VarDTC(limit=1)[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

An object for inference when the likelihood is Gaussian, but we want to do sparse inference.

The function self.inference returns a Posterior object, which summarizes the posterior.

For efficiency, we sometimes work with the Cholesky factor of Y*Y.T. To avoid recomputing it repeatedly, we cache it.

get_VVTfactor(Y, prec)[source]
inference(kern, X, Z, likelihood, Y, Y_metadata=None, mean_function=None, precision=None, Lm=None, dL_dKmm=None, psi0=None, psi1=None, psi2=None, Z_tilde=None)[source]
set_limit(limit)[source]
const_jitter = 1e-08
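A usage sketch (placeholder data): SparseGPRegression uses VarDTC by default for sparse regression with a Gaussian likelihood.

    import numpy as np
    import GPy

    # Placeholder data; the inducing inputs are initialized by the model.
    X = np.random.rand(200, 1)
    Y = np.sin(6 * X) + 0.1 * np.random.randn(200, 1)

    m = GPy.models.SparseGPRegression(X, Y, num_inducing=15)
    m.optimize()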

GPy.inference.latent_function_inference.var_dtc_parallel module

class VarDTC_minibatch(batchsize=None, limit=3, mpi_comm=None)[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

An object for inference when the likelihood is Gaussian, but we want to do sparse inference.

The function self.inference returns a Posterior object, which summarizes the posterior.

For efficiency, we sometimes work with the Cholesky factor of Y*Y.T. To avoid recomputing it repeatedly, we cache it.

gatherPsiStat(kern, X, Z, Y, beta, uncertain_inputs)[source]
inference_likelihood(kern, X, Z, likelihood, Y)[source]

The first phase of inference: compute the log-likelihood and dL_dKmm.

Cached intermediate results: Kmm, KmmInv.

inference_minibatch(kern, X, Z, likelihood, Y)[source]

The second phase of inference: compute the derivatives over a minibatch of Y (dL_dpsi0, dL_dpsi1, dL_dpsi2, dL_dthetaL) and return a flag indicating whether the end of Y has been reached (isEnd).

set_limit(limit)[source]
const_jitter = 1e-08
update_gradients(model, mpi_comm=None)[source]
update_gradients_sparsegp(model, mpi_comm=None)[source]

GPy.inference.latent_function_inference.var_gauss module

class VarGauss(alpha, beta)[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

The Variational Gaussian Approximation revisited

@article{Opper:2009,
  title   = {The Variational Gaussian Approximation Revisited},
  author  = {Opper, Manfred and Archambeau, C{\'e}dric},
  journal = {Neural Comput.},
  year    = {2009},
  pages   = {786--792},
}

inference(kern, X, likelihood, Y, mean_function=None, Y_metadata=None, Z=None)[source]

Module contents

Inference over Gaussian process latent functions

In all our GP models, the consistency property means that we have a Gaussian prior over a finite set of points f. This prior is

$$ N(f \mid 0, K) $$

where K is the kernel matrix.

We also have a likelihood (see GPy.likelihoods) which defines how the data are related to the latent function: p(y | f). If the likelihood is also a Gaussian, the inference over f is tractable (see exact_gaussian_inference.py).
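For reference, in that Gaussian case with noise variance $\sigma^2$ the posterior over f at the training inputs is available in closed form: $$ p(f \mid y) = N\big(f \mid K(K + \sigma^2 I)^{-1} y,\; K - K(K + \sigma^2 I)^{-1} K\big), $$ which is exactly what the Woodbury quantities on the Posterior object encode (woodbury_vector $(K + \sigma^2 I)^{-1} y$ and woodbury_inv $(K + \sigma^2 I)^{-1}$).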

If the likelihood object is something other than Gaussian, then exact inference is not tractable. We then resort to a Laplace approximation (laplace.py) or expectation propagation (expectation_propagation.py).

The inference methods return a Posterior instance, a simple structure containing a summary of the posterior. The model classes can then use this posterior object for making predictions, optimizing hyper-parameters, etc.

class InferenceMethodList[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference, list

on_optimization_end()[source]
on_optimization_start()[source]
class LatentFunctionInference[source]

Bases: object

on_optimization_end()[source]

This function gets called just after the optimization loop has ended.

on_optimization_start()[source]

This function gets called just before the optimization loop starts.
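A minimal sketch of the extension point. The return convention of inference() — a Posterior, a log-marginal value and a gradient dictionary — is an assumption based on the classes above, not a contract documented here.

    from GPy.inference.latent_function_inference import LatentFunctionInference

    class MyInference(LatentFunctionInference):
        """Skeleton of a custom inference scheme (illustrative only)."""

        def on_optimization_start(self):
            # called just before the optimization loop starts
            pass

        def on_optimization_end(self):
            # called just after the optimization loop has ended
            pass

        def inference(self, kern, X, likelihood, Y, mean_function=None, Y_metadata=None):
            # Assumed convention: return (Posterior, log_marginal, gradient dict).
            raise NotImplementedError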