GPy.inference.latent_function_inference package

Introduction

Certain GPy.models can be instantiated with an inference_method. This submodule contains the objects that can be assigned to inference_method.
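
As a minimal sketch (the GPy.core.GP constructor arguments below are assumed from GPy's public API; ExactGaussianInference is the default for a Gaussian likelihood anyway), an inference method can be passed to a model like this:

    import numpy as np
    import GPy

    # Toy 1-D regression data
    X = np.random.rand(20, 1)
    Y = np.sin(6 * X) + 0.1 * np.random.randn(20, 1)

    # Exact inference is the default for a Gaussian likelihood;
    # it is passed explicitly here only for illustration.
    inference = GPy.inference.latent_function_inference.ExactGaussianInference()
    m = GPy.core.GP(X, Y,
                    kernel=GPy.kern.RBF(1),
                    likelihood=GPy.likelihoods.Gaussian(),
                    inference_method=inference)
    m.optimize()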

Inference over Gaussian process latent functions

In all our GP models, the consistency property means that we have a Gaussian prior over a finite set of points f. This prior is:

\[N(f | 0, K)\]

where \(K\) is the kernel matrix.

We also have a likelihood (see GPy.likelihoods) which defines how the data are related to the latent function: \(p(y | f)\). If the likelihood is also a Gaussian, the inference over \(f\) is tractable (see GPy.inference.latent_function_inference.exact_gaussian_inference).

If the likelihood object is something other than Gaussian, then exact inference is not tractable. We then resort to a Laplace approximation (GPy.inference.latent_function_inference.laplace) or expectation propagation (GPy.inference.latent_function_inference.expectation_propagation).
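
For instance, a minimal classification sketch with a Bernoulli likelihood and the Laplace object (EP could be swapped in the same way; the model and likelihood names are assumed from GPy's public API):

    import numpy as np
    import GPy

    X = np.random.rand(30, 1)
    Y = (np.sin(8 * X) > 0).astype(float)  # binary labels in {0, 1}

    # Laplace approximation for the non-Gaussian (Bernoulli) likelihood
    laplace = GPy.inference.latent_function_inference.Laplace()
    m = GPy.core.GP(X, Y,
                    kernel=GPy.kern.RBF(1),
                    likelihood=GPy.likelihoods.Bernoulli(),
                    inference_method=laplace)
    m.optimize()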

The inference methods return a Posterior instance: a simple structure containing a summary of the posterior. The model classes can then use this posterior object for making predictions, optimizing hyper-parameters, and so on.
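
A rough sketch of inspecting that posterior on a fitted model (the m.posterior attribute name is assumed from GPy's GP implementation):

    import numpy as np
    import GPy

    X = np.random.rand(25, 1)
    Y = np.cos(4 * X) + 0.05 * np.random.randn(25, 1)
    m = GPy.models.GPRegression(X, Y)
    m.optimize()

    post = m.posterior               # Posterior instance produced by the inference method
    print(post.mean.shape)           # posterior mean at the training inputs
    print(post.covariance.shape)     # posterior covariance at the training inputs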

class InferenceMethodList[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference, list

on_optimization_end()[source]

This function gets called just after the optimization loop has ended.

on_optimization_start()[source]

This function gets called just before the optimization loop starts.

class LatentFunctionInference[source]

Bases: object

static from_dict(input_dict)[source]

Instantiate an object of a derived class using the information in input_dict (built by the to_dict method of the derived class). More specifically, after reading the derived class from input_dict, it calls the method _build_from_input_dict of the derived class. Note: this method should not be overridden in the derived class; if needed, override _build_from_input_dict instead.

Parameters: input_dict (dict) – Dictionary with all the information needed to instantiate the object.
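
A hedged round-trip sketch, assuming ExactGaussianInference (which implements to_dict) is re-exported by this package as listed below:

    from GPy.inference.latent_function_inference import (
        LatentFunctionInference, ExactGaussianInference)

    inf = ExactGaussianInference()
    d = inf.to_dict()                           # JSON-serializable dictionary
    inf2 = LatentFunctionInference.from_dict(d) # rebuilds the derived class
    print(type(inf2).__name__)                  # ExactGaussianInference
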
on_optimization_end()[source]

This function gets called just after the optimization loop has ended.

on_optimization_start()[source]

This function gets called just before the optimization loop starts.

to_dict()[source]

Submodules

GPy.inference.latent_function_inference.dtc module

class DTC[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

An object for inference when the likelihood is Gaussian, but we want to do sparse inference.

The function self.inference returns a Posterior object, which summarizes the posterior.

NB. It’s not recommended to use this function! It’s here for historical purposes.

inference(kern, X, Z, likelihood, Y, mean_function=None, Y_metadata=None)[source]
class vDTC[source]

Bases: object

inference(kern, X, Z, likelihood, Y, mean_function=None, Y_metadata=None)[source]

GPy.inference.latent_function_inference.exact_gaussian_inference module

class ExactGaussianInference[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

An object for inference when the likelihood is Gaussian.

The function self.inference returns a Posterior object, which summarizes the posterior.

For efficiency, we sometimes work with the Cholesky factor of Y*Y.T. To avoid repeatedly recomputing it, we cache it.

LOO(kern, X, Y, likelihood, posterior, Y_metadata=None, K=None)[source]

Leave one out error as found in “Bayesian leave-one-out cross-validation approximations for Gaussian latent variable models” Vehtari et al. 2014.

inference(kern, X, likelihood, Y, mean_function=None, Y_metadata=None, K=None, variance=None, Z_tilde=None)[source]

Returns a Posterior class containing essential quantities of the posterior
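
Calling the routine directly looks roughly like this; the three-element return (posterior, log marginal likelihood, gradient dictionary) is assumed from GPy's internals:

    import numpy as np
    import GPy
    from GPy.inference.latent_function_inference.exact_gaussian_inference import ExactGaussianInference

    X = np.linspace(0, 1, 10)[:, None]
    Y = np.sin(6 * X)

    kern = GPy.kern.RBF(1)
    lik = GPy.likelihoods.Gaussian(variance=0.01)

    # Direct call; models normally do this internally
    posterior, log_marginal, grad_dict = ExactGaussianInference().inference(kern, X, lik, Y)
    print(log_marginal)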

to_dict()[source]

Convert the object into a json serializable dictionary.

Note: It uses the private method _save_to_input_dict of the parent.

Return dict: JSON-serializable dictionary containing the needed information to instantiate the object

GPy.inference.latent_function_inference.exact_studentt_inference module

class ExactStudentTInference[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

An object for inference in Student-t processes (not for GPs with a Student-t likelihood!).

The function self.inference returns a StudentTPosterior object, which summarizes the posterior.

inference(kern, X, Y, nu, mean_function=None, K=None)[source]

GPy.inference.latent_function_inference.expectation_propagation module

class EP(epsilon=1e-06, eta=1.0, delta=1.0, always_reset=False, max_iters=inf, ep_mode='alternated', parallel_updates=False, loading=False)[source]

Bases: GPy.inference.latent_function_inference.expectation_propagation.EPBase, GPy.inference.latent_function_inference.exact_gaussian_inference.ExactGaussianInference

The expectation-propagation algorithm. For nomenclature see Rasmussen & Williams 2006.

Parameters:
  • epsilon (float) – convergence criterion: the maximum squared difference allowed between mean updates before stopping the iterations
  • eta (float) – parameter for fractional EP updates
  • delta (float) – damping factor for the EP updates
  • always_reset (bool) – if True, always reset the approximation at the beginning of every inference call
  • max_iters (int) – maximum number of EP iterations
  • ep_mode (str) – either "nested" (EP is run every time the hyperparameters change) or "alternated" (EP is run at the beginning, after which the hyperparameters are optimized)
  • parallel_updates (bool) – if True, the parameters of the sites are updated in parallel
  • loading (bool) – if True, prevents the EP parameters from changing; a hack used when loading a serialized model

expectation_propagation(mean_prior, K, Y, likelihood, Y_metadata)[source]
inference(kern, X, likelihood, Y, mean_function=None, Y_metadata=None, precision=None, K=None)[source]

Returns a Posterior class containing essential quantities of the posterior

to_dict()[source]

Convert the object into a json serializable dictionary.

Note: It uses the private method _save_to_input_dict of the parent.

Return dict: JSON-serializable dictionary containing the needed information to instantiate the object
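
A configuration sketch (the parameter values are purely illustrative):

    import numpy as np
    import GPy
    from GPy.inference.latent_function_inference.expectation_propagation import EP

    X = np.random.rand(40, 1)
    Y = (np.sin(8 * X) > 0).astype(float)

    # Fractional, damped EP, re-run whenever the hyperparameters change
    ep = EP(eta=0.7, delta=0.5, ep_mode='nested')

    m = GPy.core.GP(X, Y,
                    kernel=GPy.kern.RBF(1),
                    likelihood=GPy.likelihoods.Bernoulli(),
                    inference_method=ep)
    m.optimize()
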
class EPBase(epsilon=1e-06, eta=1.0, delta=1.0, always_reset=False, max_iters=inf, ep_mode='alternated', parallel_updates=False, loading=False)[source]

Bases: object

The expectation-propagation algorithm. For nomenclature see Rasmussen & Williams 2006.

Parameters:
  • epsilon (float) – convergence criterion: the maximum squared difference allowed between mean updates before stopping the iterations
  • eta (float) – parameter for fractional EP updates
  • delta (float) – damping factor for the EP updates
  • always_reset (bool) – if True, always reset the approximation at the beginning of every inference call
  • max_iters (int) – maximum number of EP iterations
  • ep_mode (str) – either "nested" (EP is run every time the hyperparameters change) or "alternated" (EP is run at the beginning, after which the hyperparameters are optimized)
  • parallel_updates (bool) – if True, the parameters of the sites are updated in parallel
  • loading (bool) – if True, prevents the EP parameters from changing; a hack used when loading a serialized model

on_optimization_end()[source]
on_optimization_start()[source]
reset()[source]
class EPDTC(epsilon=1e-06, eta=1.0, delta=1.0, always_reset=False, max_iters=inf, ep_mode='alternated', parallel_updates=False, loading=False)[source]

Bases: GPy.inference.latent_function_inference.expectation_propagation.EPBase, GPy.inference.latent_function_inference.var_dtc.VarDTC

The expectation-propagation algorithm. For nomenclature see Rasmussen & Williams 2006.

Parameters:
  • epsilon (float) – convergence criterion: the maximum squared difference allowed between mean updates before stopping the iterations
  • eta (float) – parameter for fractional EP updates
  • delta (float) – damping factor for the EP updates
  • always_reset (bool) – if True, always reset the approximation at the beginning of every inference call
  • max_iters (int) – maximum number of EP iterations
  • ep_mode (str) – either "nested" (EP is run every time the hyperparameters change) or "alternated" (EP is run at the beginning, after which the hyperparameters are optimized)
  • parallel_updates (bool) – if True, the parameters of the sites are updated in parallel
  • loading (bool) – if True, prevents the EP parameters from changing; a hack used when loading a serialized model

expectation_propagation(Kmm, Kmn, Y, likelihood, Y_metadata)[source]
inference(kern, X, Z, likelihood, Y, mean_function=None, Y_metadata=None, Lm=None, dL_dKmm=None, psi0=None, psi1=None, psi2=None)[source]
to_dict()[source]

Convert the object into a json serializable dictionary.

Note: It uses the private method _save_to_input_dict of the parent.

Return dict: JSON-serializable dictionary containing the needed information to instantiate the object
class cavityParams(num_data)[source]

Bases: object

static from_dict(input_dict)[source]
to_dict()[source]

Convert the object into a json serializable dictionary.

Note: It uses the private method _save_to_input_dict of the parent.

Return dict: JSON-serializable dictionary containing the needed information to instantiate the object
class gaussianApproximation(v, tau)[source]

Bases: object

static from_dict(input_dict)[source]
to_dict()[source]

Convert the object into a json serializable dictionary.

Note: It uses the private method _save_to_input_dict of the parent.

Return dict: JSON-serializable dictionary containing the needed information to instantiate the object
class marginalMoments(num_data)[source]

Bases: object

class posteriorParams(mu, Sigma, L=None)[source]

Bases: GPy.inference.latent_function_inference.expectation_propagation.posteriorParamsBase

static from_dict(input_dict)[source]
to_dict()[source]

Convert the object into a json serializable dictionary.

Note: It uses the private method _save_to_input_dict of the parent.

Return dict: JSON-serializable dictionary containing the needed information to instantiate the object
class posteriorParamsBase(mu, Sigma_diag)[source]

Bases: object

class posteriorParamsDTC(mu, Sigma_diag)[source]

Bases: GPy.inference.latent_function_inference.expectation_propagation.posteriorParamsBase

static from_dict(input_dict)[source]
to_dict()[source]

Convert the object into a json serializable dictionary.

Note: It uses the private method _save_to_input_dict of the parent.

Return dict: JSON-serializable dictionary containing the needed information to instantiate the object

GPy.inference.latent_function_inference.fitc module

class FITC[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

An object for inference when the likelihood is Gaussian, but we want to do sparse inference.

The function self.inference returns a Posterior object, which summarizes the posterior.

inference(kern, X, Z, likelihood, Y, mean_function=None, Y_metadata=None)[source]
const_jitter = 1e-06
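
A sketch of sparse regression with FITC; the GPy.core.SparseGP constructor arguments are assumed, not confirmed by this page:

    import numpy as np
    import GPy
    from GPy.inference.latent_function_inference.fitc import FITC

    X = np.random.rand(200, 1)
    Y = np.sin(10 * X) + 0.1 * np.random.randn(200, 1)
    Z = np.random.rand(15, 1)                      # inducing inputs

    m = GPy.core.SparseGP(X, Y, Z,
                          kernel=GPy.kern.RBF(1),
                          likelihood=GPy.likelihoods.Gaussian(),
                          inference_method=FITC())
    m.optimize()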

GPy.inference.latent_function_inference.gaussian_grid_inference module

class GaussianGridInference[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

An object for inference when the likelihood is Gaussian and inputs are on a grid.

The function self.inference returns a GridPosterior object, which summarizes the posterior.

inference(kern, X, likelihood, Y, Y_metadata=None)[source]

Returns a GridPosterior class containing essential quantities of the posterior

kron_mvprod(A, b)[source]
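
kron_mvprod computes the product of a Kronecker-structured matrix with a vector without forming the full Kronecker product. A minimal NumPy sketch of the standard algorithm (after Saatçi, 2011); GPy's own implementation may differ in detail:

    import numpy as np

    def kron_mvprod(As, b):
        # Compute (A_1 kron A_2 kron ... kron A_D) @ b by folding b and
        # multiplying one factor at a time (Saatci, 2011, Algorithm 16).
        x = np.asarray(b).ravel(order='F')
        for A in reversed(As):
            n = A.shape[0]
            X = x.reshape(n, -1, order='F')     # fold so A acts on one dimension
            x = (A @ X).T.ravel(order='F')      # multiply, transpose, re-flatten
        return x

    # Check against the explicit Kronecker product on a tiny example
    A1, A2 = np.random.rand(3, 3), np.random.rand(4, 4)
    b = np.random.rand(12)
    assert np.allclose(kron_mvprod([A1, A2], b), np.kron(A1, A2) @ b)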

GPy.inference.latent_function_inference.grid_posterior module

class GridPosterior(alpha_kron=None, QTs=None, Qs=None, V_kron=None)[source]

Bases: object

An object to represent a Gaussian posterior over latent function values, p(f|D), specifically intended for the grid regression case.

The purpose of this class is to serve as an interface between the inference schemes and the model classes.

alpha_kron :
QTs : transposes of the eigenvectors resulting from the decomposition of the single-dimension covariance matrices
Qs : eigenvectors resulting from the decomposition of the single-dimension covariance matrices
V_kron : Kronecker product of the eigenvalues resulting from the decomposition of the single-dimension covariance matrices

QTs

array of transposed eigenvectors resulting from the single-dimension covariance matrices

Qs

array of eigenvectors resulting from the single-dimension covariance matrices

V_kron

Kronecker product of the eigenvalues

alpha

GPy.inference.latent_function_inference.inferenceX module

class InferenceX(model, Y, name='inferenceX', init='L2')[source]

Bases: GPy.core.model.Model

The model class for inference of new X given new Y (replacing "do_test_latent" in Bayesian GPLVM). It is a tiny inference model created from the original GP model. The kernel, likelihood (only Gaussian is supported at the moment) and posterior distribution are taken from the original model. For regression models and the GPLVM, a point estimate of the latent variable X is inferred. For Bayesian GPLVM, the variational posterior of X is inferred. X is inferred through a gradient-based optimization of the inference model.

Parameters:
  • model (GPy.core.Model) – the GPy model used in inference
  • Y (numpy.ndarray) – the new observed data for inference
  • init ('L2', 'NCC' and 'rand') – the distance metric of Y for initializing X with the nearest neighbour.
compute_dL()[source]
log_likelihood()[source]
parameters_changed()[source]

This method gets called when parameters have changed. Another way of listening to parameter changes is to add self as a listener to the parameter, such that updates get passed through. See paramz.param.Observable.add_observer.

infer_newX(model, Y_new, optimize=True, init='L2')[source]

Infer the distribution of X for the new observed data Y_new.

Parameters:
  • model (GPy.core.Model) – the GPy model used in inference
  • Y_new (numpy.ndarray) – the new observed data for inference
  • optimize (boolean) – whether to optimize the location of new X (True by default)
Returns:

a tuple containing the estimated posterior distribution of X and the model that optimizes X

Return type:

(GPy.core.parameterization.variational.VariationalPosterior, GPy.core.Model)
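
A usage sketch, assuming a Bayesian GPLVM has been fitted beforehand (the model constructor arguments are assumed from GPy's public API):

    import numpy as np
    import GPy
    from GPy.inference.latent_function_inference.inferenceX import infer_newX

    Y = np.random.randn(50, 5)
    m = GPy.models.BayesianGPLVM(Y, input_dim=2)
    m.optimize()

    Y_new = np.random.randn(3, 5)
    X_new, inference_model = infer_newX(m, Y_new, optimize=True, init='L2')
    print(X_new)   # variational posterior (or point estimate) over the new X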

GPy.inference.latent_function_inference.laplace module

class Laplace[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

Laplace Approximation

Find the mode \(\hat{f}\) of the unnormalised posterior and the Hessian at this point (using Newton-Raphson).

LOO(kern, X, Y, likelihood, posterior, Y_metadata=None, K=None, f_hat=None, W=None, Ki_W_i=None)[source]

Leave one out log predictive density as found in “Bayesian leave-one-out cross-validation approximations for Gaussian latent variable models” Vehtari et al. 2014.

inference(kern, X, likelihood, Y, mean_function=None, Y_metadata=None)[source]

Returns a Posterior class containing essential quantities of the posterior

mode_computations(f_hat, Ki_f, K, Y, likelihood, kern, Y_metadata)[source]

At the mode, compute the Hessian and the effective covariance matrix.

Returns:
  • logZ – approximation to the marginal likelihood
  • woodbury_inv – quantity required for calculating the approximation to the covariance matrix
  • dL_dthetaL – array of derivatives (1 x num_kernel_params)
  • dL_dthetaL – array of derivatives (1 x num_likelihood_params)
rasm_mode(K, Y, likelihood, Ki_f_init, Y_metadata=None, *args, **kwargs)[source]

Rasmussen’s numerically stable mode finding. For nomenclature see Rasmussen & Williams 2006. Influenced by the GPML (BSD) code; all errors are our own.

Parameters:
  • K (NxN matrix) – Covariance matrix evaluated at locations X
  • Y (np.ndarray) – The data
  • likelihood (a GPy.likelihood object) – the likelihood of the latent function value for the given data
  • Ki_f_init (np.ndarray) – the initial guess at the mode
  • Y_metadata (np.ndarray | None) – information about the data, e.g. which likelihood to take from a multi-likelihood object
Returns:

f_hat, the mode at which to make the Laplace approximation

Return type:

np.ndarray

class LaplaceBlock[source]

Bases: GPy.inference.latent_function_inference.laplace.Laplace

Laplace Approximation

Find the mode \(\hat{f}\) of the unnormalised posterior and the Hessian at this point (using Newton-Raphson).

mode_computations(f_hat, Ki_f, K, Y, likelihood, kern, Y_metadata)[source]

At the mode, compute the Hessian and the effective covariance matrix.

Returns:
  • logZ – approximation to the marginal likelihood
  • woodbury_inv – quantity required for calculating the approximation to the covariance matrix
  • dL_dthetaL – array of derivatives (1 x num_kernel_params)
  • dL_dthetaL – array of derivatives (1 x num_likelihood_params)
rasm_mode(K, Y, likelihood, Ki_f_init, Y_metadata=None, *args, **kwargs)[source]

Rasmussen’s numerically stable mode finding. For nomenclature see Rasmussen & Williams 2006. Influenced by the GPML (BSD) code; all errors are our own.

Parameters:
  • K (NxN matrix) – Covariance matrix evaluated at locations X
  • Y (np.ndarray) – The data
  • likelihood (a GPy.likelihood object) – the likelihood of the latent function value for the given data
  • Ki_f_init (np.ndarray) – the initial guess at the mode
  • Y_metadata (np.ndarray | None) – information about the data, e.g. which likelihood to take from a multi-likelihood object
Returns:

f_hat, the mode at which to make the Laplace approximation

Return type:

np.ndarray

warning_on_one_line(message, category, filename, lineno, file=None, line=None)[source]

GPy.inference.latent_function_inference.pep module

class PEP(alpha)[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

Sparse Gaussian processes using Power Expectation Propagation for regression: \(\alpha \approx 0\) gives VarDTC and \(\alpha = 1\) gives FITC.

Reference: A Unifying Framework for Sparse Gaussian Process Approximation using Power Expectation Propagation, https://arxiv.org/abs/1605.07066

inference(kern, X, Z, likelihood, Y, mean_function=None, Y_metadata=None)[source]
const_jitter = 1e-06
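
A sketch showing how alpha interpolates between the two limits; as with FITC above, the GPy.core.SparseGP constructor arguments are assumed:

    import numpy as np
    import GPy
    from GPy.inference.latent_function_inference.pep import PEP

    X = np.random.rand(200, 1)
    Y = np.sin(10 * X) + 0.1 * np.random.randn(200, 1)
    Z = np.random.rand(15, 1)

    # alpha close to 0 behaves like VarDTC, alpha = 1 recovers FITC
    m = GPy.core.SparseGP(X, Y, Z,
                          kernel=GPy.kern.RBF(1),
                          likelihood=GPy.likelihoods.Gaussian(),
                          inference_method=PEP(alpha=0.5))
    m.optimize()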

GPy.inference.latent_function_inference.posterior module

class Posterior(woodbury_chol=None, woodbury_vector=None, K=None, mean=None, cov=None, K_chol=None, woodbury_inv=None, prior_mean=0)[source]

Bases: object

An object to represent a Gaussian posterior over latent function values, p(f|D). This may be computed exactly for Gaussian likelihoods, or approximated for non-Gaussian likelihoods.

The purpose of this class is to serve as an interface between the inference schemes and the model classes. The model class can make predictions for the function at any new point x_* by integrating over this posterior.

woodbury_chol : a lower triangular matrix L that satisfies posterior_covariance = K - K L^{-T} L^{-1} K
woodbury_vector : a matrix (or vector, as an Nx1 matrix) M that satisfies posterior_mean = K M
K : the prior covariance (required for lazy computation of various quantities)
mean : the posterior mean
cov : the posterior covariance

Not all of the above need to be supplied! You must supply:

K (for lazy computation) or K_chol (for lazy computation)

You may supply either:

woodbury_chol and woodbury_vector

Or:

mean and cov

Of course, you can supply more than that, but this class will lazily compute all other quantities on demand.
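
A minimal sketch of this lazy behaviour with hypothetical numbers (only K plus mean and cov are supplied; the Woodbury quantities are then derived on first access):

    import numpy as np
    from GPy.inference.latent_function_inference.posterior import Posterior

    K = np.eye(5)                 # prior covariance
    mean = np.zeros((5, 1))       # posterior mean
    cov = 0.5 * np.eye(5)         # posterior covariance

    post = Posterior(K=K, mean=mean, cov=cov)
    print(post.woodbury_vector.shape)   # computed lazily from K, mean and cov
    print(post.woodbury_inv.shape)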

covariance_between_points(kern, X, X1, X2)[source]

Computes the posterior covariance between points.

Parameters:
  • kern – GP kernel
  • X – current input observations
  • X1 – some input observations
  • X2 – other input observations
K_chol

Cholesky of the prior covariance K

covariance

Posterior covariance
\[ K_{xx} - K_{xx} W_{xx}^{-1} K_{xx}, \qquad W_{xx} := \texttt{Woodbury inv} \]

mean

Posterior mean
\[ K_{xx} v, \qquad v := \texttt{Woodbury vector} \]

precision

Inverse of posterior covariance

woodbury_chol

Return \(L_W\), the lower triangular Cholesky decomposition of the Woodbury matrix:
\[ L_W L_W^{\top} = W^{-1}, \qquad W^{-1} := \texttt{Woodbury inv} \]

woodbury_inv

The inverse of the Woodbury matrix. In the Gaussian likelihood case it is defined as
\[ (K_{xx} + \Sigma_{xx})^{-1}, \qquad \Sigma_{xx} := \texttt{Likelihood.variance / approximate likelihood covariance} \]

woodbury_vector

The Woodbury vector. In the Gaussian likelihood case only, it is defined as
\[ (K_{xx} + \Sigma)^{-1} Y, \qquad \Sigma := \texttt{Likelihood.variance / approximate likelihood covariance} \]

class PosteriorEP(woodbury_chol=None, woodbury_vector=None, K=None, mean=None, cov=None, K_chol=None, woodbury_inv=None, prior_mean=0)[source]

Bases: GPy.inference.latent_function_inference.posterior.Posterior

woodbury_chol : a lower triangular matrix L that satisfies posterior_covariance = K - K L^{-T} L^{-1} K
woodbury_vector : a matrix (or vector, as an Nx1 matrix) M that satisfies posterior_mean = K M
K : the prior covariance (required for lazy computation of various quantities)
mean : the posterior mean
cov : the posterior covariance

Not all of the above need to be supplied! You must supply:

K (for lazy computation) or K_chol (for lazy computation)

You may supply either:

woodbury_chol and woodbury_vector

Or:

mean and cov

Of course, you can supply more than that, but this class will lazily compute all other quantities on demand.

class PosteriorExact(woodbury_chol=None, woodbury_vector=None, K=None, mean=None, cov=None, K_chol=None, woodbury_inv=None, prior_mean=0)[source]

Bases: GPy.inference.latent_function_inference.posterior.Posterior

woodbury_chol : a lower triangular matrix L that satisfies posterior_covariance = K - K L^{-T} L^{-1} K
woodbury_vector : a matrix (or vector, as an Nx1 matrix) M that satisfies posterior_mean = K M
K : the prior covariance (required for lazy computation of various quantities)
mean : the posterior mean
cov : the posterior covariance

Not all of the above need to be supplied! You must supply:

K (for lazy computation) or K_chol (for lazy computation)

You may supply either:

woodbury_chol and woodbury_vector

Or:

mean and cov

Of course, you can supply more than that, but this class will lazily compute all other quantities on demand.

class StudentTPosterior(deg_free, **kwargs)[source]

Bases: GPy.inference.latent_function_inference.posterior.PosteriorExact

GPy.inference.latent_function_inference.svgp module

class SVGP[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

inference(q_u_mean, q_u_chol, kern, X, Z, likelihood, Y, mean_function=None, Y_metadata=None, KL_scale=1.0, batch_scale=1.0)[source]

GPy.inference.latent_function_inference.var_dtc module

class VarDTC(limit=1)[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

An object for inference when the likelihood is Gaussian, but we want to do sparse inference.

The function self.inference returns a Posterior object, which summarizes the posterior.

For efficiency, we sometimes work with the Cholesky factor of Y*Y.T. To avoid repeatedly recomputing it, we cache it.

get_VVTfactor(Y, prec)[source]
inference(kern, X, Z, likelihood, Y, Y_metadata=None, mean_function=None, precision=None, Lm=None, dL_dKmm=None, psi0=None, psi1=None, psi2=None, Z_tilde=None)[source]
set_limit(limit)[source]
const_jitter = 1e-08
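
A usage sketch; SparseGPRegression is assumed to use this variational DTC bound by default, so no inference_method is passed explicitly:

    import numpy as np
    import GPy

    X = np.random.rand(500, 1)
    Y = np.sin(10 * X) + 0.1 * np.random.randn(500, 1)

    # 20 inducing points; the variational DTC bound is used for inference
    m = GPy.models.SparseGPRegression(X, Y, kernel=GPy.kern.RBF(1), num_inducing=20)
    m.optimize()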

GPy.inference.latent_function_inference.var_dtc_parallel module

class VarDTC_minibatch(batchsize=None, limit=3, mpi_comm=None)[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

An object for inference when the likelihood is Gaussian, but we want to do sparse inference.

The function self.inference returns a Posterior object, which summarizes the posterior.

For efficiency, we sometimes work with the Cholesky factor of Y*Y.T. To avoid repeatedly recomputing it, we cache it.

gatherPsiStat(kern, X, Z, Y, beta, uncertain_inputs)[source]
inference_likelihood(kern, X, Z, likelihood, Y)[source]

The first phase of inference: compute the log-likelihood and dL_dKmm.

Cached intermediate results: Kmm, KmmInv.

inference_minibatch(kern, X, Z, likelihood, Y)[source]

The second phase of inference: computing the derivatives over a minibatch of Y. Compute: dL_dpsi0, dL_dpsi1, dL_dpsi2, dL_dthetaL. Returns a flag showing whether the end of Y has been reached (isEnd).

set_limit(limit)[source]
const_jitter = 1e-08
update_gradients(model, mpi_comm=None)[source]
update_gradients_sparsegp(model, mpi_comm=None)[source]

GPy.inference.latent_function_inference.var_gauss module

class VarGauss(alpha, beta)[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

The Variational Gaussian Approximation revisited

@article{Opper:2009,
  title   = {The Variational Gaussian Approximation Revisited},
  author  = {Opper, Manfred and Archambeau, C{\'e}dric},
  journal = {Neural Comput.},
  year    = {2009},
  pages   = {786--792},
}

Parameters:
  • alpha – GPy.core.Param, variational parameter
  • beta – GPy.core.Param, variational parameter
inference(kern, X, likelihood, Y, mean_function=None, Y_metadata=None, Z=None)[source]

GPy.inference.latent_function_inference.vardtc_md module

class VarDTC_MD[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

The VarDTC inference method for sparse GP with missing data (GPy.models.SparseGPRegressionMD)

gatherPsiStat(kern, X, Z, Y, beta, uncertain_inputs)[source]
inference(kern, X, Z, likelihood, Y, indexD, output_dim, Y_metadata=None, Lm=None, dL_dKmm=None, Kuu_sigma=None)[source]

The first phase of inference: compute the log-likelihood and dL_dKmm.

Cached intermediate results: Kmm, KmmInv.

const_jitter = 1e-06

GPy.inference.latent_function_inference.vardtc_svi_multiout module

class PosteriorMultioutput(LcInvMLrInvT, LcInvScLcInvT, LrInvSrLrInvT, Lr, Lc, kern_r, Xr, Zr)[source]

Bases: object

class VarDTC_SVI_Multiout[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

The VarDTC inference method for Multi-output GP regression (GPy.models.GPMultioutRegression)

gatherPsiStat(kern, X, Z, uncertain_inputs)[source]
get_YYTfactor(Y)[source]
get_trYYT(Y)[source]
inference(kern_r, kern_c, Xr, Xc, Zr, Zc, likelihood, Y, qU_mean, qU_var_r, qU_var_c)[source]

The SVI-VarDTC inference

const_jitter = 1e-06

GPy.inference.latent_function_inference.vardtc_svi_multiout_miss module

class VarDTC_SVI_Multiout_Miss[source]

Bases: GPy.inference.latent_function_inference.LatentFunctionInference

The VarDTC inference method for Multi-output GP regression with missing data (GPy.models.GPMultioutRegressionMD)

gatherPsiStat(kern, X, Z, uncertain_inputs)[source]
get_YYTfactor(Y)[source]
get_trYYT(Y)[source]
inference(kern_r, kern_c, Xr, Xc, Zr, Zc, likelihood, Y, qU_mean, qU_var_r, qU_var_c, indexD, output_dim)[source]

The SVI-VarDTC inference

inference_d(d, beta, Y, indexD, grad_dict, mid_res, uncertain_inputs_r, uncertain_inputs_c, Mr, Mc)[source]
const_jitter = 1e-06