GPy.core.parameterization package

Introduction

Extends the functionality of the paramz package (a dependency) to support parameterization with priors (GPy.core.parameterization.priors).

Inheritance diagram of GPy.core.parameterization.priors

Submodules

GPy.core.parameterization.param module

class Param(name, input_array, default_constraint=None, *a, **kw)[source]

Bases: paramz.param.Param, GPy.core.parameterization.priorizable.Priorizable

randomize(rand_gen=None, *args, **kwargs)

Randomize the model. Draws from the prior if one exists, otherwise draws from the given random generator.

Parameters:
  • rand_gen – numpy random number generator which takes args and kwargs
  • loc (float) – loc parameter for the random number generator
  • scale (float) – scale parameter for the random number generator
  • args, kwargs – passed through to the random number generator
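
As a minimal sketch (assuming an illustrative GPRegression model; the data here are random placeholders), randomize() redraws all free parameters, optionally via an explicit generator:

import numpy as np
import GPy

# illustrative model; X and Y are random placeholders
m = GPy.models.GPRegression(np.random.rand(10, 1), np.random.rand(10, 1))

# default: parameters with priors draw from them, the rest from a standard normal
m.randomize()

# explicit generator; loc and scale are forwarded to np.random.normal
m.randomize(rand_gen=np.random.normal, loc=0.0, scale=0.5)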

GPy.core.parameterization.parameterized module

class Parameterized(name=None, parameters=[])[source]

Bases: paramz.parameterized.Parameterized, GPy.core.parameterization.priorizable.Priorizable

Parameterized class

Say m is a handle to a parameterized class.

Printing parameters:

  • print(m): prints a nice summary over all parameters
  • print(m.name): prints details for the param with name 'name'
  • print(m[regexp]): prints details for all parameters matching regexp
  • print(m['']): prints details for all parameters

Fields:

  • Name: the name of the param; it can be renamed
  • Value: shape, or value if one-valued
  • Constraint: constraint of the param; curly brackets "{c}" indicate that some parameters are constrained by c. See the detailed print to get the exact constraints.
  • Tied_to: which parameter it is tied to

Getting and setting parameters:

Set all values in param to one:

m.name.to.param = 1

Handling of constraining, fixing and tying parameters:

You can constrain parameters by calling the constraint method on the param itself, e.g.:

  • m.name[:,1].constrain_positive()
  • m.name[0].tie_to(m.name[1])

Fixing parameters will fix them to their current value. If you change the parameter's value, the param will be fixed to the new value!

If you want to operate on all parameters, use m[''] to wildcard-select all parameters and concatenate them. Printing m[''] results in a detailed printout of all parameters. A usage sketch follows below.
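
A minimal sketch of this workflow, assuming an illustrative GPRegression model whose default RBF kernel is exposed under the name 'rbf':

import numpy as np
import GPy

m = GPy.models.GPRegression(np.random.rand(20, 1), np.random.rand(20, 1))

print(m)                             # nice summary over all parameters
print(m.rbf)                         # details for the param named 'rbf'
print(m[''])                         # wildcard: details for every parameter

m.rbf.lengthscale = 1.               # set all values in a param
m.rbf.variance.constrain_positive()  # constrain a param
m.rbf.lengthscale.fix()              # fix at the current value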

randomize(rand_gen=None, *args, **kwargs)

Randomize the model. Draws from the prior if one exists, otherwise draws from the given random generator.

Parameters:
  • rand_gen – numpy random number generator which takes args and kwargs
  • loc (float) – loc parameter for the random number generator
  • scale (float) – scale parameter for the random number generator
  • args, kwargs – passed through to the random number generator

GPy.core.parameterization.priorizable module

class Priorizable(name, default_prior=None, *a, **kw)[source]

Bases: paramz.core.parameter_core.Parameterizable

log_prior()[source]

Evaluate the log of the prior at the current parameter values.

set_prior(prior, warning=True)[source]

Set the prior for this object to prior.

Parameters:
  • prior (Prior) – a prior to set for this parameter
  • warning (bool) – whether to warn if another prior was already set for this parameter

unset_priors(*priors)[source]

Un-set all priors given (in *priors) from this parameter handle.
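
A minimal sketch of attaching and evaluating a prior, again assuming an illustrative GPRegression model with the default 'rbf' kernel:

import numpy as np
import GPy

m = GPy.models.GPRegression(np.random.rand(20, 1), np.random.rand(20, 1))

# attach a Gamma prior; warning=False silences the replaced-prior warning
m.rbf.lengthscale.set_prior(GPy.priors.Gamma(a=1., b=1.), warning=False)

print(m.log_prior())              # evaluate the log prior at the current values
m.rbf.lengthscale.unset_priors()  # remove all priors from this handle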

GPy.core.parameterization.priors module

class DGPLVM(sigma2, lbl, x_shape)[source]

Bases: GPy.core.parameterization.priors.Prior

Implementation of the Discriminative Gaussian Process Latent Variable Model (DGPLVM) paper by Raquel Urtasun.

Parameters: sigma2 – constant

Note

DGPLVM for Classification paper implementation

compute_Mi(cls)[source]
compute_Sb(cls, M_i, M_0)[source]
compute_Sw(cls, M_i)[source]
compute_cls(x)[source]
compute_indices(x)[source]
compute_listIndices(data_idx)[source]
compute_sig_alpha_W(data_idx, lst_idx_all, W_i)[source]
compute_sig_beta_Bi(data_idx, M_i, M_0, lst_idx_all)[source]
compute_wj(data_idx, M_i)[source]
get_class_label(y)[source]
lnpdf(x)[source]
lnpdf_grad(x)[source]
rvs(n)[source]
domain = 'real'
class DGPLVM_KFDA(lambdaa, sigma2, lbl, kern, x_shape)[source]

Bases: GPy.core.parameterization.priors.Prior

Implementation of the Discriminative Gaussian Process Latent Variable model using the Kernel Fisher Discriminant Analysis of Seung-Jean Kim, as used in the face verification paper by Chaochao Lu.

Parameters:
  • lambdaa – constant
  • sigma2 – constant

Note

DGPLVM implementation for the 'Surpassing Human-Level Face Verification' (GaussianFace) paper.

compute_A(lst_ni)[source]
compute_a(lst_ni)[source]
compute_cls(x)[source]
compute_lst_ni()[source]
get_class_label(y)[source]
lnpdf(x)[source]
lnpdf_grad(x)[source]
rvs(n)[source]
x_reduced(cls)[source]
domain = 'real'
class DGPLVM_Lamda(sigma2, lbl, x_shape, lamda, name='DP_prior')[source]

Bases: GPy.core.parameterization.priors.Prior, GPy.core.parameterization.parameterized.Parameterized

Implementation of the Discriminative Gaussian Process Latent Variable Model (DGPLVM) paper by Raquel Urtasun.

Parameters: sigma2 – constant

Note

DGPLVM for Classification paper implementation

compute_Mi(cls)[source]
compute_Sb(cls, M_i, M_0)[source]
compute_Sw(cls, M_i)[source]
compute_cls(x)[source]
compute_indices(x)[source]
compute_listIndices(data_idx)[source]
compute_sig_alpha_W(data_idx, lst_idx_all, W_i)[source]
compute_sig_beta_Bi(data_idx, M_i, M_0, lst_idx_all)[source]
compute_wj(data_idx, M_i)[source]
get_class_label(y)[source]
lnpdf(x)[source]
lnpdf_grad(x)[source]
rvs(n)[source]
domain = 'real'
class DGPLVM_T(sigma2, lbl, x_shape, vec)[source]

Bases: GPy.core.parameterization.priors.Prior

Implementation of the Discriminative Gaussian Process Latent Variable Model (DGPLVM) paper by Raquel Urtasun.

Parameters: sigma2 – constant

Note

DGPLVM for Classification paper implementation

compute_Mi(cls)[source]
compute_Sb(cls, M_i, M_0)[source]
compute_Sw(cls, M_i)[source]
compute_cls(x)[source]
compute_indices(x)[source]
compute_listIndices(data_idx)[source]
compute_sig_alpha_W(data_idx, lst_idx_all, W_i)[source]
compute_sig_beta_Bi(data_idx, M_i, M_0, lst_idx_all)[source]
compute_wj(data_idx, M_i)[source]
get_class_label(y)[source]
lnpdf(x)[source]
lnpdf_grad(x)[source]
rvs(n)[source]
domain = 'real'
class Exponential(l)[source]

Bases: GPy.core.parameterization.priors.Prior

Implementation of the Exponential probability function, coupled with random variables.

Parameters: l – rate parameter
lnpdf(x)[source]
lnpdf_grad(x)[source]
rvs(n)[source]
summary()[source]
domain = 'positive'
class Gamma(a, b)[source]

Bases: GPy.core.parameterization.priors.Prior

Implementation of the Gamma probability function, coupled with random variables.

Parameters:
  • a – shape parameter
  • b – rate parameter (warning: it’s the inverse of the scale)

Note

Bishop 2006 notation is used throughout the code

static from_EV(E, V)[source]

Creates an instance of a Gamma Prior by specifying the Expected value(s) and Variance(s) of the distribution.

Parameters:
  • E – expected value
  • V – variance
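
For example (a sketch: from_EV matches moments, and since E = a/b and V = a/b² it sets a = E²/V and b = E/V):

import GPy

# Gamma prior with E[x] = 1.0 and Var[x] = 0.2,
# i.e. shape a = E^2/V = 5.0 and rate b = E/V = 5.0
p = GPy.priors.Gamma.from_EV(1.0, 0.2)
print(p.a, p.b)  # 5.0 5.0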
lnpdf(x)[source]
lnpdf_grad(x)[source]
rvs(n)[source]
summary()[source]
a
b
domain = 'positive'
class Gaussian(mu, sigma)[source]

Bases: GPy.core.parameterization.priors.Prior

Implementation of the univariate Gaussian probability function, coupled with random variables.

Parameters:
  • mu – mean
  • sigma – standard deviation

Note

Bishop 2006 notation is used throughout the code

lnpdf(x)[source]
lnpdf_grad(x)[source]
rvs(n)[source]
domain = 'real'
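
A small usage sketch (taking sigma to be the standard deviation, as documented):

import GPy

p = GPy.priors.Gaussian(mu=0.0, sigma=1.0)
print(p.lnpdf(0.0))  # log-density at x = 0
samples = p.rvs(5)   # five random draws from the prior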
class HalfT(A, nu)[source]

Bases: GPy.core.parameterization.priors.Prior

Implementation of the half Student-t probability function, coupled with random variables.

Parameters:
  • A – scale parameter
  • nu – degrees of freedom
lnpdf(theta)[source]
lnpdf_grad(theta)[source]
rvs(n)[source]
domain = 'positive'
class InverseGamma(a, b)[source]

Bases: GPy.core.parameterization.priors.Gamma

Implementation of the inverse-Gamma probability function, coupled with random variables.

Parameters:
  • a – shape parameter
  • b – rate parameter (warning: it’s the inverse of the scale)

Note

Bishop 2006 notation is used throughout the code

static from_EV(E, V)[source]

Creates an instance of an inverse-Gamma Prior by specifying the Expected value(s) and Variance(s) of the distribution.

Parameters:
  • E – expected value
  • V – variance
lnpdf(x)[source]
lnpdf_grad(x)[source]
rvs(n)[source]
summary()[source]
domain = 'positive'
class LogGaussian(mu, sigma)[source]

Bases: GPy.core.parameterization.priors.Gaussian

Implementation of the univariate log-Gaussian probability function, coupled with random variables.

Parameters:
  • mu – mean
  • sigma – standard deviation

Note

Bishop 2006 notation is used throughout the code

lnpdf(x)[source]
lnpdf_grad(x)[source]
rvs(n)[source]
domain = 'positive'
class MultivariateGaussian(mu, var)[source]

Bases: GPy.core.parameterization.priors.Prior

Implementation of the multivariate Gaussian probability function, coupled with random variables.

Parameters:
  • mu – mean (N-dimensional array)
  • var – covariance matrix (NxN)

Note

Bishop 2006 notation is used throughout the code

lnpdf(x)[source]
lnpdf_grad(x)[source]
pdf(x)[source]
plot()[source]
rvs(n)[source]
summary()[source]
domain = 'real'
class Prior[source]

Bases: object

pdf(x)[source]
plot()[source]
domain = None
class StudentT(mu, sigma, nu)[source]

Bases: GPy.core.parameterization.priors.Prior

Implementation of the Student-t probability function, coupled with random variables.

Parameters:
  • mu – mean
  • sigma – standard deviation
  • nu – degrees of freedom

Note

Bishop 2006 notation is used throughout the code

lnpdf(x)[source]
lnpdf_grad(x)[source]
rvs(n)[source]
domain = 'real'
class Uniform(lower, upper)[source]

Bases: GPy.core.parameterization.priors.Prior

lnpdf(x)[source]
lnpdf_grad(x)[source]
rvs(n)[source]
gamma_from_EV(E, V)[source]

GPy.core.parameterization.transformations module

GPy.core.parameterization.variational module

class NormalPosterior(means=None, variances=None, name='latent space', *a, **kw)[source]

Bases: GPy.core.parameterization.variational.VariationalPosterior

NormalPosterior distribution for variational approximations.

Holds the means and variances for a factorizing multivariate normal distribution.

KL(other)[source]

Compute the KL divergence to another NormalPosterior object. This only holds if the two NormalPosterior objects have the same shape, as computational shortcuts for the multivariate normal KL divergence are used.
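
A minimal sketch, assuming two illustrative posteriors of matching shape:

import numpy as np
from GPy.core.parameterization.variational import NormalPosterior

q1 = NormalPosterior(means=np.random.randn(10, 2), variances=np.ones((10, 2)))
q2 = NormalPosterior(means=np.zeros((10, 2)), variances=2.0 * np.ones((10, 2)))

kl = q1.KL(q2)  # scalar KL[q1 || q2]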

plot(*args, **kwargs)[source]

Plot latent space X in 1D:

See GPy.plotting.matplot_dep.variational_plots

class NormalPrior(name='normal_prior', **kw)[source]

Bases: GPy.core.parameterization.variational.VariationalPrior

KL_divergence(variational_posterior)[source]
update_gradients_KL(variational_posterior)[source]

Updates the gradients for mean and variance in place.

class SpikeAndSlabPosterior(means, variances, binary_prob, group_spike=False, sharedX=False, name='latent space')[source]

Bases: GPy.core.parameterization.variational.VariationalPosterior

The SpikeAndSlab distribution for variational approximations.

binary_prob – the probability that the distribution is on the slab part.

collate_gradient()[source]
plot(*args, **kwargs)[source]

Plot latent space X in 1D:

See GPy.plotting.matplot_dep.variational_plots

propogate_val()[source]
set_gradients(grad)[source]
class SpikeAndSlabPrior(pi=None, learnPi=False, variance=1.0, group_spike=False, name='SpikeAndSlabPrior', **kw)[source]

Bases: GPy.core.parameterization.variational.VariationalPrior

KL_divergence(variational_posterior)[source]
update_gradients_KL(variational_posterior)[source]

Updates the gradients for mean and variance in place.

class VariationalPosterior(means=None, variances=None, name='latent space', *a, **kw)[source]

Bases: GPy.core.parameterization.parameterized.Parameterized

has_uncertain_inputs()[source]
set_gradients(grad)[source]
class VariationalPrior(name='latent prior', **kw)[source]

Bases: GPy.core.parameterization.parameterized.Parameterized

KL_divergence(variational_posterior)[source]
update_gradients_KL(variational_posterior)[source]

Updates the gradients for mean and variance in place.
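
A minimal sketch of the prior/posterior interplay, assuming an illustrative NormalPosterior; that the gradients land in q.mean.gradient and q.variance.gradient is an assumption based on the Param layout, not documented above:

import numpy as np
from GPy.core.parameterization.variational import NormalPosterior, NormalPrior

q = NormalPosterior(means=np.random.randn(10, 2), variances=np.ones((10, 2)))
p = NormalPrior()

kl = p.KL_divergence(q)   # KL divergence between q and the normal prior
p.update_gradients_KL(q)  # assumed to write into q.mean.gradient / q.variance.gradient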