Evaluate the Gaussian process at the given points.
The output of this function is a GaussianDistribution from which
the latent function's mean and covariance can be evaluated and the
distribution can be sampled.
Under the hood, __call__ invokes the predict method. Classes
inheriting AbstractPrior should not override __call__ and
should instead define a predict method.
Literal denoting whether to return the full covariance
of the joint predictive distribution at the test_inputs (dense)
or just the standard deviation of the predictive distribution at
the test_inputs.
Compute the latent function's multivariate normal distribution for a
given set of parameters. For any class inheriting the AbstractPrior class,
this method must be implemented.
Literal denoting whether to return the full covariance
of the joint predictive distribution at the test_inputs (dense)
or just the standard deviation of the predictive distribution at
the test_inputs.
mean function object inheriting from AbstractMeanFunction.
__mul__(other)
Combine the prior with a likelihood to form a posterior distribution.
The product of a prior and likelihood is proportional to the posterior
distribution. By computing the product of a GP prior and a likelihood
object, a posterior GP object will be returned. Mathematically, this can
be described by:
$$
p(f(\cdot) \mid \mathbf{y}) \propto p(\mathbf{y} \mid f(\cdot))\, p(f(\cdot)),
$$
where $p(\mathbf{y} \mid f(\cdot))$ is the likelihood and $p(f(\cdot))$ is the prior.
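This proportionality can be checked numerically in a conjugate scalar case. The following sketch (plain NumPy, with made-up values) normalises the pointwise product of a Gaussian prior and a Gaussian likelihood and recovers the analytic posterior moments:

```python
import numpy as np

# Prior: f ~ N(0, 1). Likelihood: y | f ~ N(f, sigma^2). Observed y = 0.5.
# The analytic posterior is N(y / (1 + sigma^2), sigma^2 / (1 + sigma^2)),
# i.e. N(0.4, 0.2) for the values below.
sigma2 = 0.25
y = 0.5

f = np.linspace(-5.0, 5.0, 20001)
df = f[1] - f[0]
prior = np.exp(-0.5 * f**2) / np.sqrt(2 * np.pi)
likelihood = np.exp(-0.5 * (y - f) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)

# Normalise the pointwise product to obtain the posterior density.
unnormalised = prior * likelihood
posterior = unnormalised / (unnormalised.sum() * df)

post_mean = (f * posterior).sum() * df
post_var = ((f - post_mean) ** 2 * posterior).sum() * df
print(post_mean, post_var)  # approximately 0.4 and 0.2
```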
Compute the predictive prior distribution for a given set of
parameters. The output of this function is a GaussianDistribution
for a given set of inputs.
In the following example, we compute the predictive prior distribution
and then evaluate it on the interval [0, 1]:
Literal denoting whether to return the full covariance
of the joint predictive distribution at the test_inputs (dense)
or just the standard deviation of the predictive distribution at
the test_inputs.
Approximate samples from the Gaussian process prior.
Build an approximate sample from the Gaussian process prior. This method
provides a function that returns the evaluations of a sample across any
given inputs.
In particular, we approximate the Gaussian process's prior as the
finite feature approximation
$$
\hat{f}(x) = \sum_{i=1}^{m} \phi_i(x) \theta_i,
$$
where $\phi_i$ are $m$ features
sampled from the Fourier feature decomposition of the model's kernel and
$\theta_i$ are samples from a unit Gaussian.
A key property of such functional samples is that the same sample draw is
evaluated for all queries. Consistency is a property that is prohibitively costly
to ensure when sampling exactly from the GP prior, as the cost of exact sampling
scales cubically with the size of the sample. In contrast, finite feature representations
can be evaluated with constant cost regardless of the required number of queries.
In the following example, we build 10 such samples and then evaluate them
over the interval [0,1]:
For a prior distribution, the following code snippet will
build and evaluate an approximate sample.
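A minimal NumPy sketch of this finite feature construction, assuming an RBF kernel with unit hyperparameters (illustrative, not the GPJax API):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100            # number of Fourier features
lengthscale = 1.0  # assumed RBF lengthscale

# Frequencies are drawn from the RBF kernel's spectral density (a Gaussian
# with standard deviation 1 / lengthscale); biases are uniform on [0, 2*pi).
omega = rng.normal(0.0, 1.0 / lengthscale, size=m)
bias = rng.uniform(0.0, 2.0 * np.pi, size=m)
theta = rng.normal(size=m)  # unit-Gaussian weights, fixed once per draw

def sample(x):
    """Evaluate one functional sample at an array of inputs x."""
    phi = np.sqrt(2.0 / m) * np.cos(np.outer(x, omega) + bias)  # (n, m) features
    return phi @ theta

x = np.linspace(0.0, 1.0, 5)
values = sample(x)
# Consistency: re-querying overlapping inputs returns identical values.
print(np.allclose(values[:3], sample(x[:3])))  # True
```

Because `omega`, `bias`, and `theta` are fixed once, every call to `sample` evaluates the same functional draw, which is exactly the consistency property described above.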
Evaluate the Gaussian process at the given points.
The output of this function is a GaussianDistribution from which
the latent function's mean and covariance can be evaluated and the
distribution can be sampled.
Under the hood, __call__ invokes the predict method. Classes
inheriting AbstractPrior should not override __call__ and
should instead define a predict method.
Literal denoting whether to return the full covariance
of the joint predictive distribution at the test_inputs (dense)
or just the standard deviation of the predictive distribution at
the test_inputs.
Evaluate the Gaussian process posterior at the given points.
The output of this function is a GaussianDistribution from which
the latent function's mean and covariance can be evaluated and the
distribution can be sampled.
Under the hood, __call__ invokes the predict method. Classes
inheriting AbstractPosterior should not override __call__ and
should instead define a predict method.
Literal denoting whether to return the full covariance
of the joint predictive distribution at the test_inputs (dense)
or just the standard deviation of the predictive distribution at
the test_inputs.
Compute the latent function's multivariate normal distribution for a
given set of parameters. For any class inheriting the AbstractPosterior class,
this method must be implemented.
Literal denoting whether to return the full covariance
of the joint predictive distribution at the test_inputs (dense)
or just the standard deviation of the predictive distribution at
the test_inputs.
Evaluate the Gaussian process posterior at the given points.
The output of this function is a GaussianDistribution from which
the latent function's mean and covariance can be evaluated and the
distribution can be sampled.
Under the hood, __call__ invokes the predict method. Classes
inheriting AbstractPosterior should not override __call__ and
should instead define a predict method.
Literal denoting whether to return the full covariance
of the joint predictive distribution at the test_inputs (dense)
or just the standard deviation of the predictive distribution at
the test_inputs.
A Gaussian process posterior distribution when the constituent likelihood
function is a Gaussian distribution. In such cases, the latent function values
f can be analytically integrated out of the posterior distribution.
As such, many computational operations can be simplified; something we make use
of in this object.
For a Gaussian process prior $p(f)$ and a Gaussian likelihood
$p(\mathbf{y} \mid \mathbf{f}) = \mathcal{N}(\mathbf{y} \mid \mathbf{f}, \sigma^2\mathbf{I})$, where
$\mathbf{f} = f(\mathbf{x})$, the predictive posterior distribution at
a set of inputs $\mathbf{x}^{\star}$ is available in closed form.
Conditional on a training data set, compute the GP's posterior
predictive distribution for a given set of parameters. The returned function
can be evaluated at a set of test inputs to compute the corresponding
predictive density.
The predictive distribution of a conjugate GP is given by
$$
\begin{aligned}
p(\mathbf{f}^{\star}\mid \mathbf{y}) & = \int p(\mathbf{f}^{\star}, \mathbf{f} \mid \mathbf{y})\,\mathrm{d}\mathbf{f} \\
& = \mathcal{N}\left(\mathbf{f}^{\star};\, \boldsymbol{\mu}_{\mid \mathbf{y}}, \boldsymbol{\Sigma}_{\mid \mathbf{y}}\right),
\end{aligned}
$$
where
$$
\begin{aligned}
\boldsymbol{\mu}_{\mid \mathbf{y}} & = k(\mathbf{x}^{\star}, \mathbf{x})\left(k(\mathbf{x}, \mathbf{x}') + \sigma^2\mathbf{I}_n\right)^{-1}\mathbf{y}, \\
\boldsymbol{\Sigma}_{\mid \mathbf{y}} & = k(\mathbf{x}^{\star}, \mathbf{x}^{\star\prime}) - k(\mathbf{x}^{\star}, \mathbf{x})\left(k(\mathbf{x}, \mathbf{x}') + \sigma^2\mathbf{I}_n\right)^{-1}k(\mathbf{x}, \mathbf{x}^{\star}).
\end{aligned}
$$
The conditioning set is a GPJax Dataset object, whilst predictions
are made on a regular JAX array.
Literal denoting whether to return the full covariance
of the joint predictive distribution at the test_inputs (dense)
or just the standard deviation of the predictive distribution at
the test_inputs.
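As an illustration of the two expressions above, the following NumPy sketch (toy data and unit RBF hyperparameters; illustrative values only, not the GPJax implementation) computes the predictive mean and covariance directly:

```python
import numpy as np

def rbf(a, b):
    """RBF kernel matrix k(a, b) for 1-D inputs (unit hyperparameters)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

# Toy training data and test inputs (illustrative values only).
x = np.array([0.0, 0.5, 1.0])
y = np.array([1.0, -0.5, 2.0])
xstar = np.linspace(0.0, 1.0, 50)
noise = 1e-4  # sigma^2

# mu_{|y}    = k(x*, x) (k(x, x) + sigma^2 I_n)^{-1} y
# Sigma_{|y} = k(x*, x*') - k(x*, x) (k(x, x) + sigma^2 I_n)^{-1} k(x, x*)
Kxx = rbf(x, x) + noise * np.eye(len(x))
Ksx = rbf(xstar, x)
mu = Ksx @ np.linalg.solve(Kxx, y)
Sigma = rbf(xstar, xstar) - Ksx @ np.linalg.solve(Kxx, Ksx.T)

# With near-zero noise, the posterior mean interpolates the training targets
# and the predictive variance collapses at the training inputs.
print(mu[0], mu[-1])  # close to y[0] = 1.0 and y[-1] = 2.0
```

In practice a Cholesky factorisation of the Gram matrix would replace the repeated solves, but the direct form above mirrors the equations term by term.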
Draw approximate samples from the Gaussian process posterior.
Build an approximate sample from the Gaussian process posterior. This method
provides a function that returns the evaluations of a sample across any given
inputs.
Unlike when building approximate samples from a Gaussian process prior, decompositions
based on Fourier features alone rarely give accurate samples. Therefore, we must also
include an additional set of features (known as canonical features) to better model the
transition from Gaussian process prior to Gaussian process posterior. For more details
see Wilson et al. (2020).
In particular, we approximate the Gaussian process's posterior as the finite
feature approximation
$$
\hat{f}(x) = \sum_{i=1}^{m} \phi_i(x) \theta_i + \sum_{j=1}^{N} v_j k(\cdot, x_j),
$$
where $\phi_i$ are $m$ features sampled from the Fourier feature decomposition of
the model's kernel and $k(\cdot, x_j)$ are $N$ canonical features. The Fourier
weights $\theta_i$ are samples from a unit Gaussian. See
Wilson et al. (2020) for expressions
for the canonical weights $v_j$.
A key property of such functional samples is that the same sample draw is
evaluated for all queries. Consistency is a property that is prohibitively costly
to ensure when sampling exactly from the GP prior, as the cost of exact sampling
scales cubically with the size of the sample. In contrast, finite feature representations
can be evaluated with constant cost regardless of the required number of queries.
Parameters:
    num_samples (int) – The desired number of samples.
    key (KeyArray) – The random seed used for the sample(s).
    num_features (int, default: 100) – The number of features used when approximating the kernel.
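A minimal NumPy sketch of this pathwise construction, following the decomposition above (unit RBF hyperparameters and toy data assumed; illustrative, not the GPJax implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b):
    """RBF kernel matrix for 1-D inputs (unit hyperparameters)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

# Toy observations (illustrative values only).
x = np.array([0.2, 0.5, 0.8])
y = np.array([1.0, -1.0, 0.5])
noise = 1e-6  # sigma^2, kept tiny so samples (almost) interpolate the data

# 1. A prior sample from m Fourier features (the phi_i, theta_i terms above).
m = 500
omega = rng.normal(size=m)
bias = rng.uniform(0.0, 2.0 * np.pi, size=m)
theta = rng.normal(size=m)

def prior_sample(t):
    phi = np.sqrt(2.0 / m) * np.cos(np.outer(t, omega) + bias)
    return phi @ theta

# 2. Canonical features k(., x_j) with weights v chosen so the updated sample
#    matches the (noisy) data, following Wilson et al. (2020).
eps = rng.normal(0.0, np.sqrt(noise), size=len(x))
K = rbf(x, x) + noise * np.eye(len(x))
v = np.linalg.solve(K, y - prior_sample(x) - eps)

def posterior_sample(t):
    return prior_sample(t) + rbf(t, x) @ v

print(posterior_sample(x))  # close to y = [1.0, -1.0, 0.5]
```

The canonical-feature correction is what transports the prior sample onto the posterior: with near-zero noise, the updated sample passes through the observations while remaining a consistent function away from them.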
Evaluate the Gaussian process posterior at the given points.
The output of this function is a GaussianDistribution from which
the latent function's mean and covariance can be evaluated and the
distribution can be sampled.
Under the hood, __call__ invokes the predict method. Classes
inheriting AbstractPosterior should not override __call__ and
should instead define a predict method.
Literal denoting whether to return the full covariance
of the joint predictive distribution at the test_inputs (dense)
or just the standard deviation of the predictive distribution at
the test_inputs.
A non-conjugate Gaussian process posterior object.
A Gaussian process posterior object for models where the likelihood is
non-Gaussian. Unlike the ConjugatePosterior object, the
NonConjugatePosterior object does not provide an exact marginal
log-likelihood function. Instead, the NonConjugatePosterior object
represents the posterior distributions as a function of the model's
hyperparameters and the latent function. Markov chain Monte Carlo,
variational inference, or Laplace approximations can then be used to sample
from, or optimise an approximation to, the posterior distribution.
Conditional on a set of training data, compute the GP's posterior
predictive distribution for a given set of parameters. The returned
function can be evaluated at a set of test inputs to compute the
corresponding predictive density. Note that, to obtain predictions on the
scale of the original data, the returned distribution must be
transformed through the likelihood function's inverse link function.
test_inputs (Num[Array, "N D"]): A Jax array of test inputs at which the
predictive distribution is evaluated.
train_data (Dataset): A gpx.Dataset object that contains the input
and output data used for training.
return_covariance_type: Literal denoting whether to return the full covariance
of the joint predictive distribution at the test_inputs (dense)
or just the standard deviation of the predictive distribution at
the test_inputs.
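As an example of the inverse-link transformation mentioned above, the probit link of a Bernoulli likelihood admits a closed-form Gaussian integral. A small NumPy sketch with made-up latent predictive moments (illustrative only):

```python
import numpy as np
from math import erf, sqrt

# Made-up latent predictive moments at three test points.
latent_mean = np.array([-1.0, 0.0, 2.0])
latent_var = np.array([0.5, 1.0, 0.25])

def probit(z):
    """Standard normal CDF: the inverse link of a Bernoulli-probit likelihood."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# For the probit link, the Gaussian integral has a closed form:
# E[probit(f)] = probit(mu / sqrt(1 + var)) for f ~ N(mu, var).
pred_prob = np.array(
    [probit(mu / np.sqrt(1.0 + var)) for mu, var in zip(latent_mean, latent_var)]
)
print(pred_prob)  # valid probabilities in (0, 1); the middle entry is exactly 0.5
```

For other link functions the integral generally has no closed form and is approximated, e.g. by Gauss-Hermite quadrature or Monte Carlo.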
Evaluate the Gaussian process posterior at the given points.
The output of this function is a GaussianDistribution from which
the latent function's mean and covariance can be evaluated and the
distribution can be sampled.
Under the hood, __call__ invokes the predict method. Classes
inheriting AbstractPosterior should not override __call__ and
should instead define a predict method.
Literal denoting whether to return the full covariance
of the joint predictive distribution at the test_inputs (dense)
or just the standard deviation of the predictive distribution at
the test_inputs.
Evaluate the Gaussian process posterior at the given points.
The output of this function is a GaussianDistribution from which
the latent function's mean and covariance can be evaluated and the
distribution can be sampled.
Under the hood, __call__ invokes the predict method. Classes
inheriting AbstractPosterior should not override __call__ and
should instead define a predict method.
Literal denoting whether to return the full covariance
of the joint predictive distribution at the test_inputs (dense)
or just the standard deviation of the predictive distribution at
the test_inputs.
Evaluate the Gaussian process posterior at the given points.
The output of this function is a GaussianDistribution from which
the latent function's mean and covariance can be evaluated and the
distribution can be sampled.
Under the hood, __call__ invokes the predict method. Classes
inheriting AbstractPosterior should not override __call__ and
should instead define a predict method.
Literal denoting whether to return the full covariance
of the joint predictive distribution at the test_inputs (dense)
or just the standard deviation of the predictive distribution at
the test_inputs.
Utility function for constructing a posterior object from a prior and
likelihood. The function will automatically select the correct posterior
object based on the likelihood.
The likelihood that represents our
beliefs around the distribution of the data.
Returns
AbstractPosterior
A posterior distribution. If the likelihood is Gaussian, then a
ConjugatePosterior will be returned. Otherwise, a
NonConjugatePosterior will be returned.
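The selection logic can be sketched as simple dispatch on the likelihood type. The class bodies below are illustrative stand-ins, not GPJax's actual definitions:

```python
# Stand-in likelihood and posterior classes; names mirror the documentation
# above but the bodies are hypothetical placeholders.
class Gaussian:
    pass

class Bernoulli:
    pass

class ConjugatePosterior:
    def __init__(self, prior, likelihood):
        self.prior, self.likelihood = prior, likelihood

class NonConjugatePosterior:
    def __init__(self, prior, likelihood):
        self.prior, self.likelihood = prior, likelihood

def construct_posterior(prior, likelihood):
    """Dispatch on the likelihood: Gaussian -> conjugate, otherwise non-conjugate."""
    if isinstance(likelihood, Gaussian):
        return ConjugatePosterior(prior, likelihood)
    return NonConjugatePosterior(prior, likelihood)

print(type(construct_posterior(object(), Gaussian())).__name__)   # ConjugatePosterior
print(type(construct_posterior(object(), Bernoulli())).__name__)  # NonConjugatePosterior
```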