Rff
gpjax.kernels.approximations.rff
Compute Random Fourier Feature (RFF) kernel approximations.
RFF
dataclass
Bases: AbstractKernel
Computes an approximation of the kernel using Random Fourier Features.
By Bochner's theorem, every stationary kernel is the Fourier transform of a probability distribution; we call this distribution the kernel's spectral density. The kernel can therefore be approximated by Monte Carlo: a finite number of frequencies are sampled from the spectral density, the data are mapped through the corresponding Fourier basis functions, and the kernel is approximated by the inner product of these feature vectors.
The key references for this implementation are the following papers:
- 'Random Features for Large-Scale Kernel Machines' by Rahimi and Recht (2008).
- 'On the Error of Random Fourier Features' by Sutherland and Schneider (2015).
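To make the approximation concrete, here is a minimal, self-contained sketch of the underlying idea. The function `rff_approximation` and everything inside it are illustrative only, not GPJax's implementation (GPJax performs the equivalent computation through `BasisFunctionComputation`): frequencies are sampled from the spectral density of an RBF kernel, and the Gram matrix is approximated by an inner product of cosine/sine features.

```python
import jax.numpy as jnp
import jax.random as jr


def rff_approximation(x, key, lengthscale=1.0, num_basis_fns=50):
    """Approximate an RBF kernel's Gram matrix with random Fourier features.

    The RBF kernel's spectral density is Gaussian, so frequencies are drawn
    from N(0, lengthscale**-2 * I). Each frequency contributes one cosine and
    one sine feature, giving L = 2 * num_basis_fns feature columns.
    """
    n, d = x.shape
    frequencies = jr.normal(key, (num_basis_fns, d)) / lengthscale
    projections = x @ frequencies.T  # shape (N, M)
    features = jnp.concatenate(
        [jnp.cos(projections), jnp.sin(projections)], axis=-1
    ) / jnp.sqrt(num_basis_fns)  # shape (N, 2M)
    return features @ features.T  # Monte-Carlo estimate of the Gram matrix


key_data, key_freq = jr.split(jr.PRNGKey(123))
x = jr.uniform(key_data, (10, 2))
exact = jnp.exp(-0.5 * jnp.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1))
approx = rff_approximation(x, key_freq, num_basis_fns=500)
print(jnp.max(jnp.abs(exact - approx)))  # error shrinks as num_basis_fns grows
```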
active_dims: Optional[List[int]] = static_field(None)
class-attribute
instance-attribute
name: str = static_field('AbstractKernel')
class-attribute
instance-attribute
ndims
property
spectral_density: Optional[tfd.Distribution]
property
base_kernel: Union[AbstractKernel, None] = None
class-attribute
instance-attribute
num_basis_fns: int = static_field(50)
class-attribute
instance-attribute
frequencies: Union[Float[Array, 'M D'], None] = param_field(None, bijector=tfb.Identity())
class-attribute
instance-attribute
compute_engine: BasisFunctionComputation = static_field(BasisFunctionComputation(), repr=False)
class-attribute
instance-attribute
key: KeyArray = static_field(PRNGKey(123))
class-attribute
instance-attribute
__init_subclass__(mutable: bool = False)
replace(**kwargs: Any) -> Self
replace_meta(**kwargs: Any) -> Self
update_meta(**kwargs: Any) -> Self
replace_trainable(**kwargs: Dict[str, bool]) -> Self
Replace the trainability status of local nodes of the Module.
replace_bijector(**kwargs: Dict[str, tfb.Bijector]) -> Self
Replace the bijectors of local nodes of the Module.
constrain() -> Self
unconstrain() -> Self
stop_gradient() -> Self
trainables() -> Self
cross_covariance(x: Num[Array, 'N D'], y: Num[Array, 'M D'])
gram(x: Num[Array, 'N D'])
slice_input(x: Float[Array, '... D']) -> Float[Array, '... Q']
Slice out the relevant columns of the input matrix.
Select the relevant columns of the supplied matrix to be used within the kernel's evaluation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Float[Array, '... D']` | The matrix or vector that is to be sliced. | required |
Returns
Float[Array, "... Q"]: A sliced form of the input matrix.
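A brief usage sketch, assuming the usual `import gpjax as gpx` alias and that `active_dims` indexes input columns as documented above (`slice_input` is inherited from `AbstractKernel`, so it behaves the same on any kernel):

```python
import jax.numpy as jnp
import gpjax as gpx

# A kernel restricted to columns 0 and 2 of a three-dimensional input.
kernel = gpx.kernels.RBF(active_dims=[0, 2])

x = jnp.ones((5, 3))
sliced = kernel.slice_input(x)  # expected shape (5, 2): only the active columns remain
```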
__add__(other: Union[AbstractKernel, ScalarFloat]) -> AbstractKernel
__radd__(other: Union[AbstractKernel, ScalarFloat]) -> AbstractKernel
__mul__(other: Union[AbstractKernel, ScalarFloat]) -> AbstractKernel
Multiply two kernels together.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `other` | `AbstractKernel` | The kernel to be multiplied with the current kernel. | required |
Returns
AbstractKernel: A new kernel that is the product of the two kernels.
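A hedged sketch of kernel composition, again assuming the `gpjax as gpx` alias; per the signatures above, both other kernels and scalar floats are accepted:

```python
import gpjax as gpx

rbf = gpx.kernels.RBF()
matern = gpx.kernels.Matern52()

sum_kernel = rbf + matern      # __add__: sum of two kernels
product_kernel = rbf * matern  # __mul__: product of two kernels
scaled_kernel = rbf * 2.0      # scalar multiplication per the signature
```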
__post_init__() -> None
Post-initialisation function.
This function is called after the initialisation of the kernel. It is used to set the computation engine to be the basis function computation engine.
__call__(x: Float[Array, 'D 1'], y: Float[Array, 'D 1']) -> None
Superfluous for RFFs.
compute_features(x: Float[Array, 'N D']) -> Float[Array, 'N L']
Compute the features for the inputs.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Float[Array, 'N D']` | An array of inputs. | required |
Returns
Float[Array, "N L"]: An `N x L` array of features, where `L = 2 * num_basis_fns` (one cosine and one sine feature per sampled frequency).
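A hedged usage sketch, assuming the `gpjax as gpx` alias and the constructor arguments listed above:

```python
import jax.random as jr
import gpjax as gpx

kernel = gpx.kernels.RFF(base_kernel=gpx.kernels.RBF(), num_basis_fns=50)

x = jr.uniform(jr.PRNGKey(0), (100, 1))
features = kernel.compute_features(x)
print(features.shape)  # expected (100, 100): N rows, L = 2 * num_basis_fns columns
```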