tfp.substrates.jax.math.psd_kernels.ChangePoint

Changepoint Kernel.

Inherits From: AutoCompositeTensorPsdKernel, PositiveSemidefiniteKernel

Given a list of kernels k_1, k_2, ..., k_n, and 1-D inputs x and y, this kernel computes a smooth interpolant between the kernels.

k(x, y) = (1 - s_1(x)) * k_1(x, y) * (1 - s_1(y)) +
          (1 - s_2(x)) * s_1(x) * k_2(x, y) * (1 - s_2(y)) * s_1(y) +
          (1 - s_3(x)) * s_2(x) * k_3(x, y) * (1 - s_3(y)) * s_2(y) +
          ...
          s_{n-1}(x) * k_n(x, y) * s_{n-1}(y)

where:

  • s_i(x) = sigmoid(slopes[i] * (x - locs[i]))
  • locs is a Tensor of length n - 1 that's in ascending order.
  • slopes is a positive Tensor of length n - 1.

If we have 2 kernels k_1 and k_2, this takes the form:

k(x, y) = (1 - s_1(x)) * k_1(x, y) * (1 - s_1(y)) +
          s_1(x) * k_2(x, y) * s_1(y)

When x and y are much less than locs[0], k(x, y) ~= k_1(x, y), while when x and y are much greater than locs[0], k(x, y) ~= k_2(x, y).

In general, this kernel smoothly interpolates between the k_i(x, y), with k(x, y) ~= k_i(x, y) when locs[i - 1] < x, y < locs[i].
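
For concreteness, here is a minimal usage sketch in the JAX substrate (the kernel choices, change-point location, and slope below are illustrative assumptions, not part of the API):

import numpy as np
import tensorflow_probability as tfp; tfp = tfp.substrates.jax
tfpk = tfp.math.psd_kernels

# Interpolate between a smooth kernel and a rougher one, switching near x = 0.
change_point = tfpk.ChangePoint(
    kernels=[tfpk.ExponentiatedQuadratic(), tfpk.MaternOneHalf()],
    locs=[0.],     # location of the change point
    slopes=[5.])   # larger slope => sharper transition

# 1-D inputs still need a trailing feature dimension (feature_ndims == 1).
x_left = np.float32([[-10.], [-9.]])   # far below locs[0]
x_right = np.float32([[9.], [10.]])    # far above locs[0]

# Far to the left of the change point the result is close to the
# ExponentiatedQuadratic matrix; far to the right, close to the MaternOneHalf matrix.
change_point.matrix(x_left, x_left)
change_point.matrix(x_right, x_right)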

This kernel accepts an optional weight_fn, which consumes x and returns a scalar. This is used when computing the sigmoids s_i(x) = sigmoid(slopes[i] * (w(x) - locs[i])), which allows this kernel to be computed on arbitrary-dimensional input. For instance, a weight_fn that computes a norm (such as tf.linalg.norm) would smoothly interpolate between different kernels over different annuli in the plane, as in the sketch below.
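
A hedged sketch of such a weight_fn in the JAX substrate (the function name, kernel choices, and constants are illustrative assumptions):

import numpy as np
import jax.numpy as jnp
import tensorflow_probability as tfp; tfp = tfp.substrates.jax
tfpk = tfp.math.psd_kernels

def radial_weight_fn(x, feature_ndims):
  # Euclidean norm over the right-most `feature_ndims` dimensions, one
  # scalar per example.
  return jnp.sqrt(jnp.sum(jnp.square(x),
                          axis=tuple(range(-feature_ndims, 0))))

annulus_kernel = tfpk.ChangePoint(
    kernels=[tfpk.ExponentiatedQuadratic(), tfpk.MaternOneHalf()],
    locs=[2.],      # switch kernels near radius 2
    slopes=[10.],
    weight_fn=radial_weight_fn)

# Points in the plane (feature_ndims == 1, feature dimension of size 2).
pts = np.float32([[0.1, 0.1],   # well inside radius 2
                  [3., 3.]])    # well outside radius 2
annulus_kernel.matrix(pts, pts)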

References

[1]: Andrew Gordon Wilson. The Change Point Kernel. https://www.cs.cmu.edu/~andrewgw/changepoints.pdf

Args

kernels List of size [N] of PositiveSemidefiniteKernel instances to interpolate between.
locs Ascending floating-point Tensor of shape broadcastable to [..., N - 1] that controls the regions for the interpolation. If kernels is a list of 1-D kernels with the default weight_fn, then between locs[i - 1] and locs[i] this kernel acts like kernels[i].
slopes Positive floating-point Tensor of shape broadcastable to [..., N - 1] that controls how smooth the interpolation between kernels is (larger slopes mean sharper, more step-like transitions).
weight_fn Python callable which takes an input x and a feature_ndims argument and returns a Tensor with one scalar for each right-most feature_ndims of the input (in other words, if x is a batch of inputs, weight_fn returns a batch of scalars with the same batch shape); a sketch appears after this list. Default value: sums over the last feature_ndims of the input x.
validate_args If True, parameters are checked for validity despite possibly degrading runtime performance.
name Python str name prefixed to Ops created by this class.
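
To make the weight_fn contract concrete, the following sketch shows a callable equivalent in spirit to the documented default (summing over the right-most feature_ndims dimensions); the function name is illustrative:

import jax.numpy as jnp

def default_like_weight_fn(x, feature_ndims):
  # One scalar per example: sum over the right-most `feature_ndims`
  # dimensions, preserving any batch/example dimensions to the left.
  return jnp.sum(x, axis=tuple(range(-feature_ndims, 0)))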

Attributes

batch_shape Shape of a single sample from a single event index as a TensorShape.

May be partially defined or unknown.

The batch dimensions are indexes into independent, non-identical parameterizations of this PositiveSemidefiniteKernel.

dtype DType over which the kernel operates.
feature_ndims The number of feature dimensions.

Kernel functions generally act on pairs of inputs from some space like

R^(d1 x ... x dD)

or, in words: rank-D real-valued tensors of shape [d1, ..., dD]. Inputs can be vectors in some R^N, but are not restricted to be. Indeed, one might consider kernels over matrices, tensors, or even more general spaces, like strings or graphs.
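
For example (a hedged sketch; the choice of ExponentiatedQuadratic and the shapes are illustrative), a kernel built with feature_ndims=2 treats each trailing 3 x 4 block of the input as a single example:

import numpy as np
import tensorflow_probability as tfp; tfp = tfp.substrates.jax

matrix_kernel = tfp.math.psd_kernels.ExponentiatedQuadratic(feature_ndims=2)
x = np.ones([5, 3, 4], np.float32)  # five 3 x 4 feature "matrices"
y = np.ones([5, 3, 4], np.float32)
matrix_kernel.apply(x, y).shape
# ==> [5]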

kernels

locs

name Name prepended to all ops created by this class.
parameters Dictionary of parameters used to instantiate this PSDKernel.
slopes

trainable_variables

validate_args Python bool indicating possibly expensive checks are enabled.
variables

weight_fn

Methods

apply

View source

Apply the kernel function to pairs of inputs.

Args
x1 Tensor input to the kernel, of shape B1 + E1 + F, where B1 and E1 may be empty (ie, no batch/example dims, resp.) and F (the feature shape) must have rank equal to the kernel's feature_ndims property. Batch shape must broadcast with the batch shape of x2 and with the kernel's batch shape. Example shape must broadcast with example shape of x2. x1 and x2 must have the same number of example dims (ie, same rank).
x2 Tensor input to the kernel, of shape B2 + E2 + F, where B2 and E2 may be empty (ie, no batch/example dims, resp.) and F (the feature shape) must have rank equal to the kernel's feature_ndims property. Batch shape must broadcast with the batch shape of x1 and with the kernel's batch shape. Example shape must broadcast with example shape of x1. x1 and x2 must have the same number of example dims (ie, same rank).
example_ndims A python integer, the number of example dims in the inputs. In essence, this parameter controls how broadcasting of the kernel's batch shape with input batch shapes works. The kernel batch shape will be broadcast against everything to the left of the combined example and feature dimensions in the input shapes.
name Name to give to the op.

Returns
Tensor containing the results of applying the kernel function to inputs x1 and x2. If the kernel parameters' batch shape is Bk then the shape of the Tensor resulting from this method call is broadcast(Bk, B1, B2) + broadcast(E1, E2).

Given an index set S, a kernel function is mathematically defined as a real- or complex-valued function on S satisfying the positive semi-definiteness constraint:

sum_i sum_j (c[i]*) c[j] k(x[i], x[j]) >= 0

for any finite collections {x[1], ..., x[N]} in S and {c[1], ..., c[N]} in the reals (or the complex plane). '*' is the complex conjugate, in the complex case.

This method most closely resembles the function described in the mathematical definition of a kernel. Given a PositiveSemidefiniteKernel k with scalar parameters and inputs x and y in S, apply(x, y) yields a single scalar value.

Examples

import numpy as np
import tensorflow_probability as tfp; tfp = tfp.substrates.jax

# Suppose `SomeKernel` acts on vectors (rank-1 tensors)
scalar_kernel = tfp.math.psd_kernels.SomeKernel(param=.5)
scalar_kernel.batch_shape
# ==> []

# `x` and `y` are batches of five 3-D vectors:
x = np.ones([5, 3], np.float32)
y = np.ones([5, 3], np.float32)
scalar_kernel.apply(x, y).shape
# ==> [5]

The above output is the result of vectorized computation of the five values

[k(x[0], y[0]), k(x[1], y[1]), ..., k(x[4], y[4])]

Now we can consider a kernel with batched parameters:

batch_kernel = tfp.math.psd_kernels.SomeKernel(param=[.2, .5])
batch_kernel.batch_shape
# ==> [2]
batch_kernel.apply(x, y).shape
# ==> Error! [2] and [5] can't broadcast.

The parameter batch shape of [2] and the input batch shape of [5] can't be broadcast together. We can fix this in either of two ways:

Fix #1

Give the parameter a shape of [2, 1] which will correctly broadcast with [5] to yield [2, 5]:

batch_kernel = tfp.math.psd_kernels.SomeKernel(
    param=[[.2], [.5]])
batch_kernel.batch_shape
# ==> [2, 1]
batch_kernel.apply(x, y).shape
# ==> [2, 5]
Fix #2

By specifying example_ndims, which tells the kernel to treat the 5 in the input shape as part of the "example shape", and "pushing" the kernel batch shape to the left:

batch_kernel = tfp.math.psd_kernels.SomeKernel(param=[.2, .5])
batch_kernel.batch_shape
# ==> [2]
batch_kernel.apply(x, y, example_ndims=1).shape
# ==> [2, 5]

batch_shape_tensor

View source

Shape of a single sample from a single event index as a 1-D Tensor.

The batch dimensions are indexes into independent, non-identical parameterizations of this PositiveSemidefiniteKernel.

Args
name Name to give to the op.

Returns
batch_shape Tensor.
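
A hedged sketch contrasting this method with the static batch_shape property (the batched slopes value is an illustrative assumption):

import numpy as np
import tensorflow_probability as tfp; tfp = tfp.substrates.jax
tfpk = tfp.math.psd_kernels

kernel = tfpk.ChangePoint(
    kernels=[tfpk.ExponentiatedQuadratic(), tfpk.MaternOneHalf()],
    locs=[0.],
    slopes=np.float32([[1.], [10.]]))  # a batch of two slope settings

kernel.batch_shape
# ==> [2]  (static, possibly partially defined)
kernel.batch_shape_tensor()
# ==> [2]  (1-D Tensor, evaluated at runtime)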

copy

View source

Creates a copy of the kernel.

Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.

Returns
copied_kernel A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs).
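
A hedged usage sketch (the parameter values are illustrative): copy keeps every constructor argument except the ones overridden.

import tensorflow_probability as tfp; tfp = tfp.substrates.jax
tfpk = tfp.math.psd_kernels

kernel = tfpk.ChangePoint(
    kernels=[tfpk.ExponentiatedQuadratic(), tfpk.MaternOneHalf()],
    locs=[0.], slopes=[1.])

# Same kernels and locs, but a sharper transition between them.
sharper = kernel.copy(slopes=[25.])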

matrix

View source

Construct (batched) matrices from (batches of) collections of inputs.

Args
x1 Tensor input to the first positional parameter of the kernel, of shape B1 + [e1] + F, where B1 may be empty (ie, no batch dims), e1 is a single integer (ie, x1 has example ndims exactly 1), and F (the feature shape) must have rank equal to the kernel's feature_ndims property. Batch shape must broadcast with the batch shape of x2 and with the kernel's batch shape.
x2 Tensor input to the second positional parameter of the kernel, of shape B2 + [e2] + F, where B2 may be empty (ie, no batch dims), e2 is a single integer (ie, x2 has example ndims exactly 1), and F (the feature shape) must have rank equal to the kernel's feature_ndims property. Batch shape must broadcast with the batch shape of x1 and with the kernel's batch shape.
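
A hedged usage sketch of matrix (the kernel choice and shapes are illustrative): it evaluates the kernel on every pair drawn from the two input collections.

import numpy as np
import tensorflow_probability as tfp; tfp = tfp.substrates.jax

kernel = tfp.math.psd_kernels.ExponentiatedQuadratic()
x1 = np.ones([4, 3], np.float32)  # four 3-D vectors
x2 = np.ones([6, 3], np.float32)  # six 3-D vectors
kernel.matrix(x1, x2).shape
# ==> [4, 6]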