Formal representation of a sparse linear regression.
Inherits From: StructuralTimeSeries
tfp.substrates.jax.sts.SparseLinearRegression(
    design_matrix, weights_prior_scale=0.1, weights_batch_shape=None, name=None
)
This model defines a time series given by a sparse linear combination of covariate time series provided in a design matrix:
observed_time_series = matmul(design_matrix, weights)
This is identical to `tfp.sts.LinearRegression`, except that `SparseLinearRegression` uses a parameterization of a Horseshoe prior [1][2] to encode the assumption that many of the weights are zero, i.e., that many of the covariate time series are irrelevant. See the mathematical details section below for further discussion. The prior parameterization used by `SparseLinearRegression` is more suitable for inference than that obtained by simply passing the equivalent `tfd.Horseshoe` prior to `LinearRegression`; when sparsity is desired, `SparseLinearRegression` will likely yield better results.
This component does not itself include observation noise; it defines a deterministic distribution with mass at the point `matmul(design_matrix, weights)`. In practice, it should be combined with observation noise from another component such as `tfp.sts.Sum`, as demonstrated below.
Examples
Given `series1` and `series2` as `Tensor`s, each of shape `[num_timesteps]`, representing covariate time series, we create a regression model that conditions on these covariates:
regression = tfp.sts.SparseLinearRegression(
    design_matrix=tf.stack([series1, series2], axis=-1),
    weights_prior_scale=0.1)
The `weights_prior_scale` determines the level of sparsity; small scales encourage the weights to be sparse. In some cases, such as when the likelihood is iid Gaussian with known scale, the prior scale can be analytically related to the expected number of nonzero weights [2]; however, this is not the case in general for STS models.
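For intuition, in the iid Gaussian case [2] suggests choosing the global scale as roughly `p0 / (D - p0) * sigma / sqrt(n)`, where `p0` is a prior guess at the number of nonzero weights, `D` the number of features, `sigma` the noise scale, and `n` the number of observations. A hedged sketch (the helper name and its direct use as `weights_prior_scale` are illustrative, not part of the TFP API):

def suggested_weights_prior_scale(p0, num_features, num_timesteps, sigma):
  # Heuristic from [2]; only a rough guide, and not exact for STS models.
  return p0 / (num_features - p0) * sigma / num_timesteps**0.5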
If the design matrix has batch dimensions, by default the model will create a matching batch of weights. For example, if `design_matrix.shape == [num_users, num_timesteps, num_features]`, by default the model will fit separate weights for each user, i.e., it will internally represent `weights.shape == [num_users, num_features]`. To share weights across some or all batch dimensions, you can manually specify the batch shape for the weights:
# design_matrix.shape == [num_users, num_timesteps, num_features]
regression = tfp.sts.SparseLinearRegression(
    design_matrix=design_matrix,
    weights_batch_shape=[])  # weights.shape -> [num_features]
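As noted above, this component defines a noise-free regression. A minimal sketch of combining it with observation noise and a trend via `tfp.sts.Sum` (assuming an `observed_time_series` Tensor is available):

model = tfp.sts.Sum(
    components=[regression, tfp.sts.LocalLinearTrend(name='trend')],
    observed_time_series=observed_time_series)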
Mathematical Details
The basic horseshoe prior [1] is defined as a Cauchy-normal scale mixture:
scales[i] ~ HalfCauchy(loc=0, scale=1)
weights[i] ~ Normal(loc=0., scale=scales[i] * global_scale)
The Cauchy scale priors put substantial mass near zero, encouraging weights to be sparse, but their heavy tails allow weights far from zero to be estimated without excessive shrinkage. The horseshoe can be thought of as a continuous relaxation of a traditional 'spike-and-slab' discrete sparsity prior, in which the latent Cauchy scale mixes between 'spike' (`scales[i] ~= 0`) and 'slab' (`scales[i] >> 0`) regimes.
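For concreteness, a minimal sketch sampling from this basic (centered) horseshoe with TFP-on-JAX distributions; `SparseLinearRegression` itself uses the compound, non-centered parameterization described next:

import jax
from tensorflow_probability.substrates import jax as tfp
tfd = tfp.distributions

key1, key2 = jax.random.split(jax.random.PRNGKey(0))
# Heavy-tailed local scales: most draws are tiny, a few are large.
scales = tfd.HalfCauchy(loc=0., scale=1.).sample([5], seed=key1)
# Conditionally normal weights; global_scale fixed at 0.1 for illustration.
weights = tfd.Normal(loc=0., scale=scales * 0.1).sample(seed=key2)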
Following the recommendations in [2], `SparseLinearRegression` implements a horseshoe with the following adaptations:

- The Cauchy prior on `scales[i]` is represented as an InverseGamma-Normal compound.
- The `global_scale` parameter is integrated out following a `Cauchy(0., scale=weights_prior_scale)` hyperprior, which is also represented as an InverseGamma-Normal compound.
- All compound distributions are implemented using a non-centered parameterization.
The compound, non-centered representation defines the same marginal prior as the original horseshoe (up to integrating out the global scale), but allows samplers to mix more efficiently through the heavy tails; for variational inference, the compound representation implicitly expands the representational power of the variational model.
Note that we do not yet implement the regularized ('Finnish') horseshoe, proposed in [2] for models with weak likelihoods, because the likelihood in STS models is typically Gaussian, where it's not clear that additional regularization is appropriate. If you need this functionality, please email tfprobability@tensorflow.org.
The full prior parameterization implemented in `SparseLinearRegression` is as follows:
# Sample global_scale from Cauchy(0, scale=weights_prior_scale).
global_scale_variance ~ InverseGamma(alpha=0.5, beta=0.5)
global_scale_noncentered ~ HalfNormal(loc=0, scale=1)
global_scale = (global_scale_noncentered *
                sqrt(global_scale_variance) *
                weights_prior_scale)

# Sample local_scales from Cauchy(0, 1).
local_scale_variances[i] ~ InverseGamma(alpha=0.5, beta=0.5)
local_scales_noncentered[i] ~ HalfNormal(loc=0, scale=1)
local_scales[i] = local_scales_noncentered[i] * sqrt(local_scale_variances[i])
weights[i] ~ Normal(loc=0., scale=local_scales[i] * global_scale)
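As a sanity check on the compound representation: if `v ~ InverseGamma(0.5, 0.5)` and `z ~ HalfNormal(1)`, then `z * sqrt(v)` has a `HalfCauchy(0, 1)` distribution, since `1/v` is chi-squared with one degree of freedom and a standard normal divided by an independent chi(1) variable is Cauchy. A quick simulation sketch:

import jax
import jax.numpy as jnp
from tensorflow_probability.substrates import jax as tfp
tfd = tfp.distributions

k1, k2 = jax.random.split(jax.random.PRNGKey(0))
v = tfd.InverseGamma(concentration=0.5, scale=0.5).sample(100000, seed=k1)
z = tfd.HalfNormal(scale=1.).sample(100000, seed=k2)
# The sample median should be close to 1, the median of HalfCauchy(0, 1).
print(jnp.median(z * jnp.sqrt(v)))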
References
[1]: Carvalho, C., Polson, N. and Scott, J. Handling Sparsity via the Horseshoe. AISTATS (2009). http://proceedings.mlr.press/v5/carvalho09a/carvalho09a.pdf

[2]: Juho Piironen and Aki Vehtari. Sparsity information and regularization in the horseshoe and other shrinkage priors (2017). https://arxiv.org/abs/1707.01694
Args | |
---|---|
`design_matrix` | float `Tensor` of shape `concat([batch_shape, [num_timesteps, num_features]])`. This may also optionally be an instance of `tf.linalg.LinearOperator`. |
`weights_prior_scale` | float `Tensor` defining the scale of the Horseshoe prior on regression weights. Small values encourage the weights to be sparse. The shape must broadcast with `weights_batch_shape`. Default value: `0.1`. |
`weights_batch_shape` | if `None`, defaults to `design_matrix.batch_shape_tensor()`. Must broadcast with the batch shape of `design_matrix`. Default value: `None`. |
`name` | the name of this model component. Default value: 'SparseLinearRegression'. |
Methods
batch_shape_tensor
batch_shape_tensor()
Runtime batch shape of models represented by this component.
Returns | |
---|---|
`batch_shape` | int `Tensor` giving the broadcast batch shape of all model parameters. This should match the batch shape of derived state space models, i.e., `self.make_state_space_model(...).batch_shape_tensor()`. |
copy
copy(
    **override_parameters_kwargs
)
Creates a deep copy.
Args | |
---|---|
`**override_parameters_kwargs` | String/value dictionary of initialization arguments to override with new values. |

Returns | |
---|---|
`copy` | A new instance of `type(self)` initialized from the union of `self.init_parameters` and `override_parameters_kwargs`, i.e., `dict(self.init_parameters, **override_parameters_kwargs)`. |
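For example, a sketch reusing the `regression` component from the Examples above with a tighter sparsity prior:

sparser_regression = regression.copy(weights_prior_scale=0.01)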
get_parameter
get_parameter(
    parameter_name
)
Returns the parameter with the given name, or raises a `KeyError` if no such parameter exists.
joint_distribution
joint_distribution(
    observed_time_series=None,
    num_timesteps=None,
    trajectories_shape=(),
    initial_step=0,
    mask=None,
    experimental_parallelize=False
)
Constructs the joint distribution over parameters and observed values.
Args | |
---|---|
`observed_time_series` | Optional observed time series to model, as a `Tensor` or `tfp.sts.MaskedTimeSeries` instance having shape `concat([batch_shape, trajectories_shape, [num_timesteps, 1]])`. If an observed time series is provided, the `num_timesteps`, `trajectories_shape`, and `mask` arguments are ignored, and an unnormalized (pinned) distribution over parameter values is returned. Default value: `None`. |
`num_timesteps` | scalar int `Tensor` number of timesteps to model. This must be specified either directly or by passing an `observed_time_series`. Default value: `None`. |
`trajectories_shape` | int `Tensor` shape of sampled trajectories for each set of parameter values. Ignored if an `observed_time_series` is passed. Default value: `()`. |
`initial_step` | Optional scalar int `Tensor` specifying the starting timestep. Default value: `0`. |
`mask` | Optional bool `Tensor` having shape `concat([batch_shape, trajectories_shape, [num_timesteps]])`, in which `True` entries indicate that the series value at the corresponding step is missing and should be ignored. This argument should be passed only if `observed_time_series` is not specified or does not already contain a missingness mask; it is an error to pass both this argument and an `observed_time_series` value containing a missingness mask. Default value: `None`. |
`experimental_parallelize` | If `True`, use parallel message passing algorithms from `tfp.experimental.parallel_filter` to perform time series operations in `O(log num_timesteps)` sequential steps. The overall FLOP and memory cost may be larger than for the sequential implementations by a constant factor. Default value: `False`. |
Returns | |
---|---|
`joint_distribution` | joint distribution of model parameters and observed trajectories. If no `observed_time_series` was specified, this is an instance of `tfd.JointDistributionNamedAutoBatched` with a random variable for each model parameter (with names and order matching `self.parameters`), plus a final random variable `observed_time_series` representing a trajectory(ies) conditioned on the parameters. If `observed_time_series` was specified, the return value is given by `joint_distribution.experimental_pin(observed_time_series=observed_time_series)` where `joint_distribution` is as just described, so it defines an unnormalized posterior distribution over the parameters. |
Example:
The joint distribution can generate prior samples of parameters and trajectories:
from matplotlib import pylab as plt
import jax
import tensorflow_probability as tfp; tfp = tfp.substrates.jax

# Sample and plot 100 trajectories from the prior.
model = tfp.sts.LocalLinearTrend()
prior_samples = model.joint_distribution(num_timesteps=200).sample(
    [100], seed=jax.random.PRNGKey(0))
plt.plot(prior_samples['observed_time_series'][..., 0].T)
It also integrates with TFP inference APIs, providing a more flexible alternative to the STS-specific fitting utilities.
jd = model.joint_distribution(observed_time_series)

# Variational inference.
surrogate_posterior = (
    tfp.experimental.vi.build_factored_surrogate_posterior(
        event_shape=jd.event_shape,
        bijector=jd.experimental_default_event_space_bijector()))
losses = tfp.vi.fit_surrogate_posterior(
    target_log_prob_fn=jd.unnormalized_log_prob,
    surrogate_posterior=surrogate_posterior,
    optimizer=tf.optimizers.Adam(0.1),
    num_steps=200)
parameter_samples = surrogate_posterior.sample(50)

# No U-Turn Sampler.
samples, kernel_results = tfp.experimental.mcmc.windowed_adaptive_nuts(
    n_draws=500, joint_dist=jd)
joint_log_prob
joint_log_prob(
    observed_time_series
)
Build the joint density `log p(params) + log p(y|params)` as a callable.
Args | |
---|---|
`observed_time_series` | Observed `Tensor` trajectories of shape `sample_shape + batch_shape + [num_timesteps, 1]` (the trailing `1` dimension is optional if `num_timesteps > 1`), where `batch_shape` should match `self.batch_shape` (the broadcast batch shape of all priors on parameters for this structural time series model). Any `NaN`s are interpreted as missing observations; missingness may also be explicitly specified by passing a `tfp.sts.MaskedTimeSeries` instance. |
Returns | |
---|---|
`log_joint_fn` | A function taking a `Tensor` argument for each model parameter, in canonical order, and returning a `Tensor` log probability of shape `batch_shape`. Note that, unlike `tfp.Distribution`s' `log_prob` methods, the `log_joint` sums over the `sample_shape` from y, so that `sample_shape` does not appear in the output log_prob. This corresponds to viewing multiple samples in y as iid observations from a single model, which is typically the desired behavior for parameter inference. |
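For example, a sketch evaluating the joint density at a parameter sample drawn from the prior (assuming `model` and a matching 200-step `observed_time_series`, as in the examples above):

import jax

log_joint_fn = model.joint_log_prob(observed_time_series)
_, param_samples = model.prior_sample(
    num_timesteps=200, seed=jax.random.PRNGKey(0))
lp = log_joint_fn(*param_samples)  # log p(params) + log p(y | params)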
make_state_space_model
make_state_space_model(
    num_timesteps,
    param_vals,
    initial_state_prior=None,
    initial_step=0,
    **linear_gaussian_ssm_kwargs
)
Instantiate this model as a Distribution over the specified `num_timesteps`.
Args | |
---|---|
`num_timesteps` | Python int number of timesteps to model. |
`param_vals` | a list of `Tensor` parameter values in order corresponding to `self.parameters`, or a dict mapping from parameter names to values. |
`initial_state_prior` | an optional `Distribution` instance overriding the default prior on the model's initial state. This is used in forecasting ("today's prior is yesterday's posterior"). |
`initial_step` | optional int specifying the initial timestep to model. This is relevant when the model contains time-varying components, e.g., holidays or seasonality. |
`**linear_gaussian_ssm_kwargs` | Optional additional keyword arguments to the base `tfd.LinearGaussianStateSpaceModel` constructor. |
Returns | |
---|---|
`dist` | a `LinearGaussianStateSpaceModel` Distribution object. |
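A sketch of typical usage (assuming `model` as above), building the state space model at parameter values drawn from their priors:

import jax

keys = jax.random.split(jax.random.PRNGKey(0), len(model.parameters))
# Each entry of `model.parameters` carries a `prior` distribution.
param_samples = [param.prior.sample(seed=key)
                 for param, key in zip(model.parameters, keys)]
ssm = model.make_state_space_model(num_timesteps=200, param_vals=param_samples)
trajectory = ssm.sample(seed=jax.random.PRNGKey(1))  # shape: [200, 1]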
params_to_weights
params_to_weights(
    global_scale_variance,
    global_scale_noncentered,
    local_scale_variances,
    local_scales_noncentered,
    weights_noncentered
)
Build regression weights from model parameters.
prior_sample
prior_sample(
    num_timesteps,
    initial_step=0,
    params_sample_shape=(),
    trajectories_sample_shape=(),
    seed=None
)
Sample from the joint prior over model parameters and trajectories.
Args | |
---|---|
`num_timesteps` | Scalar int `Tensor` number of timesteps to model. |
`initial_step` | Optional scalar int `Tensor` specifying the starting timestep. Default value: `0`. |
`params_sample_shape` | Number of possible worlds to sample iid from the parameter prior, or more generally, `Tensor` int shape to fill with iid samples. Default value: `[]` (i.e., draw a single sample and don't expand the shape). |
`trajectories_sample_shape` | For each sampled set of parameters, number of trajectories to sample, or more generally, `Tensor` int shape to fill with iid samples. Default value: `[]` (i.e., draw a single sample and don't expand the shape). |
`seed` | PRNG seed; see `tfp.random.sanitize_seed` for details. Default value: `None`. |
Returns | |
---|---|
`trajectories` | float `Tensor` of shape `trajectories_sample_shape + params_sample_shape + [num_timesteps, 1]` containing all sampled trajectories. |
`param_samples` | list of sampled parameter value `Tensor`s, in order corresponding to `self.parameters`, each of shape `params_sample_shape + prior.batch_shape + prior.event_shape`. |
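For example, a sketch drawing 10 parameter settings with 5 trajectories each (assuming `model` as above):

import jax

trajectories, param_samples = model.prior_sample(
    num_timesteps=200,
    params_sample_shape=[10],
    trajectories_sample_shape=[5],
    seed=jax.random.PRNGKey(0))
# trajectories.shape == [5, 10, 200, 1]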
__add__
__add__(
    other
)
Models the sum of the series from the two components.