Formal representation of a linear regression from provided covariates.
This model defines a time series given by a linear combination of covariate time series provided in a design matrix:
```python
observed_time_series = matmul(design_matrix, weights)
```
The design matrix has shape `[num_timesteps, num_features]`. The weights are treated as an unknown random variable of size `[num_features]` (both components also support batch shape), and are integrated over using the same approximate inference tools as other model parameters, i.e., generally HMC or variational inference.
This component does not itself include observation noise; it defines a deterministic distribution with mass at the point `matmul(design_matrix, weights)`. In practice, it should be combined with observation noise from another component such as `tfp.sts.Sum`, as demonstrated below.
Given `series1` and `series2`, `Tensor`s each of shape `[num_timesteps]` representing covariate time series, we create a regression model that conditions on these covariates:
```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

regression = tfp.sts.LinearRegression(
    design_matrix=tf.stack([series1, series2], axis=-1),
    weights_prior=tfd.Normal(loc=0., scale=1.))
```
Here we've also demonstrated specifying a custom prior, using an informative `Normal(0., 1.)` prior instead of the default weakly-informative prior.
As a more advanced application, we might use the design matrix to encode holiday effects. For example, suppose we are modeling data from the month of December. We can combine day-of-week seasonality with special effects for Christmas Eve (Dec 24), Christmas (Dec 25), and New Year's Eve (Dec 31), by constructing a design matrix with indicators for those dates.
```python
import numpy as np

holiday_indicators = np.zeros([31, 3])
holiday_indicators[23, 0] = 1  # Christmas Eve (Dec 24)
holiday_indicators[24, 1] = 1  # Christmas Day (Dec 25)
holiday_indicators[30, 2] = 1  # New Year's Eve (Dec 31)

holidays = tfp.sts.LinearRegression(design_matrix=holiday_indicators,
                                    name='holidays')
day_of_week = tfp.sts.Seasonal(num_seasons=7,
                               observed_time_series=observed_time_series,
                               name='day_of_week')
model = tfp.sts.Sum(components=[holidays, day_of_week],
                    observed_time_series=observed_time_series)
```
Note that the `Sum` component in the above model also incorporates observation noise, with prior scale heuristically inferred from `observed_time_series`.
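Once assembled, the model can be fit with the same inference machinery as any other structural time series model. As a minimal sketch (assuming the `model` and `observed_time_series` from the example above), one option is HMC via `tfp.sts.fit_with_hmc`:

```python
# Sketch: draw posterior samples of all model parameters, including the
# holiday regression weights, using the default HMC configuration.
samples, kernel_results = tfp.sts.fit_with_hmc(
    model, observed_time_series, num_results=100)
```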
In these examples, we've used a single design matrix, but batching is also supported. If the design matrix has batch shape, the default behavior constructs weights with matching batch shape, which will fit a separate regression for each design matrix. This can be overridden by passing an explicit weights prior with appropriate batch shape. For example, if each design matrix in a batch contains features with the same semantics (e.g., if they represent per-group or per-observation covariates), we might choose to share statistical strength by fitting a single weight vector that broadcasts across all design matrices:
```python
design_matrix = get_batch_of_inputs()
design_matrix.shape  # => concat([batch_shape, [num_timesteps, num_features]])

# Construct a prior with batch shape `[]` and event shape `[num_features]`,
# so that it describes a single vector of weights.
weights_prior = tfd.Independent(
    tfd.StudentT(df=5,
                 loc=tf.zeros([num_features]),
                 scale=tf.ones([num_features])),
    reinterpreted_batch_ndims=1)
linear_regression = tfp.sts.LinearRegression(design_matrix=design_matrix,
                                             weights_prior=weights_prior)
```
```python
__init__(
    design_matrix,
    weights_prior=None,
    name=None
)
```
Specify a linear regression model.
The statistical behavior of the regression is determined by the batch shape of the weights prior:

* `weights_prior.batch_shape == []`: shares a single set of weights across all design matrices and observed time series. This may make sense if the features in each design matrix have the same semantics (e.g., grouping observations by country, with per-country design matrices capturing the same set of national economic indicators per country).
* `weights_prior.batch_shape == design_matrix.batch_shape`: fits separate weights for each design matrix. If there are multiple observed time series for each design matrix, this shares statistical strength over those observations.
* `weights_prior.batch_shape == observed_time_series.batch_shape`: fits a separate regression for each individual time series.
When modeling batches of time series, you should think carefully about which behavior makes sense, and specify a weights prior with the appropriate batch shape; the defaults may not do what you want! The sketch below shows one way to request the second behavior explicitly.
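As a minimal sketch (assuming a batched `design_matrix` whose batch shape is available as a Python list `batch_shape`, with `num_features` known), a prior with one weight vector per design matrix might look like:

```python
# Hypothetical: a prior whose batch shape matches the design matrix's batch
# shape, so that each design matrix gets its own vector of weights.
weights_prior = tfd.Independent(
    tfd.StudentT(df=5,
                 loc=tf.zeros(batch_shape + [num_features]),
                 scale=10. * tf.ones(batch_shape + [num_features])),
    reinterpreted_batch_ndims=1)
regression = tfp.sts.LinearRegression(design_matrix=design_matrix,
                                      weights_prior=weights_prior)
```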
design_matrix: float `Tensor` of shape `concat([batch_shape, [num_timesteps, num_features]])`. This may also optionally be an instance of `tf.linalg.LinearOperator`.
weights_prior: `tfd.Distribution` representing a prior over the regression weights. Must have event shape `[num_features]` and batch shape broadcastable to the design matrix's batch shape. Alternately, `event_shape` may be scalar (`[]`), in which case the prior is internally broadcast as `TransformedDistribution(weights_prior, tfb.Identity(), event_shape=[num_features], batch_shape=design_matrix.batch_shape)`. If `None`, defaults to `StudentT(df=5, loc=0., scale=10.)`, a weakly-informative prior loosely inspired by the Stan prior choice recommendations. Default value: `None`.
name: the name of this model component. Default value: 'LinearRegression'.
`batch_shape`: Static batch shape of models represented by this component. A `tf.TensorShape` giving the broadcast batch shape of all model parameters. This should match the batch shape of derived state space models, i.e., `self.make_state_space_model(...).batch_shape`. It may be partially defined or unknown.
`design_matrix`: `LinearOperator` representing the design matrix.
`latent_size`: Python `int` dimensionality of the latent space in this model.
`name`: Name of this model component.
`parameters`: List of `Parameter(name, prior, bijector)` namedtuples for this model.
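For example (a sketch; the exact parameter list depends on the component), a `LinearRegression` component typically exposes a single `weights` parameter, which can be inspected as:

```python
for param in regression.parameters:
  print(param.name, param.prior, param.bijector)
```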
`batch_shape_tensor()`: Runtime batch shape of models represented by this component. Returns an integer `Tensor` giving the broadcast batch shape of all model parameters. This should match the batch shape of derived state space models, i.e., `self.make_state_space_model(...).batch_shape_tensor()`.
`joint_log_prob(observed_time_series)`: Build the joint density `log p(params) + log p(y|params)` as a callable.
observed_time_series: Observed `Tensor` trajectories of shape `sample_shape + batch_shape + [num_timesteps, 1]` (the trailing `1` dimension is optional if `num_timesteps > 1`), where `batch_shape` should match `self.batch_shape` (the broadcast batch shape of all priors on parameters for this structural time series model). May optionally be an instance of `tfp.sts.MaskedTimeSeries`, which includes a mask `Tensor` to specify timesteps with missing observations.
log_joint_fn: A function taking a `Tensor` argument for each model parameter, in canonical order, and returning a `Tensor` log probability of shape `batch_shape`. Note that, unlike the `log_prob` methods of `tfp.distributions.Distribution`s, `log_joint_fn` sums over the `sample_shape` from `y`, so that `sample_shape` does not appear in the output log_prob. This corresponds to viewing multiple samples in `y` as iid observations from a single model, which is typically the desired behavior for parameter inference. A usage sketch follows.
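As a minimal usage sketch (assuming a `model` built from this component and an `observed_time_series` `Tensor`), the returned callable can be evaluated at a draw from each parameter's prior:

```python
log_joint_fn = model.joint_log_prob(observed_time_series)

# One draw from each parameter's prior, in canonical order.
param_draws = [param.prior.sample() for param in model.parameters]

# Joint log density log p(params) + log p(y | params), of shape `batch_shape`.
lp = log_joint_fn(*param_draws)
```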
```python
make_state_space_model(
    num_timesteps,
    param_vals=None,
    initial_state_prior=None,
    initial_step=0
)
```
Instantiate this model as a Distribution over `num_timesteps` timesteps.

num_timesteps: Python `int` number of timesteps to model.
param_vals: a list of `Tensor` parameter values in order corresponding to `self.parameters`, or a dict mapping from parameter names to values.
initial_state_prior: an optional `Distribution` instance overriding the default prior on the model's initial state. This is used in forecasting ("today's prior is yesterday's posterior").
initial_step: optional Python `int` specifying the initial timestep to model. This is relevant when the model contains time-varying components, e.g., holidays or seasonality.
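A minimal sketch of typical usage, evaluating the state space model at parameter values drawn from the priors (the `model` here is assumed from the earlier examples):

```python
ssm = model.make_state_space_model(
    num_timesteps=200,
    param_vals=[param.prior.sample() for param in model.parameters])

# The result is a distribution over time series: sample a trajectory
# and score it under the model.
trajectory = ssm.sample()
lp = ssm.log_prob(trajectory)
```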
```python
prior_sample(
    num_timesteps,
    initial_step=0,
    params_sample_shape=(),
    trajectories_sample_shape=(),
    seed=None
)
```
Sample from the joint prior over model parameters and trajectories.
num_timesteps: Scalar `int` `Tensor` number of timesteps to model.
initial_step: Optional scalar `int` `Tensor` specifying the starting timestep. Default value: 0.
params_sample_shape: Number of possible worlds to sample iid from the parameter prior, or more generally, `int` `Tensor` shape to fill with iid samples. Default value: `()`.
trajectories_sample_shape: For each sampled set of parameters, number of trajectories to sample, or more generally, `int` `Tensor` shape to fill with iid samples. Default value: `()`.
seed: Python `int` seed for the random number generator. Default value: None.

trajectories: float `Tensor` of shape `trajectories_sample_shape + params_sample_shape + [num_timesteps, 1]` containing all sampled trajectories.
param_samples: list of sampled parameter value `Tensor`s, in order corresponding to `self.parameters`, each of shape `params_sample_shape + prior.batch_shape + prior.event_shape`.
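For example, a minimal sketch (shapes chosen arbitrarily; `model` assumed from context):

```python
trajectories, param_samples = model.prior_sample(
    num_timesteps=31,
    params_sample_shape=[10],       # ten iid draws from the parameter prior
    trajectories_sample_shape=[5])  # five trajectories per parameter draw

# trajectories.shape => [5, 10, 31, 1]
```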