Decompose an observed time series into contributions from each component.

This method decomposes a time series according to the posterior representation of a structural time series model. In particular, it:

  • Computes the posterior marginal mean and covariances over the additive model's latent space.
  • Decomposes the latent posterior into the marginal blocks for each model component.
  • Maps the per-component latent posteriors back through each component's observation model, to generate the time series modeled by that component.
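The steps above can be sketched in plain NumPy for a toy two-component model (all names, dimensions, and observation matrices below are illustrative, not part of the TFP API):

```python
import numpy as np

# Toy joint latent posterior over the additive model's latent space:
# component A occupies latent dims 0:2, component B occupies dims 2:5
# (sizes chosen arbitrarily for illustration).
latent_mean = np.array([0., 1., 2., 3., 4.])
latent_cov = np.eye(5)

blocks = {'A': slice(0, 2), 'B': slice(2, 5)}
# Hypothetical per-component observation matrices H mapping each
# component's latent state to its contribution at one timestep.
obs_matrix = {'A': np.array([[1., 0.]]),
              'B': np.array([[1., 0., 0.]])}

component_posteriors = {}
for name, sl in blocks.items():
    mean_block = latent_mean[sl]      # marginal mean for this component
    cov_block = latent_cov[sl, sl]    # marginal covariance block
    # Map the marginal through the observation model:
    # mean H m, covariance H S H^T.
    component_posteriors[name] = (
        obs_matrix[name] @ mean_block,
        obs_matrix[name] @ cov_block @ obs_matrix[name].T)
```

In the real method this is done jointly across all timesteps, so each component's posterior has event shape `[num_timesteps]`.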

Args:
  model: An instance of tfp.sts.Sum representing a structural time series model.
  observed_time_series: float Tensor of shape batch_shape + [num_timesteps, 1] (omitting the trailing unit dimension is also supported when num_timesteps > 1), specifying an observed time series. May optionally be an instance of tfp.sts.MaskedTimeSeries, which includes a mask Tensor to specify timesteps with missing observations.
  parameter_samples: Python list of Tensors representing posterior samples of model parameters, with shapes [concat([[num_posterior_draws], param.prior.batch_shape, param.prior.event_shape]) for param in model.parameters]. This may optionally also be a map (Python dict) of parameter names to Tensor values.

Returns:
  component_dists: A collections.OrderedDict instance mapping component StructuralTimeSeries instances (elements of model.components) to tfd.Distribution instances representing the posterior marginal distributions on the process modeled by each component. Each distribution has batch shape matching that of the posterior latent means and covariances, and event shape of [num_timesteps].
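As an illustration of the parameter_samples shape convention described above, consider a hypothetical model with two parameters, a scalar noise scale (prior event shape []) and a length-2 weight vector (prior event shape [2]), both with empty batch shape; the parameter names below are illustrative only:

```python
import numpy as np

num_posterior_draws = 100
# Each entry has a leading num_posterior_draws dimension, followed by the
# parameter prior's batch and event shapes (both empty/simple here).
noise_scale_samples = np.random.rand(num_posterior_draws)      # shape [100]
weights_samples = np.random.rand(num_posterior_draws, 2)       # shape [100, 2]

# Positional list form, ordered to match model.parameters:
parameter_samples = [noise_scale_samples, weights_samples]
# Equivalent dict form, keyed by parameter name:
parameter_samples_dict = {'noise_scale': noise_scale_samples,
                          'weights': weights_samples}
```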


Suppose we've built a model and fit it to data:

  day_of_week = tfp.sts.Seasonal(
      num_seasons=7,
      observed_time_series=observed_time_series,
      name='day_of_week')
  local_linear_trend = tfp.sts.LocalLinearTrend(
      observed_time_series=observed_time_series,
      name='local_linear_trend')
  model = tfp.sts.Sum(components=[day_of_week, local_linear_trend],
                      observed_time_series=observed_time_series)

  num_steps_forecast = 50
  samples, kernel_results = tfp.sts.fit_with_hmc(model, observed_time_series)

To extract the contributions of individual components, pass the time series and sampled parameters into decompose_by_component:

  component_dists = tfp.sts.decompose_by_component(
      model,
      observed_time_series=observed_time_series,
      parameter_samples=samples)

  # Component mean and stddev have shape `[len(observed_time_series)]`.
  day_of_week_effect_mean = component_dists[day_of_week].mean()
  day_of_week_effect_stddev = component_dists[day_of_week].stddev()

Using the component distributions, we can visualize the uncertainty for each component:

import numpy as np
from matplotlib import pylab as plt

num_components = len(component_dists)
xs = np.arange(len(observed_time_series))
fig = plt.figure(figsize=(12, 3 * num_components))
for i, (component, component_dist) in enumerate(component_dists.items()):

  # If in graph mode, replace `.numpy()` with `.eval()` or `sess.run()`.
  component_mean = component_dist.mean().numpy()
  component_stddev = component_dist.stddev().numpy()

  # Plot the component's posterior mean with a band of +/- two stddevs.
  ax = fig.add_subplot(num_components, 1, 1 + i)
  ax.plot(xs, component_mean, lw=2)
  ax.fill_between(xs,
                  component_mean - 2 * component_stddev,
                  component_mean + 2 * component_stddev,
                  alpha=0.5)
  ax.set_title(component.name)