Runs Sequential Monte Carlo to sample from the posterior distribution.
tfp.experimental.mcmc.sample_sequential_monte_carlo(
prior_log_prob_fn, likelihood_log_prob_fn, current_state, min_num_steps=2,
max_num_steps=25, max_stage=100,
make_kernel_fn=tfp.experimental.mcmc.make_rwmh_kernel_fn,
tuning_fn=tfp.experimental.mcmc.simple_heuristic_tuning,
make_tempered_target_log_prob_fn=default_make_tempered_target_log_prob_fn,
resample_fn=tfp.experimental.mcmc.resample_systematic, ess_threshold_ratio=0.5,
parallel_iterations=10, seed=None, name=None
)
This function uses an MCMC transition operator (e.g., Hamiltonian Monte Carlo) to sample from a series of distributions that slowly interpolates between an initial 'prior' distribution:

exp(prior_log_prob_fn(x))

and the target 'posterior' distribution:

exp(prior_log_prob_fn(x) + likelihood_log_prob_fn(x))

by mutating a collection of MC samples (i.e., particles). The approach is also known as a Particle Filter in some literature. The current implementation is largely based on Del Moral et al. [1], which adapts both the tempering sequence (based on the effective sample size) and the scaling of the mutation kernel (based on the sample covariance of the particles) at each stage.
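For example, a minimal end-to-end sketch on a toy normal-normal model; the model, variable names, and seed below are illustrative, not part of the API:

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Toy conjugate model: standard-normal prior, normal likelihood around a
# single observed value.
observed = 1.5

def prior_log_prob_fn(x):
  return tfd.Normal(loc=0., scale=1.).log_prob(x)

def likelihood_log_prob_fn(x):
  return tfd.Normal(loc=x, scale=0.5).log_prob(observed)

# Initialize the particles with draws from the prior.
num_particles = 1000
init_state = tfd.Normal(loc=0., scale=1.).sample(num_particles, seed=42)

n_stage, final_state, final_kernel_results = (
    tfp.experimental.mcmc.sample_sequential_monte_carlo(
        prior_log_prob_fn,
        likelihood_log_prob_fn,
        init_state,
        seed=42))
# final_state now holds (approximate) posterior samples.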
Args  

prior_log_prob_fn

Python callable which takes an argument like
current_state (or *current_state if it's a list) and returns its
(possibly unnormalized) log density under the prior distribution.
likelihood_log_prob_fn

Python callable which takes an argument like
current_state (or *current_state if it's a list) and returns its
(possibly unnormalized) log density under the likelihood distribution.

current_state

Nested structure of Tensors, each of shape
concat([[num_particles, b1, ..., bN], latent_part_event_shape]), where
b1, ..., bN are optional batch dimensions. Each batch represents an
independent SMC run.

min_num_steps

The minimum number of kernel transition steps in one mutation of the MC samples.
max_num_steps

The maximum number of kernel transition steps in one mutation of the MC samples. Note that the actual number of steps in one mutation is tuned during sampling and is likely lower than max_num_steps.
max_stage

Integer maximum number of stages over which the inverse temperature is increased from 0 to 1.
make_kernel_fn

Python callable which returns a TransitionKernel-like
object. Must take one argument representing the TransitionKernel's
target_log_prob_fn. Note: sample_sequential_monte_carlo creates a new
tempered target_log_prob_fn at each stage, which is an interpolation
between the supplied prior_log_prob_fn and likelihood_log_prob_fn; it is
this interpolated function which is passed to make_kernel_fn (see the
sketch below).
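For instance, assuming tfp.experimental.mcmc.gen_make_hmc_kernel_fn (a generator of HMC-based make_kernel_fn callables exported alongside this function) is available, swapping out the default random-walk kernel might look like this sketch, reusing the callables from the first example:

# Sketch: mutate particles with HMC instead of the default RWMH kernel.
hmc_results = tfp.experimental.mcmc.sample_sequential_monte_carlo(
    prior_log_prob_fn,
    likelihood_log_prob_fn,
    init_state,
    make_kernel_fn=tfp.experimental.mcmc.gen_make_hmc_kernel_fn(
        num_leapfrog_steps=10))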

tuning_fn

Python callable which takes the number of steps, the log
scaling, and the log acceptance ratio from the last mutation, and outputs
the number of steps and log scaling for the next mutation.
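The default, simple_heuristic_tuning, adapts both quantities; a minimal conforming tuning_fn that performs no adaptation could look like the following sketch (written against the contract above, not a shipped helper):

# Sketch of a no-op tuning_fn: keep the step count and proposal scaling
# fixed across stages, ignoring the observed acceptance ratio.
def no_op_tuning_fn(num_steps, log_scalings, log_accept_prob):
  del log_accept_prob  # Unused: no adaptation.
  return num_steps, log_scalings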

make_tempered_target_log_prob_fn

Python callable that takes the
prior_log_prob_fn, likelihood_log_prob_fn, and inverse_temperatures
and creates a target_log_prob_fn callable that is passed to
make_kernel_fn (see the sketch below).
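A sketch of such a factory, assuming the usual geometric tempering bridge (the shipped default_make_tempered_target_log_prob_fn should behave similarly, but consult the source for details):

def my_make_tempered_target_log_prob_fn(
    prior_log_prob_fn, likelihood_log_prob_fn, inverse_temperatures):
  def tempered_target_log_prob_fn(*x):
    # Interpolates from the prior (inverse_temperatures == 0) to the
    # posterior (inverse_temperatures == 1).
    return (prior_log_prob_fn(*x)
            + inverse_temperatures * likelihood_log_prob_fn(*x))
  return tempered_target_log_prob_fn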

resample_fn

Python callable to generate the indices of resampled
particles, given their weights. Generally, one of
tfp.experimental.mcmc.resample_independent or
tfp.experimental.mcmc.resample_systematic, or any function
with the same signature.
Default value: tfp.experimental.mcmc.resample_systematic.
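For example, swapping in the other shipped resampler (reusing the callables from the first sketch):

# Sketch: multinomial-style resampling instead of the systematic default.
smc_results = tfp.experimental.mcmc.sample_sequential_monte_carlo(
    prior_log_prob_fn,
    likelihood_log_prob_fn,
    init_state,
    resample_fn=tfp.experimental.mcmc.resample_independent)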

ess_threshold_ratio

Target ratio of the effective sample size to the number of particles; the inverse-temperature increments are chosen adaptively so that the effective sample size stays near this ratio. Default value: 0.5.
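For reference, the effective sample size of normalized importance weights w_i is 1 / sum_i w_i**2; a log-space sketch (illustrative, not the library's internal helper):

import tensorflow as tf

# ESS = 1 / sum(w_i**2) for normalized weights, computed stably in log
# space. Adaptation targets ESS ~= ess_threshold_ratio * num_particles.
def effective_sample_size(log_weights):
  log_w = tf.math.log_softmax(log_weights)
  return tf.exp(-tf.reduce_logsumexp(2. * log_w))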
parallel_iterations

The number of iterations allowed to run in parallel.
It must be a positive integer. See tf.while_loop for more details.

seed

Python integer or TFP SeedStream to seed the random number generator.
name

Python str name prefixed to Ops created by this function.
Default value: None (i.e., 'sample_sequential_monte_carlo').

Returns  

n_stage

Number of mutation stages that SMC ran.
final_state

Tensor or Python list of Tensors representing the
final state(s) of the Markov chain(s). These are the posterior
samples.

final_kernel_results

collections.namedtuple of internal calculations used
to advance the chain.

References
[1] Del Moral, Pierre, Arnaud Doucet, and Ajay Jasra. An adaptive sequential Monte Carlo method for approximate Bayesian computation. Statistics and Computing, 22(5):1009-1020, 2012.