Computes the requested reductions over the kernel's samples.
tfp.experimental.mcmc.sample_fold(
num_steps,
current_state,
previous_kernel_results=None,
kernel=None,
reducer=None,
previous_reducer_state=None,
return_final_reducer_states=False,
num_burnin_steps=0,
num_steps_between_results=0,
parallel_iterations=10,
seed=None,
name=None
)
To wit, runs the given kernel for num_steps steps, and consumes the stream of samples with the given Reducers' one_step method(s). This runs in constant memory (unless a given Reducer builds a large structure).
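For example, here is a minimal usage sketch. The standard-normal target, HamiltonianMonteCarlo kernel, and VarianceReducer are illustrative choices, not requirements of this API; the sketch streams 500 samples through the reducer and unpacks the default three return values.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Illustrative target and kernel; any tfp.mcmc.TransitionKernel works here.
kernel = tfp.mcmc.HamiltonianMonteCarlo(
    target_log_prob_fn=tfd.Normal(loc=0., scale=1.).log_prob,
    step_size=0.5,
    num_leapfrog_steps=2)

# A single streaming reducer; a nested structure of Reducers is also allowed.
reducer = tfp.experimental.mcmc.VarianceReducer()

# By default, sample_fold returns
# (reduction_results, end_state, final_kernel_results).
variance, final_state, final_kernel_results = tfp.experimental.mcmc.sample_fold(
    num_steps=500,
    current_state=0.,
    kernel=kernel,
    reducer=reducer,
    seed=(0, 1))  # any seed accepted by tfp.random.sanitize_seed
```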
The driver internally composes the correct onion of WithReductions and SampleDiscardingKernel to implement the requested, optionally thinned, reduction; however, the kernel results of those applied Transition Kernels will not be returned. Hence, if warm-restarting reductions is desired, one should manually build the Transition Kernel onion and use tfp.experimental.mcmc.step_kernel.
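A hedged sketch of that manual composition follows, reusing the kernel and reducer from the example above. The composition order (WithReductions outermost, SampleDiscardingKernel inside) mirrors what this paragraph describes sample_fold building; the streaming_calculations field name on WithReductions' kernel results is an assumption to verify against your TFP version.

```python
# Build the onion by hand so the WithReductions kernel results (which carry
# the reducer's running state) are returned and can be warm-restarted.
thinned = tfp.experimental.mcmc.SampleDiscardingKernel(
    inner_kernel=kernel,
    num_burnin_steps=100,
    num_steps_between_results=1)
reduced = tfp.experimental.mcmc.WithReductions(
    inner_kernel=thinned,
    reducer=reducer)

state, kr = tfp.experimental.mcmc.step_kernel(
    num_steps=200,
    current_state=0.,
    kernel=reduced,
    return_final_kernel_results=True,
    seed=(2, 3))

# Warm restart: feed the returned kernel results back in, so the streaming
# reduction picks up where it left off.
state, kr = tfp.experimental.mcmc.step_kernel(
    num_steps=200,
    current_state=state,
    previous_kernel_results=kr,
    kernel=reduced,
    return_final_kernel_results=True,
    seed=(4, 5))

# Finalize the statistic from the reducer's running state
# ('streaming_calculations' is the assumed field name).
variance = reducer.finalize(kr.streaming_calculations)
```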
An arbitrary collection of Reducers can be provided, and the resulting finalized statistic(s) will be returned in an identical structure.
This function can sample from and reduce over multiple chains, in parallel. Whether or not there are multiple chains is dictated by how the kernel treats its inputs. Typically, the shape of the independent chains is the shape of the result of the target_log_prob_fn used by the kernel when applied to the given current_state.
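As a hedged illustration of batched chains (the random-walk kernel and the four-chain state are assumptions for this sketch): because the scalar-event target's log_prob maps a [4]-shaped state to [4] log-probabilities, the kernel advances four independent chains and the finalized statistic carries the same leading chain dimension.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

num_chains = 4
rwm = tfp.mcmc.RandomWalkMetropolis(
    target_log_prob_fn=tfd.Normal(loc=0., scale=1.).log_prob)

# One scalar state per chain; log_prob broadcasts over the leading axis.
per_chain_variance, end_states, _ = tfp.experimental.mcmc.sample_fold(
    num_steps=1000,
    current_state=tf.zeros([num_chains]),
    kernel=rwm,
    reducer=tfp.experimental.mcmc.VarianceReducer(),
    num_burnin_steps=200,
    seed=(6, 7))
# per_chain_variance and end_states both have shape [num_chains].
```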
Args | |
---|---|
num_steps | Integer or scalar Tensor representing the number of Reducer steps.
current_state | Tensor or Python list of Tensors representing the current state(s) of the Markov chain(s).
previous_kernel_results | A Tensor or a nested collection of Tensors. Warm-start for the auxiliary state needed by the given kernel. If not supplied, sample_fold will cold-start with kernel.bootstrap_results.
kernel | An instance of tfp.mcmc.TransitionKernel which implements one step of the Markov chain.
reducer | A (possibly nested) structure of Reducers to be evaluated on the kernel's samples. If no reducers are given (reducer=None), then None will be returned in place of streaming calculations.
previous_reducer_state | A (possibly nested) structure of running states corresponding to the structure in reducer. For resuming streaming reduction computations begun in a previous run.
return_final_reducer_states | A Python bool giving whether to return resumable final reducer states.
num_burnin_steps | Integer or scalar Tensor representing the number of chain steps to take before starting to collect results. Defaults to 0 (i.e., no burn-in).
num_steps_between_results | Integer or scalar Tensor representing the number of chain steps between collecting a result. Only one out of every num_steps_between_results + 1 steps is included in the returned results. Defaults to 0 (i.e., no thinning).
parallel_iterations | The number of iterations allowed to run in parallel. It must be a positive integer. See tf.while_loop for more details.
seed | PRNG seed; see tfp.random.sanitize_seed for details.
name | Python str name prefixed to Ops created by this function. Default value: None (i.e., 'mcmc_sample_fold').
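For resuming a streaming reduction without building the kernel onion by hand, previous_reducer_state and return_final_reducer_states can be used together. A hedged sketch follows, reusing the kernel and reducer assumed above and assuming the resumable reducer state is appended as the final return value when return_final_reducer_states=True.

```python
# First leg: ask for the resumable reducer state alongside the usual outputs.
variance, state, kr, reducer_state = tfp.experimental.mcmc.sample_fold(
    num_steps=250,
    current_state=0.,
    kernel=kernel,
    reducer=reducer,
    return_final_reducer_states=True,
    seed=(8, 9))

# Second leg: resume both the chain and the streaming reduction.
variance, state, kr, reducer_state = tfp.experimental.mcmc.sample_fold(
    num_steps=250,
    current_state=state,
    previous_kernel_results=kr,
    kernel=kernel,
    reducer=reducer,
    previous_reducer_state=reducer_state,
    return_final_reducer_states=True,
    seed=(10, 11))
```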