See the guide: Seq2seq Library (contrib) > Dynamic Decoding
A training helper that adds scheduled sampling directly to outputs.
Returns False for sample_ids where no sampling took place; True elsewhere.
__init__( inputs, sequence_length, sampling_probability, time_major=False, seed=None, next_inputs_fn=None, auxiliary_inputs=None, name=None )
Args:
inputs: A (structure of) input tensors.
sequence_length: An int32 vector tensor.
sampling_probability: A 0D float32 tensor: the probability of sampling from the outputs instead of reading directly from the inputs.
time_major: Python bool. Whether the tensors in inputs are time major. If False (default), they are assumed to be batch major.
seed: The sampling seed.
next_inputs_fn: (Optional) callable to apply to the RNN outputs to create the next input when sampling. If None (default), the RNN outputs will be used as the next inputs.
auxiliary_inputs: An optional (structure of) auxiliary input tensors with a shape that matches inputs in all but (potentially) the final dimension. These tensors will be concatenated to the sampled output or the inputs when not sampling for use as the next input.
name: Name scope for any created operations.
Raises:
ValueError: if sampling_probability is not a scalar or vector.
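To make the constructor's semantics concrete, here is a minimal pure-Python sketch of the per-step decision this helper implements: with probability sampling_probability, the model's own output is fed back as the next input; otherwise the ground-truth input is read. The function name scheduled_next_input is hypothetical (not part of the TensorFlow API), and real tensors are replaced by plain lists.

```python
import random

def scheduled_next_input(rnn_output, teacher_input, sampling_probability, rng=random):
    # Hypothetical sketch, not the TF implementation: draw one
    # Bernoulli(sampling_probability) decision for this time step.
    sampled = rng.random() < sampling_probability
    # When sampling, feed the model's output back in; otherwise
    # read directly from the provided inputs (teacher forcing).
    next_input = rnn_output if sampled else teacher_input
    return sampled, next_input

# At probability 0.0 sampling never happens; at 1.0 it always does.
sampled, nxt = scheduled_next_input([0.1, 0.2], [1.0, 2.0], 0.0)
assert sampled is False and nxt == [1.0, 2.0]
sampled, nxt = scheduled_next_input([0.1, 0.2], [1.0, 2.0], 1.0)
assert sampled is True and nxt == [0.1, 0.2]
```

In the real helper the same decision is made per batch entry on tensors, and next_inputs_fn (if given) is applied to the RNN output before it is used as the next input.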
next_inputs( time, outputs, state, sample_ids, name=None )
sample( time, outputs, state, name=None )
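The two methods above split the work: sample draws the boolean sample_ids (True where sampling took place, False elsewhere, as described at the top of this page), and next_inputs uses those ids to assemble the input for the next time step. A minimal sketch of that contract, under the assumption of plain Python lists in place of tensors (the helper functions below are hypothetical names, not TensorFlow API):

```python
import random

def sample_ids_for_batch(batch_size, sampling_probability, seed=None):
    # Hypothetical sketch of sample(): one independent
    # Bernoulli(sampling_probability) decision per batch entry.
    rng = random.Random(seed)
    return [rng.random() < sampling_probability for _ in range(batch_size)]

def next_inputs_for_batch(outputs, inputs, sample_ids):
    # Hypothetical sketch of next_inputs(): pick the RNN output where
    # sample_ids is True, the ground-truth input elsewhere.
    return [o if s else i for o, i, s in zip(outputs, inputs, sample_ids)]

# With sampling_probability=1.0 every entry samples, so every
# next input comes from the outputs.
ids = sample_ids_for_batch(4, 1.0, seed=0)
nxt = next_inputs_for_batch(["o0", "o1", "o2", "o3"],
                            ["i0", "i1", "i2", "i3"], ids)
assert nxt == ["o0", "o1", "o2", "o3"]
```

The real methods additionally carry time and state through so the decoder can advance, and they concatenate auxiliary_inputs onto whichever value was chosen.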