A conditional accumulator for aggregating gradients.
```python
tf.compat.v1.ConditionalAccumulator(
    dtype, shape=None, shared_name=None, name='conditional_accumulator',
    reduction_type='MEAN'
)
```
Up-to-date gradients (i.e., gradients whose time step of computation equals the accumulator's current time step) are added to the accumulator.
Extraction of the average gradient is blocked until the required number of gradients has been accumulated.
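A minimal usage sketch, assuming graph mode via `tf.compat.v1` (the gradient values here are illustrative): two gradients are applied at the accumulator's current time step, and `take_grad` returns their mean once the required count of two has been reached.

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # the v1 accumulator ops require graph mode

# Accumulate two gradients of shape [2] at time step 0, averaging on take.
acc = tf.ConditionalAccumulator(
    dtype=tf.float32, shape=[2], reduction_type='MEAN')

apply_a = acc.apply_grad([1.0, 2.0], local_step=0)
apply_b = acc.apply_grad([3.0, 4.0], local_step=0)
mean_grad = acc.take_grad(num_required=2)  # blocks until 2 gradients arrive

with tf.Session() as sess:
    sess.run([apply_a, apply_b])
    print(sess.run(mean_grad))  # [2. 3.], the element-wise mean
```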
| Args |  |
|:---|:---|
| `dtype` | Datatype of the accumulated gradients. |
| `shape` | Shape of the accumulated gradients. |
| `shared_name` | Optional. If non-empty, this accumulator will be shared under the given name across multiple sessions. |
| `name` | Optional name for the accumulator. |
| `reduction_type` | Reduction type to use when taking the gradient. |
| Attributes |  |
|:---|:---|
| `accumulator_ref` | The underlying accumulator reference. |
| `dtype` | The datatype of the gradients accumulated by this accumulator. |
| `name` | The name of the underlying accumulator. |
```python
apply_grad(
    grad, local_step=0, name=None
)
```
Attempts to apply a gradient to the accumulator.
The attempt is silently dropped if the gradient is stale, i.e., `local_step` is less than the accumulator's global time step.
| Args |  |
|:---|:---|
| `grad` | The gradient tensor to be applied. |
| `local_step` | Time step at which the gradient was computed. |
| `name` | Optional name for the operation. |
| Returns |
|:---|
| The operation that (conditionally) applies a gradient to the accumulator. |
| Raises |  |
|:---|:---|
| `ValueError` | If `grad` is of the wrong shape. |
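A short sketch of the staleness rule (a scalar accumulator is used for brevity; the step values are illustrative): after a successful `take_grad`, the accumulator's internal time step is incremented, so a gradient still tagged with the old `local_step` is silently dropped rather than raising an error.

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

acc = tf.ConditionalAccumulator(dtype=tf.float32, shape=[])

with tf.Session() as sess:
    sess.run(acc.apply_grad(1.0, local_step=0))
    sess.run(acc.take_grad(num_required=1))  # advances internal step to 1

    # Stale: local_step 0 < global time step 1, so this apply is dropped.
    sess.run(acc.apply_grad(2.0, local_step=0))
    print(sess.run(acc.num_accumulated()))   # 0

    # Up to date: local_step 1 matches the current step, so it is accepted.
    sess.run(acc.apply_grad(3.0, local_step=1))
    print(sess.run(acc.num_accumulated()))   # 1
```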