Decorator that recomputes the function on the backwards pass.
```python
tf.contrib.layers.recompute_grad(
    *args, **kwargs
)
```
To use this function, you must use
`variable_scope(name, use_resource=True)`, which is the default in Eager mode
and when running on TPU.
#### Args:

* `fn`: a function that takes Tensors (all as positional arguments) and returns a tuple of Tensors. Note that `fn` should not close over any other Tensors or Variables.
* `use_data_dep`: Boolean. If `True`, uses a dummy data dependency to force the recompute to happen; if `False`, uses a control dependency. Defaults to `True` in an XLA context and `False` otherwise. XLA ignores control dependencies, so the data dependency is necessary there.
* `tupleize_grads`: Boolean. If `True`, uses control dependencies to ensure that all gradients are produced before any are consumed by downstream ops. If `use_data_dep` is also `True`, uses a data dependency instead of a control dependency.
#### Returns:

A wrapped `fn` that is identical to `fn` when called, but whose activations are discarded and recomputed on the backwards pass (i.e. on a call to `tf.gradients`).
#### Raises:

* `ValueError`: if `fn` closes over any Tensors or Variables.
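The trade-off the decorator makes can be illustrated without TensorFlow: keep only the inputs during the forward pass, and re-run `fn` when gradients are needed instead of caching intermediate activations. The sketch below is a hypothetical plain-Python analogue (`recompute_sketch` is not part of the API), using finite differences as a stand-in for `tf.gradients`:

```python
def recompute_sketch(fn):
    """Hypothetical sketch of the recompute-on-backward idea.

    Unlike tf.contrib.layers.recompute_grad, this is plain Python:
    the wrapper retains only the inputs, and the backward pass
    recomputes fn rather than reading cached activations.
    """
    def wrapped(*args):
        # Forward pass: run fn normally; nothing intermediate is stored.
        return fn(*args)

    def backward(*args, eps=1e-6):
        # Backward pass: recompute fn from the saved inputs, then
        # differentiate numerically (a stand-in for tf.gradients).
        base = fn(*args)
        grads = []
        for i in range(len(args)):
            shifted = list(args)
            shifted[i] += eps
            grads.append((fn(*shifted) - base) / eps)
        return grads

    wrapped.backward = backward
    return wrapped


# Example: f(x, y) = x * y, whose gradients are (y, x).
f = recompute_sketch(lambda x, y: x * y)
out = f(3.0, 4.0)          # forward call behaves exactly like fn
gx, gy = f.backward(3.0, 4.0)  # recomputes fn, then approximates grads
```

The design point is that memory for activations is traded for one extra forward computation of `fn` during the backward pass, which is why `fn` must be a pure function of its positional arguments: anything it closed over would not be replayed correctly on recompute.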