Computes generalized advantage estimation (GAE).
tf_agents.utils.value_ops.generalized_advantage_estimation( values, final_value, discounts, rewards, td_lambda=1.0, time_major=True )
For theory, see "High-Dimensional Continuous Control Using Generalized Advantage Estimation" by John Schulman, Philipp Moritz, et al. (https://arxiv.org/abs/1506.02438).
(B) batch size, the number of trajectories. (T) number of steps per trajectory.
|`values`|Tensor with shape [T, B] representing value estimates.|
|`final_value`|Tensor with shape [B] representing the value estimate at t=T.|
|`discounts`|Tensor with shape [T, B] representing discounts received by following the behavior policy.|
|`rewards`|Tensor with shape [T, B] representing rewards received by following the behavior policy.|
|`td_lambda`|A float32 scalar in [0, 1]. It is used for variance reduction in temporal difference.|
|`time_major`|A boolean indicating whether input tensors are time major. False means input tensors have shape [B, T].|
|A tensor with shape [T, B] representing advantages. Shape is [B, T] when time_major is False.|
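The computation can be illustrated outside of TensorFlow. GAE accumulates TD residuals δ_t = r_t + γ_t·V_{t+1} − V_t backwards in time via A_t = δ_t + γ_t·λ·A_{t+1}. Below is a minimal time-major NumPy sketch of that recursion; `gae_numpy` is a hypothetical helper for illustration, not the TF-Agents implementation:

```python
import numpy as np

def gae_numpy(values, final_value, discounts, rewards, td_lambda=1.0):
    """Hypothetical NumPy sketch of GAE with time-major [T, B] inputs."""
    # V_{t+1} for each step: shift values up by one, append final_value at t=T.
    next_values = np.concatenate([values[1:], final_value[None, :]], axis=0)
    # One-step TD residuals: delta_t = r_t + gamma_t * V_{t+1} - V_t.
    deltas = rewards + discounts * next_values - values
    advantages = np.zeros_like(values)
    accum = np.zeros_like(final_value)
    # Backward recursion: A_t = delta_t + gamma_t * lambda * A_{t+1}.
    for t in reversed(range(values.shape[0])):
        accum = deltas[t] + discounts[t] * td_lambda * accum
        advantages[t] = accum
    return advantages

# Example: T=3 steps, B=1 trajectory, constant discount 0.9.
values = np.array([[1.0], [2.0], [3.0]])
final_value = np.array([4.0])
discounts = np.full((3, 1), 0.9)
rewards = np.ones((3, 1))
adv = gae_numpy(values, final_value, discounts, rewards, td_lambda=1.0)
```

With `td_lambda=0.0` the result reduces to the one-step TD errors; with `td_lambda=1.0` it sums discounted residuals over the remaining trajectory, trading lower bias for higher variance.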