Utility to combine main and adversarial losses.
tf.contrib.gan.losses.wargs.combine_adversarial_loss( main_loss, adversarial_loss, weight_factor=None, gradient_ratio=None, gradient_ratio_epsilon=1e-06, variables=None, scalar_summaries=True, gradient_summaries=True, scope=None )
This utility combines the main and adversarial losses in one of two ways:

1) Fixed coefficient on adversarial loss. Use weight_factor in this case.
2) Fixed ratio of gradients. Use gradient_ratio in this case. This is often used to make sure both losses affect weights roughly equally.
One can optionally also visualize the scalar and gradient behavior of the losses.
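For concreteness, here is a minimal TF 1.x sketch of both modes. The toy variable w and the quadratic losses are hypothetical stand-ins for a real model's main and adversarial losses:

```python
import tensorflow as tf  # TF 1.x, where tf.contrib.gan is available

tfgan_losses = tf.contrib.gan.losses.wargs

# Toy variable so both losses have nonzero gradients (stand-in for model weights).
w = tf.get_variable('w', initializer=1.0)
main_loss = tf.square(w - 3.0)         # e.g. a reconstruction loss
adversarial_loss = tf.square(w + 1.0)  # e.g. a generator loss from a discriminator

# Mode 1: fixed coefficient on the adversarial term.
combined_fixed = tfgan_losses.combine_adversarial_loss(
    main_loss, adversarial_loss, weight_factor=0.1)

# Mode 2: scale the adversarial term so its gradient magnitude matches the
# main loss's gradient magnitude (gradient_ratio=1.0), measured over all
# trainable variables by default.
combined_ratio = tfgan_losses.combine_adversarial_loss(
    main_loss, adversarial_loss, gradient_ratio=1.0)
```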
Args:
main_loss: A floating scalar Tensor indicating the main loss.
adversarial_loss: A floating scalar Tensor indicating the adversarial loss.
weight_factor: If not None, the coefficient by which to multiply the adversarial loss. Exactly one of this and gradient_ratio must be non-None.
gradient_ratio: If not None, the ratio of the magnitude of the gradients. Specifically, gradient_ratio = grad_mag(main_loss) / grad_mag(adversarial_loss); a sketch of this balancing appears after this argument list. Exactly one of this and weight_factor must be non-None.
gradient_ratio_epsilon: An epsilon to add to the adversarial loss coefficient denominator, to avoid division-by-zero.
variables: List of variables to calculate gradients with respect to. If not present, defaults to all trainable variables.
scalar_summaries: Create scalar summaries of losses.
gradient_summaries: Create gradient summaries of losses.
scope: Optional name scope.
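To make the gradient_ratio definition above concrete, the snippet below sketches how an adversarial-loss coefficient can be derived from gradient magnitudes so that the combined loss satisfies that ratio. It is an illustration of the balancing idea, not the library's exact implementation; adversarial_coefficient is a hypothetical helper name.

```python
import tensorflow as tf  # TF 1.x

def adversarial_coefficient(main_loss, adversarial_loss, variables,
                            gradient_ratio=1.0, epsilon=1e-6):
  """Illustrative coefficient that balances gradient magnitudes.

  Chooses coef so that, approximately,
  grad_mag(main_loss) / grad_mag(coef * adversarial_loss) == gradient_ratio.
  """
  main_grads = tf.gradients(main_loss, variables)
  adv_grads = tf.gradients(adversarial_loss, variables)
  main_mag = tf.global_norm([g for g in main_grads if g is not None])
  adv_mag = tf.global_norm([g for g in adv_grads if g is not None])
  # epsilon guards against division by zero when the adversarial gradient vanishes.
  coef = main_mag / (gradient_ratio * adv_mag + epsilon)
  # A combined loss would then look like (stopping gradients through the coefficient):
  #   main_loss + tf.stop_gradient(coef) * adversarial_loss
  return coef
```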
Returns:
A floating scalar Tensor indicating the desired combined loss.

Raises:
ValueError: Malformed input.