Solves the maximization problem weights^T*x with the constraint norm(x)=1.
nsl.lib.maximize_within_unit_norm(weights, norm_type, epsilon=1e-06)
This op solves a batch of maximization problems at one time. The first axis of
weights is assumed to be the batch dimension, and each "row" is treated as
an independent maximization problem.
This op is mainly used to generate adversarial examples (e.g., FGSM proposed
by Goodfellow et al.). Specifically, the
weights are gradients, and the returned value x is
the adversarial perturbation. The desired perturbation is the one causing the
largest loss increase. In this op, the loss increase is approximated by the
dot product between the gradient and the perturbation, as in the first-order
Taylor approximation of the loss function.
Args:
  epsilon: A lower bound value for the norm to avoid division by 0.
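As a sketch of what this op computes (not the library implementation; the function name here is illustrative), consider the L2 case: the maximizer of weights^T * x subject to norm(x) = 1 is the weights vector normalized to unit length, computed independently for each row of the batch, with epsilon guarding against division by zero:

```python
import numpy as np

def maximize_within_unit_norm_l2(weights, epsilon=1e-6):
    # Treat the first axis as the batch dimension; each row is an
    # independent maximization problem. For the L2 norm, the solution
    # to max_x weights^T * x s.t. norm(x) = 1 is weights / norm(weights).
    norms = np.linalg.norm(weights, axis=-1, keepdims=True)
    # epsilon is a lower bound on the norm to avoid division by 0.
    return weights / np.maximum(norms, epsilon)

grads = np.array([[3.0, 4.0],
                  [0.0, 2.0]])
x = maximize_within_unit_norm_l2(grads)
# Each row of x has unit L2 norm and points along its gradient row,
# maximizing the dot product (the first-order loss increase).
```

For other norms the closed-form solution differs (e.g., for the infinity norm it is the elementwise sign of the gradient, as in FGSM), which is why the op takes a norm_type argument.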