
nsl.lib.maximize_within_unit_norm


Solves the problem of maximizing weights^T * x subject to the constraint norm(x) = 1.

nsl.lib.maximize_within_unit_norm(
    weights,
    norm_type
)

This op solves a batch of maximization problems simultaneously. The first axis of weights is the batch dimension, and each "row" is treated as an independent maximization problem.

This op is mainly used to generate adversarial examples (e.g., FGSM proposed by Goodfellow et al.). Specifically, the weights are gradients, and x is the adversarial perturbation. The desired perturbation is the one causing the largest loss increase. In this op, the loss increase is approximated by the dot product between the gradient and the perturbation, as in the first-order Taylor approximation of the loss function.
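For illustration, here is a minimal sketch of that FGSM-style use. The model, loss function, and input batch names below are hypothetical and not part of the nsl.lib API; the op itself is called with the input gradient as weights. Under the infinity norm, the solution of the maximization is the sign of the gradient, which recovers FGSM.

import neural_structured_learning as nsl
import tensorflow as tf

def fgsm_perturbation(model, loss_fn, features, labels):
  # Compute the gradient of the loss with respect to the input features.
  with tf.GradientTape() as tape:
    tape.watch(features)
    loss = loss_fn(labels, model(features))
  grad = tape.gradient(loss, features)  # acts as `weights` in the maximization
  # Solve max_x grad^T * x with norm(x) = 1; with the infinity norm this is
  # sign(grad), i.e. the FGSM perturbation direction.
  return nsl.lib.maximize_within_unit_norm(grad, nsl.configs.NormType.INFINITY)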

Args:

  • weights: tensor representing a batch of weights to define the maximization objective.
  • norm_type: one of nsl.configs.NormType, the type of vector norm.

Returns:

A tensor representing a batch of adversarial perturbations as the solution to the maximization problems. The returned tensor has the same shape and type as the input weights.
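As a quick check of this contract, the sketch below solves two independent problems in one batch under the L2 norm. Mathematically, the L2 solution for each row is the row scaled to unit length; the printed values are the expected closed-form result, not output copied from a run.

import neural_structured_learning as nsl
import tensorflow as tf

# Two independent problems in one batch; each row is solved separately.
weights = tf.constant([[3.0, 4.0],
                       [0.0, 2.0]])
perturbation = nsl.lib.maximize_within_unit_norm(
    weights, nsl.configs.NormType.L2)
print(perturbation.shape)  # (2, 2), same shape and dtype as `weights`
# Under the L2 norm the maximizer is each row divided by its L2 norm,
# i.e. approximately [[0.6, 0.8], [0.0, 1.0]].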