tf_privacy.v1.VectorizedDPAdam

Vectorized DP subclass of tf.compat.v1.train.AdamOptimizer using Gaussian averaging.

You can use this as a differentially private replacement for tf.compat.v1.train.AdamOptimizer. This optimizer implements DP-SGD using the standard Gaussian mechanism. It differs from DPAdamGaussianOptimizer in that it attempts to vectorize the gradient computation and clipping of microbatches.

When instantiating this optimizer, you need to supply several DP-related arguments followed by the standard arguments for AdamOptimizer.

Examples:

# Create optimizer.
opt = VectorizedDPAdamOptimizer(
    l2_norm_clip=1.0,
    noise_multiplier=0.5,
    num_microbatches=1,
    <standard arguments>)

When using the optimizer, be sure to pass in the loss as a rank-one tensor with one entry for each example.

# Compute loss as a tensor. Do not call tf.reduce_mean as you
# would with a standard optimizer.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)

train_op = opt.minimize(loss, global_step=global_step)
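Putting the two snippets together, a minimal graph-mode sketch is shown below. The import path, the placeholder shapes, and the hyperparameter values are illustrative assumptions, not prescribed by this API; adjust them to your own model and release of tensorflow_privacy.

import tensorflow as tf
# Import path may vary by tensorflow_privacy release; the class documented
# here is also exposed as tf_privacy.v1.VectorizedDPAdam.
from tensorflow_privacy import VectorizedDPAdamOptimizer

tf.compat.v1.disable_eager_execution()

# Illustrative stand-ins for your own input pipeline and model.
features = tf.compat.v1.placeholder(tf.float32, [None, 10])
labels = tf.compat.v1.placeholder(tf.int64, [None])
weights = tf.compat.v1.get_variable('weights', [10, 5])
logits = tf.matmul(features, weights)

# DP-related arguments first, then the usual AdamOptimizer arguments.
opt = VectorizedDPAdamOptimizer(
    l2_norm_clip=1.0,
    noise_multiplier=0.5,
    num_microbatches=1,
    learning_rate=0.001)

# Vector loss: one entry per example; do not call tf.reduce_mean.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)

global_step = tf.compat.v1.train.get_or_create_global_step()
train_op = opt.minimize(loss, global_step=global_step)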

Args:
  l2_norm_clip: Clipping norm (maximum L2 norm of each per-microbatch gradient).
  noise_multiplier: Ratio of the noise standard deviation to the clipping norm.
  num_microbatches: Number of microbatches into which each minibatch is split. If None, defaults to the size of the minibatch, and per-example gradients are computed.
  *args: Passed through to the base class __init__ method.
  **kwargs: Passed through to the base class __init__ method.
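For intuition about how the first three arguments interact, here is a simplified, non-library sketch of the Gaussian averaging step, under the assumption that the per-microbatch gradients for a single variable have already been stacked into one tensor of shape [num_microbatches, num_params].

import tensorflow as tf

def gaussian_average(microbatch_grads, l2_norm_clip, noise_multiplier,
                     num_microbatches):
  # Clip each microbatch gradient to L2 norm at most l2_norm_clip.
  clipped = tf.clip_by_norm(microbatch_grads, l2_norm_clip, axes=[1])
  # Sum the clipped gradients and add Gaussian noise whose standard
  # deviation is noise_multiplier times the clipping norm.
  summed = tf.reduce_sum(clipped, axis=0)
  noise = tf.random.normal(
      tf.shape(summed), stddev=l2_norm_clip * noise_multiplier)
  # Average over microbatches to get the noisy gradient estimate.
  return (summed + noise) / num_microbatches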

Class Variables (inherited from tf.compat.v1.train.Optimizer; valid values for the gate_gradients argument):
  GATE_NONE = 0
  GATE_OP = 1
  GATE_GRAPH = 2