tf_privacy.DPKerasAdamOptimizer

A DPOptimizerClass built from tf.keras.optimizers.Adam, using the GaussianSumQuery.

This class is a thin wrapper around make_keras_optimizer_class.<locals>.DPOptimizerClass, which applies a GaussianSumQuery (per-microbatch gradient clipping plus Gaussian noise) to the wrapped Keras optimizer.

When combined with stochastic gradient descent, this creates the canonical DP-SGD algorithm of "Deep Learning with Differential Privacy" (see https://arxiv.org/abs/1607.00133).
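As a rough intuition for what happens on each step, the sketch below clips every per-microbatch gradient to l2_norm_clip, sums the clipped gradients, adds Gaussian noise with standard deviation noise_multiplier * l2_norm_clip, and averages. The function name and NumPy implementation are illustrative only, not the library's internals.

import numpy as np

def noisy_clipped_mean(per_microbatch_grads, l2_norm_clip, noise_multiplier):
    """Illustrative DP-SGD gradient aggregation (not library code)."""
    clipped = []
    for g in per_microbatch_grads:
        norm = np.linalg.norm(g)
        # Scale each microbatch gradient so its L2 norm is at most l2_norm_clip.
        clipped.append(g * min(1.0, l2_norm_clip / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    # GaussianSumQuery: add noise with std = noise_multiplier * l2_norm_clip.
    noise = np.random.normal(0.0, noise_multiplier * l2_norm_clip, size=summed.shape)
    # Average over microbatches to obtain the gradient used for the update.
    return (summed + noise) / len(per_microbatch_grads)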

When instantiating this optimizer, you need to supply several DP-related arguments followed by the standard arguments for tf.keras.optimizers.Adam.

As an example, see the code below or the documentation of DPOptimizerClass.

# Create optimizer.
opt = tf_privacy.DPKerasAdamOptimizer(l2_norm_clip=1.0, noise_multiplier=0.5,
    num_microbatches=1, <standard arguments>)
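A slightly fuller usage sketch follows. The model, input shape, and learning rate are placeholders chosen for illustration; the key point is that the loss is left unreduced (a vector of per-example losses) so the optimizer can split the minibatch into microbatches and clip each microbatch gradient.

import tensorflow as tf
import tensorflow_privacy as tf_privacy

opt = tf_privacy.DPKerasAdamOptimizer(
    l2_norm_clip=1.0,
    noise_multiplier=0.5,
    num_microbatches=1,
    learning_rate=0.001)  # standard tf.keras.optimizers.Adam argument

# Per-example (vector) loss: reduction=NONE lets the optimizer form
# per-microbatch gradients before clipping and noising.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(10)])
model.compile(optimizer=opt, loss=loss, metrics=['accuracy'])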

Args:
  l2_norm_clip: Clipping norm (maximum L2 norm of per-microbatch gradients).
  noise_multiplier: Ratio of the noise standard deviation to the clipping norm.
  num_microbatches: Number of microbatches into which each minibatch is split.
    Defaults to None, in which case the number of microbatches equals the batch
    size (each microbatch contains exactly one example). If
    gradient_accumulation_steps is greater than 1 and num_microbatches is not
    None, the effective number of microbatches is
    num_microbatches * gradient_accumulation_steps (see the sketch after this
    list).
  gradient_accumulation_steps: If greater than 1, the optimizer accumulates
    gradients for this many optimizer steps before applying them to the model
    weights. If set to 1, updates are applied on every optimizer step.
  *args: Passed on to the base class __init__ method.
  **kwargs: Passed on to the base class __init__ method.
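To make the interaction between these arguments concrete, here is a small arithmetic sketch with hypothetical values; the variable names are illustrative, not API parameters beyond those listed above.

batch_size = 32                      # minibatch size fed to model.fit
num_microbatches = 8                 # batch_size is typically a multiple of this
gradient_accumulation_steps = 4

examples_per_microbatch = batch_size // num_microbatches                      # 4
effective_num_microbatches = num_microbatches * gradient_accumulation_steps   # 32
effective_batch_size = batch_size * gradient_accumulation_steps               # 128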