DPQuery for Gaussian sum queries with adaptive clipping.
tf_privacy.QuantileAdaptiveClipSumQuery(
    initial_l2_norm_clip,
    noise_multiplier,
    target_unclipped_quantile,
    learning_rate,
    clipped_count_stddev,
    expected_num_records,
    geometric_update=True
)
The clipping norm is tuned adaptively to converge to a value such that a specified quantile of updates are clipped, using the algorithm of Andrew et al. (https://arxiv.org/abs/1905.03871). See the paper for details and suggested hyperparameter settings, and the usage sketch after the argument table below.
|initial_l2_norm_clip|The initial value of the clipping norm.|
|noise_multiplier|The stddev of the noise added to the output will be this times the current value of the clipping norm.|
|target_unclipped_quantile|The desired quantile of updates which should be unclipped. I.e., a value of 0.8 means a value of l2_norm_clip should be found for which approximately 20% of updates are clipped each round. Andrew et al. recommend that this be set to 0.5 to clip to the median.|
|learning_rate|The learning rate for the clipping norm adaptation. With geometric updating, a rate of r means that the clipping norm will change by a maximum factor of exp(r) at each round. This maximum is attained when |actual_unclipped_fraction - target_unclipped_quantile| is 1.0. Andrew et al. recommend that this be set to 0.2 for geometric updating.|
|clipped_count_stddev|The stddev of the noise added to the clipped count. Andrew et al. recommend that this be set to expected_num_records / 20 for reasonably fast adaptation and high privacy.|
|expected_num_records|The expected number of records per round, used to estimate the clipped count quantile.|
|geometric_update|If True, use geometric (multiplicative) updating of the clipping norm, as recommended by Andrew et al.|
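A minimal construction sketch follows. It assumes the package is imported as tf_privacy; the hyperparameter values are illustrative assumptions chosen to match the recommendations above (e.g. clipped_count_stddev = expected_num_records / 20 for 1000 records per round), not library defaults.

```python
import tensorflow_privacy as tf_privacy

# Illustrative hyperparameters following the recommendations quoted
# above; the exact values are assumptions, not defaults.
query = tf_privacy.QuantileAdaptiveClipSumQuery(
    initial_l2_norm_clip=1.0,        # starting clipping norm
    noise_multiplier=1.0,            # noise stddev = 1.0 * current clip
    target_unclipped_quantile=0.5,   # adapt clip toward the median norm
    learning_rate=0.2,               # recommended rate for geometric updates
    clipped_count_stddev=50.0,       # expected_num_records / 20
    expected_num_records=1000,       # records contributed per round
    geometric_update=True)           # multiplicative clip-norm updates
```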
accumulate_preprocessed_record(sample_state, preprocessed_record)

Accumulates a preprocessed record into the sample state.
accumulate_record(params, sample_state, record)

Accumulates a single record into the sample state.

This is a helper method that simply delegates to preprocess_record and accumulate_preprocessed_record for the common case when both of those functions run on a single device. Typically this will be a simple sum.
|params|The parameters for the sample. In standard DP-SGD training, the clipping norm for the sample's microbatch gradients (i.e., a maximum norm magnitude to which each gradient is clipped).|
|sample_state|The current sample state. In standard DP-SGD training, the accumulated sum of previous clipped microbatch gradients.|
|record|The record to accumulate. In standard DP-SGD training, the gradient computed for the examples in one microbatch, which may be the gradient for just one example (for size 1 microbatches).|

Returns:
|The updated sample state. In standard DP-SGD training, the set of previous microbatch gradients with the addition of the record argument.|
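To show where accumulate_record fits, here is a sketch of one round of accumulation. It assumes `query` is the instance built above, `records` is a list of identically structured gradient tensors, and uses initial_global_state, a DPQuery interface method not listed on this page.

```python
# One round of accumulation (sketch). `records` is an assumed list of
# gradient tensors, all with the same structure.
global_state = query.initial_global_state()
params = query.derive_sample_params(global_state)
sample_state = query.initial_sample_state(records[0])

for record in records:
  # Clips each record to the current norm and adds it to the running sum.
  sample_state = query.accumulate_record(params, sample_state, record)
```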
derive_metrics(global_state)
Returns the current clipping norm as a metric.
derive_sample_params(global_state)

Given the global state, derives parameters to use for the next sample.

get_noised_result(sample_state, global_state)

Gets the noised query result after all records of the sample have been accumulated, along with the updated global state.

initial_sample_state(template)

Returns an initial sample state, structured like the given template.

merge_sample_states(sample_state_1, sample_state_2)

Merges two sample states into a single state.

preprocess_record(params, record)

Preprocesses a single record before accumulation; for this query, this includes clipping the record to the current clipping norm.
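Continuing the sketch above, a round is finished with get_noised_result, and the adapted clipping norm can be read back via derive_metrics. Recent tf_privacy releases return a DpEvent alongside the result; treat the exact return arity and the metric key name as assumptions to check against your installed version.

```python
# Finish the round: noise the accumulated sum and adapt the clipping norm.
# Recent tf_privacy versions return (result, new_global_state, event);
# older versions omit the DpEvent -- check your installed release.
result, global_state, event = query.get_noised_result(
    sample_state, global_state)

# The current clipping norm is exposed as a metric (e.g. under a
# 'clip' key; the key name is an assumption).
metrics = query.derive_metrics(global_state)
```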