DPQuery to estimate target quantile of a univariate distribution.
`tf_privacy.QuantileEstimatorQuery(initial_estimate, target_quantile, learning_rate, below_estimate_stddev, expected_num_records, geometric_update=False)`
Uses the algorithm of Andrew et al. (https://arxiv.org/abs/1905.03871). See the paper for details and suggested hyperparameter settings.
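The estimation loop from the paper can be sketched in plain Python (a simplified illustration with Gaussian noise added to the below-count; the function and variable names here are illustrative, not the library's API):

```python
import math
import random

def estimate_quantile(records_per_round, initial_estimate, target_quantile,
                      learning_rate, below_estimate_stddev, num_rounds,
                      geometric_update=True):
    """Iteratively estimates the target quantile from noised below-counts."""
    estimate = initial_estimate
    for _ in range(num_rounds):
        records = records_per_round()
        # Count records at or below the current estimate, then add Gaussian
        # noise to the count for differential privacy.
        below = sum(1.0 for r in records if r <= estimate)
        noised_below = below + random.gauss(0.0, below_estimate_stddev)
        frac_below = noised_below / len(records)
        # If too many records fall below the estimate, shrink it; if too few,
        # grow it. Geometric updating steps by a factor of exp(step).
        step = learning_rate * (frac_below - target_quantile)
        if geometric_update:
            estimate *= math.exp(-step)
        else:
            estimate -= step
    return estimate

random.seed(0)
est = estimate_quantile(
    records_per_round=lambda: [random.random() for _ in range(100)],
    initial_estimate=0.1, target_quantile=0.5, learning_rate=0.2,
    below_estimate_stddev=100 / 20, num_rounds=500)
print(round(est, 2))  # converges toward the median of Uniform(0, 1), i.e. near 0.5
```

Despite the noise on each round's count, the estimate oscillates around the true quantile because the expected update is zero exactly when the target fraction of records falls below the estimate.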
Args:

| `initial_estimate` | The initial estimate of the quantile. |
| `target_quantile` | The target quantile. I.e., a value of 0.8 means a value should be found for which approximately 80% of updates are less than the estimate each round. |
| `learning_rate` | The learning rate. A rate of r means that the estimate will change by a maximum of r at each step (for arithmetic updating) or by a maximum factor of exp(r) (for geometric updating). Andrew et al. recommend that this be set to 0.2 for geometric updating. |
| `below_estimate_stddev` | The stddev of the noise added to the count of records currently below the estimate. Andrew et al. recommend that this be set to `expected_num_records / 20`. |
| `expected_num_records` | The expected number of records per round. |
| `geometric_update` | If True, use geometric updating of the estimate. Geometric updating is preferred for non-negative records like vector norms that could potentially be very large or very close to zero. |
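The difference between the two update modes is visible in a single step (hypothetical helper functions, not part of the library):

```python
import math

def arithmetic_update(estimate, frac_below, target, lr):
    # Moves by at most lr per step, regardless of the estimate's scale.
    return estimate - lr * (frac_below - target)

def geometric_update(estimate, frac_below, target, lr):
    # Moves by at most a factor of exp(lr), so it adapts quickly across
    # orders of magnitude and never crosses zero for a positive estimate.
    return estimate * math.exp(-lr * (frac_below - target))

print(arithmetic_update(1000.0, 0.9, 0.5, 0.2))  # 999.92: tiny relative change
print(geometric_update(1000.0, 0.9, 0.5, 0.2))   # ~923.1: proportional change
```

This is why geometric updating is recommended for records like gradient norms, whose scale is unknown in advance and may span orders of magnitude.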
`accumulate_preprocessed_record(sample_state, preprocessed_record)`

Accumulates a single preprocessed record into the sample state. This method is intended to do only simple aggregation; typically this will be a simple sum.

`accumulate_record(params, sample_state, record)`

Accumulates a single record into the sample state. This is a helper method that simply delegates to `preprocess_record` and `accumulate_preprocessed_record` for the common case when both of those functions run on a single device.
Args:

| `params` | The parameters for the sample. In standard DP-SGD training, the clipping norm for the sample's microbatch gradients (i.e., a maximum norm magnitude to which each gradient is clipped). |
| `sample_state` | The current sample state. In standard DP-SGD training, the accumulated sum of previous clipped microbatch gradients. |
| `record` | The record to accumulate. In standard DP-SGD training, the gradient computed for the examples in one microbatch, which may be the gradient for just one example (for size 1 microbatches). |

Returns:

| The updated sample state. In standard DP-SGD training, the set of previous microbatch gradients with the addition of the record argument. |
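For the quantile estimator specifically, a plausible split between preprocessing and accumulation (a hypothetical sketch, not the library's implementation) maps each record to a below-the-estimate indicator and sums the indicators:

```python
def preprocess_record(params, record):
    # Assume params carries the current quantile estimate; the preprocessed
    # record is an indicator of whether the record falls at or below it.
    estimate = params
    return 1.0 if record <= estimate else 0.0

def accumulate_preprocessed_record(sample_state, preprocessed_record):
    # Accumulation is a simple sum of indicators.
    return sample_state + preprocessed_record

def accumulate_record(params, sample_state, record):
    # Helper that chains preprocessing and accumulation on a single device.
    return accumulate_preprocessed_record(
        sample_state, preprocess_record(params, record))

state = 0.0
for r in [0.2, 0.7, 0.4, 0.9]:
    state = accumulate_record(0.5, state, r)
print(state)  # 2.0: two of the four records fall at or below the estimate of 0.5
```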
`derive_metrics(global_state)`

`derive_sample_params(global_state)`

`get_noised_result(sample_state, global_state)`

`initial_sample_state(template=None)`

`merge_sample_states(sample_state_1, sample_state_2)`

`preprocess_record(params, record)`
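Taken together, the methods above follow the generic DPQuery round protocol: derive per-round parameters, accumulate preprocessed records into a sample state, then noise the aggregate and update the estimate. A minimal mock (illustrative class, names, and internal state, not the library's interface) wires them in that order:

```python
import random

class MockQuantileQuery:
    """Toy stand-in for the DPQuery round protocol, specialized to quantiles."""

    def __init__(self, initial_estimate, target_quantile, learning_rate,
                 stddev, expected_num_records):
        self.estimate = initial_estimate
        self.target = target_quantile
        self.lr = learning_rate
        self.stddev = stddev
        self.n = expected_num_records

    def derive_sample_params(self):
        # Each round's parameter is simply the current estimate.
        return self.estimate

    def initial_sample_state(self):
        return 0.0

    def preprocess_record(self, params, record):
        return 1.0 if record <= params else 0.0

    def accumulate_preprocessed_record(self, state, preprocessed):
        return state + preprocessed

    def merge_sample_states(self, state_1, state_2):
        # Partial sums from different devices combine by addition.
        return state_1 + state_2

    def get_noised_result(self, sample_state):
        # Noise the below-count, then take an arithmetic gradient step.
        noised = sample_state + random.gauss(0.0, self.stddev)
        frac_below = noised / self.n
        self.estimate -= self.lr * (frac_below - self.target)
        return self.estimate

random.seed(1)
query = MockQuantileQuery(0.0, 0.8, 0.2, 5.0, 100)
for _ in range(200):
    params = query.derive_sample_params()
    state = query.initial_sample_state()
    for record in (random.random() for _ in range(100)):
        state = query.accumulate_preprocessed_record(
            state, query.preprocess_record(params, record))
    estimate = query.get_noised_result(state)
print(round(estimate, 2))  # drifts toward the 0.8 quantile of Uniform(0, 1)
```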