DPQuery for queries with a DPQuery numerator and fixed denominator.

tf_privacy.NormalizedQuery(
    numerator_query, denominator
)
If the number of records per round is a public constant R, this class could be
used with a sum query as the numerator and R as the denominator to implement
an average. Under some sampling schemes, such as Poisson subsampling, the
actual number of records in a sample is a private quantity, so it cannot be
used directly as the denominator. Using this class with the expected number of
records as the denominator instead gives an unbiased estimate of the average.
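The unbiasedness argument above can be checked with a small simulation. This is an illustrative sketch in plain Python, not the tf_privacy API: under Poisson subsampling each record is included independently with probability q, so the expected sample size q * N is a public quantity, and dividing each round's sampled sum by it recovers the true average on expectation.

```python
import random

random.seed(0)
values = [1.0] * 1000               # toy dataset; every record has value 1.0
q = 0.1                             # Poisson sampling probability
expected_records = q * len(values)  # public denominator: 100.0

estimates = []
for _ in range(2000):
    # Each record is included independently with probability q, so the
    # actual number of sampled records varies from round to round.
    sampled_sum = sum(v for v in values if random.random() < q)
    estimates.append(sampled_sum / expected_records)

# The mean of the normalized estimates approaches the true average (1.0),
# even though no round's actual sample size was used.
mean_estimate = sum(estimates) / len(estimates)
```

A real deployment would also clip and noise the sum before normalizing; the simulation only demonstrates that the fixed public denominator introduces no bias.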
Args:
  numerator_query: A SumAggregationDPQuery for the numerator.
  denominator: A value for the denominator. May be None if it will be supplied
    via the set_denominator function before get_noised_result is called.
accumulate_preprocessed_record(sample_state, preprocessed_record)
accumulate_record(params, sample_state, record)
Accumulates a single record into the sample state.
This is a helper method that simply delegates to preprocess_record and
accumulate_preprocessed_record for the common case when both of those
functions run on a single device. The accumulation is typically a simple sum.
Args:
  params: The parameters for the sample. In standard DP-SGD training, the
    clipping norm for the sample's microbatch gradients (i.e., a maximum norm
    magnitude to which each gradient is clipped).
  sample_state: The current sample state. In standard DP-SGD training, the
    accumulated sum of previous clipped microbatch gradients.
  record: The record to accumulate. In standard DP-SGD training, the gradient
    computed for the examples in one microbatch, which may be the gradient for
    just one example (for size 1 microbatches).

Returns:
  The updated sample state. In standard DP-SGD training, the set of previous
  microbatch gradients with the addition of the record argument.
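The delegation pattern described above can be sketched in a few lines. This is a hedged illustration in plain Python, not the library's code: the function names mirror the DPQuery interface, but the bodies (scalar records, norm clipping via `abs`) are simplified stand-ins.

```python
def preprocess_record(params, record):
    # params carries the clipping norm; scale the record down so its
    # magnitude is at most clip_norm (the scalar analogue of norm clipping).
    clip_norm = params
    magnitude = abs(record)
    scale = min(1.0, clip_norm / magnitude) if magnitude > 0 else 1.0
    return record * scale

def accumulate_preprocessed_record(sample_state, preprocessed_record):
    # "Typically this will be a simple sum."
    return sample_state + preprocessed_record

def accumulate_record(params, sample_state, record):
    # Helper that chains the two steps for the single-device case.
    return accumulate_preprocessed_record(
        sample_state, preprocess_record(params, record))

# Example: with clipping norm 1.0, the record 3.0 is clipped to 1.0
# before being added to the (empty) sample state.
state = accumulate_record(1.0, 0.0, 3.0)
```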
derive_metrics(global_state)
derive_sample_params(global_state)
get_noised_result(sample_state, global_state)
initial_sample_state(template)
merge_sample_states(sample_state_1, sample_state_2)
preprocess_record(params, record)
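To see how these methods fit together, here is a minimal plain-Python sketch of the wrapping pattern: an assumed structure for illustration, not tf_privacy's internals. The aggregation methods delegate to the numerator query, and `get_noised_result` divides the numerator's output by the fixed denominator.

```python
class SumQuery:
    """Toy stand-in for a SumAggregationDPQuery over scalar records."""
    def initial_sample_state(self, template):
        return 0.0
    def derive_sample_params(self, global_state):
        return None
    def preprocess_record(self, params, record):
        return record
    def accumulate_preprocessed_record(self, sample_state, preprocessed_record):
        return sample_state + preprocessed_record
    def merge_sample_states(self, sample_state_1, sample_state_2):
        return sample_state_1 + sample_state_2
    def get_noised_result(self, sample_state, global_state):
        return sample_state  # a real DP query would add calibrated noise here

class NormalizedQuerySketch:
    """Hypothetical sketch: numerator query result divided by a denominator."""
    def __init__(self, numerator_query, denominator):
        self._numerator = numerator_query
        self._denominator = denominator
    def get_noised_result(self, sample_state, global_state):
        total = self._numerator.get_noised_result(sample_state, global_state)
        return total / self._denominator  # normalize by the public denominator

# Accumulate four records into the numerator's sample state, then normalize.
numerator = SumQuery()
query = NormalizedQuerySketch(numerator, denominator=4.0)
state = numerator.initial_sample_state(None)
for record in [1.0, 2.0, 3.0, 4.0]:
    state = numerator.accumulate_preprocessed_record(state, record)
result = query.get_noised_result(state, None)
```

In the real class the denominator is held in the global state (hence `set_denominator` may be called before `get_noised_result`), but the division-after-aggregation shape is the same.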