Implements DPQuery for discrete distributed Gaussian sum queries.
tf_privacy.DistributedDiscreteGaussianSumQuery( l2_norm_bound, local_stddev )
For each local record, we check the L2 norm bound and add discrete Gaussian noise. In particular, this DPQuery does not perform L2 norm clipping and the norms of the input records are expected to be bounded.
| Args | |
|---|---|
| `l2_norm_bound` | The L2 norm bound to verify for each record. |
| `local_stddev` | The stddev of the local discrete Gaussian noise. |
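To illustrate the overall flow, here is a minimal pure-NumPy sketch of what this query does per record: verify the L2 norm bound (without clipping), then add integer-valued noise. This is not the tf_privacy implementation; in particular, rounding a continuous Gaussian stands in for a true discrete Gaussian sampler, and the function names are only illustrative.

```python
import numpy as np

L2_NORM_BOUND = 10.0  # corresponds to l2_norm_bound
LOCAL_STDDEV = 1.0    # corresponds to local_stddev

def preprocess_record(record, l2_norm_bound, local_stddev, rng):
    """Check the norm bound, then add integer-valued noise (sketch only)."""
    norm = np.linalg.norm(record)
    if norm > l2_norm_bound:
        raise ValueError(f"record norm {norm:.2f} exceeds bound {l2_norm_bound}")
    # Stand-in for discrete Gaussian noise: rounded continuous Gaussian.
    noise = np.round(rng.normal(0.0, local_stddev, size=record.shape))
    return record + noise

rng = np.random.default_rng(0)
records = [np.array([1.0, 2.0, 2.0]), np.array([3.0, 0.0, 4.0])]

# Accumulation of preprocessed records is a simple elementwise sum.
sample_state = np.zeros(3)
for r in records:
    sample_state += preprocess_record(r, L2_NORM_BOUND, LOCAL_STDDEV, rng)
print(sample_state)  # noisy sum of the two records
```

Because no clipping is performed, a record whose norm exceeds `l2_norm_bound` is rejected rather than scaled down.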
accumulate_preprocessed_record( sample_state, preprocessed_record )
Accumulates a single preprocessed record into the sample state. This method is intended to do only simple aggregation, typically a sum.
accumulate_record( params, sample_state, record )
Accumulates a single record into the sample state.
This is a helper method that simply delegates to preprocess_record and accumulate_preprocessed_record, for the common case when both of those functions run on a single device. Typically the accumulation is a simple sum.
| Args | |
|---|---|
| `params` | The parameters for the sample. In standard DP-SGD training, the clipping norm for the sample's microbatch gradients (i.e., a maximum norm magnitude to which each gradient is clipped). |
| `sample_state` | The current sample state. In standard DP-SGD training, the accumulated sum of previous clipped microbatch gradients. |
| `record` | The record to accumulate. In standard DP-SGD training, the gradient computed for the examples in one microbatch, which may be the gradient for just one example (for size-1 microbatches). |

| Returns |
|---|
| The updated sample state. In standard DP-SGD training, the set of previous microbatch gradients with the addition of the record argument. |
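For a sum query, the accumulation step reduces to an elementwise addition. A hypothetical sketch (names are illustrative, not the tf_privacy API):

```python
import numpy as np

# Sketch: accumulating a preprocessed (norm-checked, noised) record into
# the running sample state is just an elementwise sum.
def accumulate_preprocessed_record(sample_state, preprocessed_record):
    return sample_state + preprocessed_record

state = np.zeros(2)
state = accumulate_preprocessed_record(state, np.array([1.0, -2.0]))
state = accumulate_preprocessed_record(state, np.array([3.0, 5.0]))
print(state)  # [4. 3.]
```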
derive_metrics( global_state )
Derives metric information from the current global state.
Any metrics returned should be derived only from privatized quantities.
| Args | |
|---|---|
| `global_state` | The global state from which to derive metrics. |
derive_sample_params( global_state )
Given the global state, derives parameters to use for the next sample.
For example, if the mechanism needs to clip records to bound the norm, the clipping norm should be part of the sample params. In a distributed context, this is the part of the state that would be sent to the workers so they can process records.
| Args | |
|---|---|
| `global_state` | The current global state. |

| Returns |
|---|
| Parameters to use to process records in the next sample. |
get_noised_result( sample_state, global_state )
Gets the query result after all records of sample have been accumulated.
The global state can also be updated for use in the next application of the DP mechanism.
| Args | |
|---|---|
| `sample_state` | The sample state after all records have been accumulated. In standard DP-SGD training, the accumulated sum of clipped microbatch gradients (in the special case of microbatches of size 1, the clipped per-example gradients). |
| `global_state` | The global state, storing long-term privacy bookkeeping. |
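One property worth noting about the aggregated result: each client adds independent local noise with stddev `local_stddev`, and for independent noise variances add, so the server-side sum over n clients carries noise with stddev sqrt(n) * local_stddev. A small illustration (the helper name is hypothetical, not part of the tf_privacy API):

```python
import math

# Independent per-client noise variances add:
#   Var(sum) = n * local_stddev**2, so stddev(sum) = sqrt(n) * local_stddev.
def central_stddev(num_clients, local_stddev):
    return math.sqrt(num_clients) * local_stddev

print(central_stddev(100, 1.0))  # 10.0
```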
initial_global_state()
Returns the initial global state for the DPQuery.
The global state contains any state information that changes across repeated applications of the mechanism. The default implementation returns an empty tuple for implementing classes that do not have any persistent state.
This object must be processable via tf.nest.map_structure.

| Returns |
|---|
| The global state. |
initial_sample_state( template=None )
Returns an initial state to use for the next sample.
merge_sample_states( sample_state_1, sample_state_2 )
Merges two sample states into a single state that can be further accumulated or used to compute the result.
preprocess_record( params, record )
Checks the record's L2 norm against the bound and adds discrete Gaussian noise to the record.
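The noise added here is drawn from a discrete Gaussian, i.e. the distribution over the integers with P[Z = z] proportional to exp(-z^2 / (2 sigma^2)). The following is a simple, deliberately inefficient rejection sampler for that distribution, shown only to make the target distribution concrete; it is a hypothetical sketch, not the sampler tf_privacy uses in production.

```python
import math
import random

def sample_discrete_gaussian(sigma, rng=random):
    """Rejection-sample Z with P[Z = z] proportional to exp(-z^2 / (2 sigma^2))."""
    # Truncate at +/- 10 sigma; the discarded tail mass is negligible.
    tail = int(math.ceil(10 * sigma)) + 1
    while True:
        z = rng.randint(-tail, tail)  # uniform integer proposal
        # Accept with probability exp(-z^2 / (2 sigma^2)) <= 1.
        if rng.random() < math.exp(-z * z / (2 * sigma * sigma)):
            return z

random.seed(0)
samples = [sample_discrete_gaussian(2.0) for _ in range(1000)]
print(sum(samples) / len(samples))  # sample mean, close to 0
```

The uniform proposal makes the sampler correct but slow for large sigma; efficient exact samplers exist but are beyond the scope of this sketch.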