tf_privacy.DistributedDiscreteGaussianSumQuery

Implements DPQuery for discrete distributed Gaussian sum queries.

Inherits From: SumAggregationDPQuery, DPQuery

For each local record, we check the L2 norm bound and add discrete Gaussian noise locally. In particular, this DPQuery does not perform L2 norm clipping; the norms of the input records are expected to already satisfy the bound.

Args
l2_norm_bound The L2 norm bound to verify for each record.
local_stddev The standard deviation of the local discrete Gaussian noise.
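
A minimal construction sketch, assuming the package is imported as tf_privacy; the parameter values below are illustrative, not recommendations:

```python
import tensorflow_privacy as tf_privacy

# Sketch: construct the query (values are illustrative assumptions).
query = tf_privacy.DistributedDiscreteGaussianSumQuery(
    l2_norm_bound=8.0,   # records whose L2 norm exceeds this are rejected, not clipped
    local_stddev=1.0)    # stddev of the local discrete Gaussian noise per record
```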

Methods

accumulate_preprocessed_record

Implements tensorflow_privacy.DPQuery.accumulate_preprocessed_record.

accumulate_record

Accumulates a single record into the sample state.

This is a helper method that simply delegates to preprocess_record and accumulate_preprocessed_record for the common case when both of those functions run on a single device. Typically the accumulation step is a simple sum.

Args
params The parameters for the sample. In standard DP-SGD training, the clipping norm for the sample's microbatch gradients (i.e., a maximum norm magnitude to which each gradient is clipped).
sample_state The current sample state. In standard DP-SGD training, the accumulated sum of previous clipped microbatch gradients.
record The record to accumulate. In standard DP-SGD training, the gradient computed for the examples in one microbatch, which may be the gradient for just one example (for size 1 microbatches).

Returns
The updated sample state. In standard DP-SGD training, the accumulated sum of previous microbatch gradients with the addition of the record argument.
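
For concreteness, a sketch of the delegation described above, assuming query, params, sample_state, and record already exist:

```python
# accumulate_record(params, sample_state, record) behaves like these two steps
# (query, params, sample_state, and record are assumed to exist):
preprocessed = query.preprocess_record(params, record)
sample_state = query.accumulate_preprocessed_record(sample_state, preprocessed)
```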

derive_metrics

Derives metric information from the current global state.

Any metrics returned should be derived only from privatized quantities.

Args
global_state The global state from which to derive metrics.

Returns
A collections.OrderedDict mapping string metric names to tensor values.
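
A small usage sketch, assuming a query and global_state as elsewhere on this page; for queries that report no metrics, the returned OrderedDict may simply be empty:

```python
# Sketch: read back any privatized metrics after a round
# (query and global_state are assumed to exist).
metrics = query.derive_metrics(global_state)
for name, value in metrics.items():
    print(name, value)
```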

derive_sample_params

Given the global state, derives parameters to use for the next sample.

For example, if the mechanism needs to clip records to bound the norm, the clipping norm should be part of the sample params. In a distributed context, this is the part of the state that would be sent to the workers so they can process records.

Args
global_state The current global state.

Returns
Parameters to use to process records in the next sample.
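
A sketch of the server-side step, assuming a constructed query; the broadcast to workers is framework-specific and only indicated by a comment:

```python
# Sketch: derive per-round parameters on the server.
global_state = query.initial_global_state()
sample_params = query.derive_sample_params(global_state)
# Send sample_params to each worker; workers then call
# query.preprocess_record(sample_params, record) on their local records.
```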

get_noised_result

Gets the query result after all records of the sample have been accumulated.

The global state can also be updated for use in the next application of the DP mechanism.

Args
sample_state The sample state after all records have been accumulated. In standard DP-SGD training, the accumulated sum of clipped microbatch gradients (in the special case of microbatches of size 1, the clipped per-example gradients).
global_state The global state, storing long-term privacy bookkeeping.

Returns
A tuple (result, new_global_state, event) where:

  • result is the result of the query,
  • new_global_state is the updated global state, and
  • event is the DpEvent that occurred.

In standard DP-SGD training, the result is a gradient update comprising a noised average of the clipped gradients in the sample state, with the noise and averaging performed in a manner that guarantees differential privacy.
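
An end-to-end sketch of one round with this query; the bound, stddev, and record contents are illustrative assumptions, and the records are integer-valued (tf.int32) since the discrete Gaussian mechanism operates on integers:

```python
import tensorflow as tf
import tensorflow_privacy as tf_privacy

query = tf_privacy.DistributedDiscreteGaussianSumQuery(
    l2_norm_bound=8.0, local_stddev=1.0)

global_state = query.initial_global_state()
params = query.derive_sample_params(global_state)

# Integer-valued records whose L2 norms stay within l2_norm_bound.
records = [tf.constant([1, 2, -1], dtype=tf.int32),
           tf.constant([0, 3, 2], dtype=tf.int32)]

# The sample state is initialized from a template with the records' structure.
sample_state = query.initial_sample_state(records[0])
for record in records:
    sample_state = query.accumulate_record(params, sample_state, record)

result, new_global_state, event = query.get_noised_result(sample_state, global_state)
```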

initial_global_state

Returns the initial global state for the DPQuery.

The global state contains any state information that changes across repeated applications of the mechanism. The default implementation returns just an empty tuple for implementing classes that do not have any persistent state.

This object must be processable via tf.nest.map_structure.

Returns
The global state.
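
A small sketch showing that the global state can be traversed with tf.nest utilities, assuming a constructed query; the state's exact fields depend on the query:

```python
import tensorflow as tf

# The global state is a nested structure, so generic tf.nest utilities apply
# (query is assumed to exist).
global_state = query.initial_global_state()
print(tf.nest.flatten(global_state))
```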

initial_sample_state

Implements tensorflow_privacy.DPQuery.initial_sample_state.

merge_sample_states

Implements tensorflow_privacy.DPQuery.merge_sample_states.

preprocess_record

Checks the record's L2 norm against the bound and adds discrete Gaussian noise to the record.
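
A sketch of preprocessing a single record, assuming a query constructed as above; the record values are illustrative and are integer-valued with an L2 norm within l2_norm_bound:

```python
import tensorflow as tf

# Sketch: preprocess one record (query is assumed to be constructed as above).
global_state = query.initial_global_state()
params = query.derive_sample_params(global_state)

# An integer-valued record whose L2 norm stays within l2_norm_bound;
# discrete Gaussian noise is added locally, without clipping.
record = tf.constant([1, 0, -1], dtype=tf.int32)
noised_record = query.preprocess_record(params, record)
```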