Zeroes out extremely large values for robustness to data corruption on
clients, and clips values in the L2 norm to a moderately high norm for
robustness to outliers. After weighting in the mean, the weighted values are
uniformly quantized to reduce the size of the model update communicated from
clients to the server. For details, see Suresh et al. (2017),
http://proceedings.mlr.press/v70/suresh17a/suresh17a.pdf. The default
configuration is chosen such that compression does not have an adverse effect
on trained model quality in typical tasks.
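The zero-clip-quantize pipeline described above can be sketched in plain
NumPy. This is a minimal illustration, not the library's actual
implementation; the function name and the threshold, norm, and bit-width
defaults are assumptions chosen for readability:

```python
import numpy as np

def zero_clip_quantize(update, zeroing_threshold=100.0, clip_norm=10.0,
                       num_bits=8):
    """Sketch of preprocessing applied to one client's model update."""
    update = np.asarray(update, dtype=np.float64)
    # Zeroing: discard extremely large values (robustness to data corruption).
    update = np.where(np.abs(update) > zeroing_threshold, 0.0, update)
    # Clipping: scale down so the L2 norm is at most clip_norm
    # (robustness to outliers).
    norm = np.linalg.norm(update)
    if norm > clip_norm:
        update = update * (clip_norm / norm)
    # Uniform quantization to 2**num_bits levels over the value range,
    # reducing the size of the client-to-server upload.
    lo, hi = update.min(), update.max()
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    quantized = np.round((update - lo) / scale).astype(np.int32)
    # The server dequantizes using the transmitted (lo, scale) pair.
    dequantized = quantized * scale + lo
    return quantized, dequantized
```

A corrupted coordinate (e.g. `1e6`) is zeroed rather than clipped, so it
cannot drag the quantization range out and destroy the precision of the
remaining coordinates.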
Whether to enable adaptive zeroing for data corruption mitigation.
Whether to enable adaptive clipping in the L2 norm for robustness.
Note that this clipping is performed prior to the per-coordinate clipping
required for quantization.
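The ordering noted above (adaptive L2 clipping first, per-coordinate clipping
second) can be sketched as follows; the function name, the fixed bound, and
the norm default are assumptions for illustration, not the library's API:

```python
import numpy as np

def preprocess(update, l2_clip=10.0, coord_bound=5.0):
    """Applies L2 clipping before per-coordinate clipping."""
    update = np.asarray(update, dtype=np.float64)
    # 1) Adaptive clipping in the L2 norm, for robustness to outliers.
    norm = np.linalg.norm(update)
    if norm > l2_clip:
        update = update * (l2_clip / norm)
    # 2) Per-coordinate clipping into the fixed range the quantizer expects.
    return np.clip(update, -coord_bound, coord_bound)
```

Performing the L2 clip first bounds the overall contribution of an outlier
client; the subsequent per-coordinate clip only enforces the finite range
that uniform quantization requires.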