tfc.entropy_models.UniversalIndexedEntropyModel

Indexed entropy model which implements Universal Quantization.

In contrast to the base class, which uses rounding for quantization, here "quantization" is performed with additive uniform noise, which is implemented via Universal Quantization.

This is described in Sec. 3.2 of the paper

"Universally Quantized Neural Compression"
Eirikur Agustsson & Lucas Theis
https://arxiv.org/abs/2006.09952

prior_fn A callable returning a tfp.distributions.Distribution object, typically a Distribution class or factory function. This is a density model fitting the marginal distribution of the bottleneck data with additive uniform noise, which is shared a priori between the sender and the receiver. For best results, the distributions should be flexible enough to have a unit-width uniform distribution as a special case, since this is the marginal distribution for bottleneck dimensions that are constant. The callable will receive keyword arguments as determined by parameter_fns (see the construction sketch following this argument list).
index_ranges Iterable of integers. Compared to bottleneck, indexes in __call__() must have an additional trailing dimension, and the values of the kth channel must be in the range [0, index_ranges[k]).
parameter_fns Dict of strings to callables. Functions mapping indexes to each distribution parameter. For each item, indexes is passed to the callable, and the string key and return value make up one keyword argument to prior_fn.
coding_rank Integer. Number of innermost dimensions considered a coding unit. Each coding unit is compressed to its own bit string, and the bit cost returned by __call__() is summed over each coding unit.
compression Boolean. If set to True, the range coding tables used by compress() and decompress() will be built on instantiation. This assumes eager mode (throws an error if in graph mode or inside a tf.function call). If set to False, these two methods will not be accessible.
laplace_tail_mass Float. If positive, will augment the prior with a Laplace mixture for training stability (experimental).
expected_grads If True, will use analytical expected gradients during backpropagation w.r.t. additive uniform noise.
tail_mass Float. Approximate probability mass which is encoded using an Elias gamma code embedded into the range coder.
range_coder_precision Integer. Precision passed to the range coding op.
bottleneck_dtype tf.dtypes.DType. Data type of bottleneck tensor. Defaults to tf.keras.mixed_precision.global_policy().compute_dtype.
prior_dtype tf.dtypes.DType. Data type of prior and probability computations. Defaults to tf.float32.
stateless Boolean. If True, creates range coding tables as Tensors rather than Variables.
num_noise_levels Integer. The number of levels used to quantize the uniform noise.
decode_sanity_check Boolean. If True, raises an error if the binary strings passed into decompress() are not completely decoded.
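
For illustration, a minimal construction sketch. The choice of tfc.NoisyNormal as the prior, the 64-level scale index, and the exponential index-to-scale mapping are assumptions made for this example, not requirements of the class:

import tensorflow as tf
import tensorflow_compression as tfc

# Hypothetical scale-indexed model: one index channel selecting among 64 scales.
em = tfc.entropy_models.UniversalIndexedEntropyModel(
    prior_fn=tfc.NoisyNormal,               # assumed prior class
    index_ranges=(64,),                     # index values must lie in [0, 64)
    parameter_fns=dict(
        loc=lambda _: 0.0,                  # fixed location parameter
        scale=lambda i: tf.exp(i / 8 - 5),  # hypothetical index-to-scale mapping
    ),
    coding_rank=3,                          # innermost 3 dimensions form one coding unit
    compression=True,                       # build range coding tables on instantiation
)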

bottleneck_dtype Data type of the bottleneck tensor.
cdf The CDFs used by range coding.
cdf_offset The CDF offsets used by range coding.
coding_rank Number of innermost dimensions considered a coding unit.
compression Whether this entropy model is prepared for compression.
expected_grads Whether to use analytical expected gradients during backpropagation.
index_ranges Upper bound(s) on values allowed in indexes tensor.
index_ranges_without_offsets Upper bound(s) on values allowed in indexes, excluding offsets.
laplace_tail_mass Whether to augment the prior with a NoisyLaplace mixture.
name Returns the name of this module as passed to or determined in the constructor.

name_scope Returns a tf.name_scope instance for this class.
non_trainable_variables Sequence of non-trainable variables owned by this module and its submodules.
parameter_fns Functions mapping indexes to each distribution parameter.
prior Prior distribution, used for deriving range coding tables.
prior_dtype Data type of prior.
prior_fn Class or factory function returning a Distribution object.
range_coder_precision Precision used in range coding op.
stateless Whether range coding tables are created as Tensors or Variables.
submodules Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True

tail_mass Approximate probability mass which is range encoded with overflow.
trainable_variables Sequence of trainable variables owned by this module and its submodules.

variables Sequence of variables owned by this module and its submodules.

Methods

compress


Compresses a floating-point tensor.

Compresses the tensor to bit strings. bottleneck is first quantized as in quantize(), and then compressed using the probability tables derived from indexes. The quantized tensor can later be recovered by calling decompress().

The innermost self.coding_rank dimensions are treated as one coding unit, i.e. are compressed into one string each. Any additional dimensions to the left are treated as batch dimensions.

Args
bottleneck tf.Tensor containing the data to be compressed.
indexes tf.Tensor specifying the scalar distribution for each element in bottleneck. See class docstring for examples.

Returns
A tf.Tensor having the same shape as bottleneck without the self.coding_rank innermost dimensions, containing a string for each coding unit.
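
Continuing the hypothetical setup sketched in the argument list above, a compression example (the shapes and the random indexes are purely illustrative):

# One trailing index channel, with values kept within [0, 64).
bottleneck = tf.random.normal([2, 8, 8, 16])
indexes = tf.random.uniform([2, 8, 8, 16, 1], maxval=63.)
strings = em.compress(bottleneck, indexes)  # shape (2,): one string per coding unit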

decompress


Decompresses a tensor.

Reconstructs the quantized tensor from bit strings produced by compress().

Args
strings tf.Tensor containing the compressed bit strings.
indexes tf.Tensor specifying the scalar distribution for each output element. See class docstring for examples.

Returns
A tf.Tensor of the same shape as indexes (without the optional channel dimension).
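
Continuing the same sketch, decompression recovers the quantized tensor from the strings, given the identical indexes:

bottleneck_hat = em.decompress(strings, indexes)  # same shape as bottleneck: (2, 8, 8, 16)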

get_config


Returns the configuration of the entropy model.

Returns
A JSON-serializable Python dict.

Raises
RuntimeError on attempting to call this method on an entropy model with compression=False or with stateless=True.

get_weights


set_weights


with_name_scope

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce tf.Variables and tf.Tensors whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>

Args
method The method to wrap.

Returns
The original method wrapped such that it enters the module's name scope.

__call__


Perturbs a tensor with additive uniform noise and estimates bitcost.

Args
bottleneck tf.Tensor containing a non-perturbed bottleneck. Must have at least self.coding_rank dimensions.
indexes tf.Tensor specifying the scalar distribution for each element in bottleneck. See class docstring for examples.
training Boolean. If False, computes the bitcost using discretized uniform noise. If True, estimates the differential entropy with uniform noise.

Returns
A tuple (bottleneck_perturbed, bits), where bottleneck_perturbed is bottleneck perturbed with noise, and bits is the bitcost of transmitting such a sample, having the same shape as bottleneck without the self.coding_rank innermost dimensions.
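
As a training-time sketch under the same hypothetical setup, the returned bits can serve as a differentiable rate term in a loss:

bottleneck_perturbed, bits = em(bottleneck, indexes, training=True)
# bottleneck_perturbed: bottleneck with additive uniform noise, same shape as bottleneck
# bits: shape (2,), estimated bitcost per coding unit
rate_loss = tf.reduce_mean(bits)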