tfl.layers.RTL

Layer which includes a random ensemble of lattices.

RTL (Random Tiny Lattices) is an ensemble of tfl.layers.Lattice layers that takes in a collection of monotonic and unconstrained features and randomly arranges them into lattices of a given rank. The input is taken as "groups", and inputs from the same group will not be used in the same lattice. For example, the input can be the output of a calibration layer with multiple units applied to the same input feature. If there are more slots in the RTL than the number of inputs, inputs will be used repeatedly. Repeats will be approximately uniform across all inputs.

Input shape:

A dict with keys in ['unconstrained', 'increasing'], whose values are either a list of tensors of shape (batch_size, D_i), or a single tensor of shape (batch_size, D) that will be split into a list of D tensors of shape (batch_size, 1). Each tensor in the list is considered a "group" of features that the RTL layer should try not to use in the same lattice.

Output shape:

If separate_outputs == True, the output will be in the same format as the input and can be passed to follow-on RTL layers: {'unconstrained': unconstrained_out, 'increasing': mon_out}, where unconstrained_out and mon_out have shapes (batch_size, num_unconstrained_out) and (batch_size, num_mon_out) respectively, and num_unconstrained_out + num_mon_out == num_lattices. If separate_outputs == False, the output will be a rank-2 tensor of shape (batch_size, num_lattices).

Example:

a = tf.keras.Input(shape=(1,))
b = tf.keras.Input(shape=(1,))
c = tf.keras.Input(shape=(1,))
d = tf.keras.Input(shape=(1,))
cal_a = tfl.layers.CategoricalCalibration(
    units=10, output_min=0, output_max=1, ...)(a)
cal_b = tfl.layers.PWLCalibration(
    units=20, output_min=0, output_max=1, ...)(b)
cal_c = tfl.layers.PWLCalibration(
    units=10, output_min=0, output_max=1, monotonicity='increasing', ...)(c)
cal_d = tfl.layers.PWLCalibration(
    units=20, output_min=0, output_max=1, monotonicity='decreasing', ...)(d)
rtl_0 = tfl.layers.RTL(
    num_lattices=20,
    lattice_rank=3,
    output_min=0,
    output_max=1,
    separate_outputs=True,
)({
    'unconstrained': [cal_a, cal_b],
    'increasing': [cal_c, cal_d],
})
rtl_1 = tfl.layers.RTL(num_lattices=5, lattice_rank=4)(rtl_0)
outputs = tfl.layers.Linear(
    num_input_dims=5,
    monotonicities=['increasing'] * 5,
)(rtl_1)
model = tf.keras.Model(inputs=[a, b, c, d], outputs=outputs)

Args
`num_lattices` Number of lattices in the ensemble.
`lattice_rank` Number of features used in each lattice.
`lattice_size` Number of lattice vertices per dimension (minimum is 2).
`output_min` None or lower bound of the output.
`output_max` None or upper bound of the output.
`separate_outputs` If set to true, the output will be a dict in the same format as the input to the layer, ready to be passed to another RTL layer. If false, the output will be a single tensor of shape (batch_size, num_lattices). See output shape for details.
`random_seed` Random seed for the randomized feature arrangement in the ensemble.
`num_projection_iterations` Number of iterations of the Dykstra projection algorithm. Projection updates will be closer to a true projection (with respect to the L2 norm) with a higher number of iterations, but with diminishing returns on projection precision; an infinite number of iterations would yield a perfect projection. Increasing this number might slightly improve convergence at the cost of slightly increased running time. Most likely you want this number to be proportional to the number of lattice vertices in the largest constrained dimension.
`monotonic_at_every_step` Whether to strictly enforce monotonicity and trust constraints after every gradient update by applying a final imprecise projection. Setting this parameter to True together with a small num_projection_iterations is likely to hurt convergence.
`clip_inputs` Whether inputs should be clipped to the input range of the lattice.
`kernel_initializer` One of:
  • `'linear_initializer'`: initialize parameters to form a linear function with positive and equal coefficients for monotonic dimensions and 0.0 coefficients for other dimensions. The linear function is such that the minimum possible output equals output_min and the maximum possible output equals output_max. See the tfl.lattice_layer.LinearInitializer class docstring for more details.
  • `'random_monotonic_initializer'`: initialize parameters uniformly at random such that all parameters are monotonically increasing for each input. Parameters will be sampled uniformly at random from the range `[output_min, output_max]`. See the tfl.lattice_layer.RandomMonotonicInitializer class docstring for more details.
`kernel_regularizer` None, or a single element or a list of the following (see the sketch after this argument list for example usage):
  • Tuple `('torsion', l1, l2)` where l1 and l2 represent the corresponding regularization amounts for the graph Torsion regularizer. l1 and l2 can either be single floats or lists of floats that specify a different regularization amount for every dimension.
  • Tuple `('laplacian', l1, l2)` where l1 and l2 represent the corresponding regularization amounts for the graph Laplacian regularizer. l1 and l2 can either be single floats or lists of floats that specify a different regularization amount for every dimension.
`**kwargs` Other args passed to tf.keras.layers.Layer initializer.
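
As a hedged construction sketch (the hyperparameter values below are illustrative, not recommendations), the initializer and regularizer arguments above might be combined like this:

rtl = tfl.layers.RTL(
    num_lattices=10,
    lattice_rank=3,
    lattice_size=2,
    output_min=0.0,
    output_max=1.0,
    kernel_initializer='random_monotonic_initializer',
    # Small torsion and laplacian regularization amounts, applied uniformly.
    kernel_regularizer=[('torsion', 0.0, 1e-4), ('laplacian', 1e-5, 0.0)],
    num_projection_iterations=10,
)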

Raises
`ValueError` If layer hyperparameters are invalid.

Attributes
`activity_regularizer` Optional regularizer function for the output of this layer.
`dtype` Dtype used by the weights of the layer, set in the constructor.
`dynamic` Whether the layer is dynamic (eager-only); set in the constructor.
`input` Retrieves the input tensor(s) of a layer. Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.
`input_spec` `InputSpec` instance(s) describing the input format for this layer. When you create a layer subclass, you can set `self.input_spec` to enable the layer to run input compatibility checks when it is called. Consider a `Conv2D` layer: it can only be called on a single input tensor of rank 4. As such, you can set, in `__init__()`:

self.input_spec = tf.keras.layers.InputSpec(ndim=4)

Now, if you try to call the layer on an input that isn't rank 4 (for instance, an input of shape `(2,)`), it will raise a nicely-formatted error:

ValueError: Input 0 of layer conv2d is incompatible with the layer: expected ndim=4, found ndim=1. Full shape received: [2]

Input checks that can be specified via `input_spec` include:
  • Structure (e.g. a single input, a list of 2 inputs, etc.)
  • Shape
  • Rank (ndim)
  • Dtype

For more information, see tf.keras.layers.InputSpec.
`losses` Losses which are associated with this `Layer`. Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing `losses` under a tf.GradientTape will propagate gradients back to the corresponding variables.
`metrics` List of tf.keras.metrics.Metric instances tracked by the layer.
`name` Name of the layer (string), set in the constructor.
`name_scope` Returns a tf.name_scope instance for this class.
`non_trainable_weights` List of all non-trainable weights tracked by this layer. Non-trainable weights are *not* updated during training. They are expected to be updated manually in `call()`.
`output` Retrieves the output tensor(s) of a layer. Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.
`submodules` Sequence of all sub-modules. Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).
a = tf.Module()
b = tf.Module()
c = tf.Module()
a.b = b
b.c = c
list(a.submodules) == [b, c]
True
list(b.submodules) == [c]
True
list(c.submodules) == []
True
`trainable`
`trainable_weights` List of all trainable weights tracked by this layer. Trainable weights are updated via gradient descent during training.
`weights` Returns the list of all layer variables/weights.

Methods

add_loss

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model's call function, in which case losses should be a Tensor or list of Tensors.

Example:

class MyLayer(tf.keras.layers.Layer):
  def call(self, inputs):
    self.add_loss(tf.abs(tf.reduce_mean(inputs)), inputs=True)
    return inputs

This method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model's Inputs. These losses become part of the model's topology and are tracked in get_config.

Example:

inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model's layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model's topology since they can't be serialized.

Example:

inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))

The get_losses_for method allows you to retrieve the losses relevant to a specific set of inputs.

Arguments
`losses` Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor.
`inputs` Ignored when executing eagerly. If anything other than None is passed, it signals the losses are conditional on some of the layer's inputs, and thus they should only be run where these inputs are available. This is the case for activity regularization losses, for instance. If `None` is passed, the losses are assumed to be unconditional, and will apply across all dataflows of the layer (e.g. weight regularization losses).

add_metric

Adds metric tensor to the layer.

Args
`value` Metric tensor.
`aggregation` Sample-wise metric reduction function. If `aggregation=None`, it indicates that the metric tensor provided has been aggregated already, e.g., `bin_acc = BinaryAccuracy(name='acc')` followed by `model.add_metric(bin_acc(y_true, y_pred))`. If `aggregation='mean'`, the given metric tensor will be sample-wise reduced using the `mean` function, e.g., `model.add_metric(tf.reduce_sum(outputs), name='output_mean', aggregation='mean')`.
`name` String metric name.
Raises
`ValueError` If `aggregation` is anything other than None or `mean`.
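
Example:

A hedged sketch (the layer and metric names are illustrative) of the typical pattern, calling add_metric inside a subclassed layer's call method:

class MyMetricLayer(tf.keras.layers.Layer):
  def call(self, inputs):
    # Track the batch mean of the inputs; aggregation='mean' averages the
    # reported value across batches.
    self.add_metric(tf.reduce_mean(inputs), name='input_mean',
                    aggregation='mean')
    return inputs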

assert_constraints

Asserts that weights satisfy all constraints.

In graph mode, builds and returns a list of assertion ops. In eager mode, directly executes the assertions.

Args
`eps` Allowed constraints violation.
Returns
List of assertion ops in graph mode; in eager mode, assertions are executed immediately.
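
A minimal usage sketch (the `eps` value and the `rtl_layer` instance are illustrative):

# Eager mode: raises immediately if any constraint is violated by more
# than eps. In graph mode this instead returns assertion ops to run.
rtl_layer.assert_constraints(eps=1e-6)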

build

Standard Keras build() method.

compute_mask

Computes an output mask tensor.

Arguments
`inputs` Tensor or list of tensors.
`mask` Tensor or list of tensors.
Returns
None or a tensor (or list of tensors, one per output tensor of the layer).

compute_output_shape

Standard Keras compute_output_shape() method.

count_params

Count the total number of scalars composing the weights.

Returns
An integer count.
Raises
`ValueError` if the layer isn't yet built (in which case its weights aren't yet defined).
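
A hedged sketch (`rtl_layer` and `example_inputs` are hypothetical); the layer must be built first, e.g. by calling it once on inputs:

_ = rtl_layer(example_inputs)  # builds the layer, creating its weights
num_params = rtl_layer.count_params()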

finalize_constraints

Ensures layer weights strictly satisfy constraints.

Applies an approximate projection to strictly satisfy the specified constraints. If monotonic_at_every_step == True, there is no need to call this function.

Returns
In eager mode, directly updates the weights and returns the variable that stores them. In graph mode, returns a list of `assign_add` ops which must be executed to update the weights.
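
A hedged custom-training-step sketch (model, rtl_layer, loss_fn, optimizer, x and y are assumed to exist; this is one possible way to schedule the projection):

with tf.GradientTape() as tape:
  loss = loss_fn(y, model(x))
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# Project the weights back onto the constraint set after the gradient step.
rtl_layer.finalize_constraints()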

from_config

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Arguments
`config` A Python dictionary, typically the output of get_config.
Returns
A layer instance.

get_config

Standard Keras get_config() method.
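
A hedged round-trip sketch showing how get_config pairs with from_config (configuration only; weights are not carried over, use get_weights/set_weights for that):

config = rtl_layer.get_config()
# Rebuild an identically configured, freshly initialized layer.
rtl_clone = tfl.layers.RTL.from_config(config)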

get_weights

Returns the current weights of the layer.

The weights of a layer represent the state of the layer. This function returns both trainable and non-trainable weight values associated with this layer as a list of Numpy arrays, which can in turn be used to load state into similarly parameterized layers.

For example, a Dense layer returns a list of two values: per-output weights and the bias value. These can be used to set the weights of another Dense layer:

a = tf.keras.layers.Dense(1,
  kernel_initializer=tf.constant_initializer(1.))
a_out = a(tf.convert_to_tensor([[1., 2., 3.]]))
a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
b = tf.keras.layers.Dense(1,
  kernel_initializer=tf.constant_initializer(2.))
b_out = b(tf.convert_to_tensor([[10., 20., 30.]]))
b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
b.set_weights(a.get_weights())
b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Returns
Weights values as a list of numpy arrays.

set_weights

Sets the weights of the layer, from Numpy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer's weights must be instantiated before calling this function, e.g. by calling the layer on an input.

For example, a Dense layer returns a list of two values: per-output weights and the bias value. These can be used to set the weights of another Dense layer:

a = tf.keras.layers.Dense(1,
  kernel_initializer=tf.constant_initializer(1.))
a_out = a(tf.convert_to_tensor([[1., 2., 3.]]))
a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
b = tf.keras.layers.Dense(1,
  kernel_initializer=tf.constant_initializer(2.))
b_out = b(tf.convert_to_tensor([[10., 20., 30.]]))
b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
b.set_weights(a.get_weights())
b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Arguments
`weights` A list of Numpy arrays. The number of arrays and their shapes must match the number and shapes of the layer's weights (i.e. it should match the output of `get_weights`).
Raises
`ValueError` If the provided weights list does not match the layer's specifications.

with_name_scope

Decorator to automatically enter the module name scope.

class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)

Using the above module would produce tf.Variables and tf.Tensors whose names include the module name:

mod = MyModule()
mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args
`method` The method to wrap.
Returns
The original method wrapped such that it enters the module's name scope.

__call__

Wraps call, applying pre- and post-processing steps.

Arguments
`*args` Positional arguments to be passed to `self.call`.
`**kwargs` Keyword arguments to be passed to `self.call`.
Returns
Output tensor(s).

Note:

  • The following optional keyword arguments are reserved for specific uses:
    • training: Boolean scalar tensor of Python boolean indicating whether the call is meant for training or inference.
    • mask: Boolean input mask.
  • If the layer's call method takes a mask argument (as some Keras layers do), its default value will be set to the mask generated for inputs by the previous layer (if input did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support).
Raises
`ValueError` if the layer's `call` method returns None (an invalid value).
`RuntimeError` if `super().__init__()` was not called in the constructor.
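
A hedged call sketch (the tensor names are illustrative) showing the input dict from the layer's input shape contract and the reserved training kwarg:

outputs = rtl_layer(
    {'unconstrained': [unconstrained_feats], 'increasing': [monotonic_feats]},
    training=False)  # reserved kwarg, forwarded to call()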