tfa.optimizers.MultiOptimizer

Multi Optimizer Wrapper for Discriminative Layer Training.

Creates a wrapper around a set of instantiated optimizer-layer pairs. Generally useful for transfer learning of deep networks.

Each optimizer will optimize only the weights associated with its paired layer. This can be used to implement discriminative layer training by assigning different learning rates to each optimizer-layer pair. (tf.keras.optimizers.Optimizer, List[tf.keras.layers.Layer]) pairs are also supported. Please note that the layers must be instantiated before instantiating the optimizer.

Args
optimizers_and_layers a list of tuples of an optimizer and a layer or model. Each tuple should contain exactly one instantiated optimizer and one object that subclasses tf.keras.Model, tf.keras.Sequential, or tf.keras.layers.Layer. Nested layers and models are discovered automatically. Alternatively, in place of a single layer you can pass a list of layers.
optimizer_specs specialized list for serialization. Should be left as None in almost all cases. If you are loading a serialized version of this optimizer, use tf.keras.models.load_model after saving a model compiled with this optimizer, as shown after the usage example below.

Usage:

import tensorflow as tf
import tensorflow_addons as tfa

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8),
    tf.keras.layers.Dense(16),
    tf.keras.layers.Dense(32),
])
optimizers = [
    tf.keras.optimizers.Adam(learning_rate=1e-4),
    tf.keras.optimizers.Adam(learning_rate=1e-2)
]
optimizers_and_layers = [(optimizers[0], model.layers[0]), (optimizers[1], model.layers[1:])]
optimizer = tfa.optimizers.MultiOptimizer(optimizers_and_layers)
model.compile(optimizer=optimizer, loss="mse")
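
Since optimizer_specs should normally stay None, serialization goes through the model. A minimal save/load round trip might look like the following (a sketch; the path is hypothetical, and tensorflow_addons must be imported at load time so the wrapper can be deserialized):

model.save("model_with_multi_optimizer")  # hypothetical path
restored = tf.keras.models.load_model("model_with_multi_optimizer")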

Note: This code should function on CPU, GPU, and TPU. Apply it within a tf.distribute.Strategy().scope() context as you would with any other optimizer.

Raises
ValueError in case of any invalid argument.

Attributes
clipnorm float or None. If set, clips gradients to a maximum norm.
clipvalue float or None. If set, clips gradients to a maximum value.
global_clipnorm float or None. If set, clips gradients so that their global norm does not exceed this value.
iterations Variable. The number of training steps this optimizer has run.
weights Returns the variables of this optimizer in the order they were created.

Methods

add_slot

Add a new slot variable for var.

A slot variable is an additional variable associated with var to train. It is allocated and managed by optimizers, e.g. Adam.

Args
var a Variable object.
slot_name name of the slot variable.
initializer initializer of the slot variable.
shape (Optional) shape of the slot variable. If not set, it will default to the shape of var.

Returns
A slot variable.
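
For illustration, a direct call might look like this (a minimal sketch assuming the legacy TF 2.x optimizer API that TensorFlow Addons targets; slots are normally allocated internally by the optimizer, so you rarely call this yourself):

v = tf.Variable([1.0, 2.0])
opt = tf.keras.optimizers.Adam()
m_slot = opt.add_slot(v, "m")  # zero-initialized by default, same shape as v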

add_weight

apply_gradients

View source

Wrapped apply_gradients method.

Returns an operation to be executed.
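
In a custom training loop the wrapper is used like any other optimizer, and each sub-optimizer updates only its own variables (a sketch reusing the model and optimizer from the usage example above; x and y are hypothetical batch tensors):

with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(model(x) - y))
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))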

create_optimizer_spec

View source

Creates a serializable optimizer spec.

The name of each variable is used rather than var.ref() to enable serialization and deserialization.
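
Assuming the method mirrors one entry of optimizers_and_layers, taking an optimizer plus a layer or model (this signature is an assumption; check the source), a call might look like:

spec = tfa.optimizers.MultiOptimizer.create_optimizer_spec(
    tf.keras.optimizers.Adam(learning_rate=1e-4),
    model.layers[0],
)  # a dict holding the optimizer and the names of its paired variables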

from_config

Creates an optimizer from its config.

This method is the reverse of get_config, capable of instantiating the same optimizer from the config dictionary.

Args
config A Python dictionary, typically the output of get_config.
custom_objects A Python dictionary mapping names to additional Python objects used to create this optimizer, such as a function used for a hyperparameter.

Returns
An optimizer instance.

get_config

View source

Returns the config of the optimizer.

An optimizer config is a Python dictionary (serializable) containing the configuration of an optimizer. The same optimizer can be reinstantiated later (without any saved state) from this configuration.

Returns
Python dictionary.
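
Together with from_config above, this enables a config round trip (a sketch; optimizer state such as slot variables is not preserved, so for a stateful round trip prefer tf.keras.models.load_model as noted earlier):

config = optimizer.get_config()
restored_opt = tfa.optimizers.MultiOptimizer.from_config(config)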

get_gradients

Returns gradients of loss with respect to params.

Should be used only in legacy v1 graph mode.

Args
loss Loss tensor.
params List of variables.

Returns
List of gradient tensors.

Raises
ValueError In case any gradient cannot be computed (e.g. if gradient function not implemented).

get_slot

get_slot_names

A list of names for this optimizer's slots.

get_updates

get_weights

Returns the current weights of the optimizer.

The weights of an optimizer are its state (i.e., variables). This function returns the weight values associated with this optimizer as a list of Numpy arrays. The first value is always the iterations count of the optimizer, followed by the optimizer's state variables in the order they were created. The returned list can in turn be used to load state into similarly parameterized optimizers.

For example, the RMSprop optimizer for this simple model returns a list of three values: the iteration count, followed by the root-mean-square values of the kernel and bias of the single Dense layer.

import numpy as np
import tensorflow as tf

opt = tf.keras.optimizers.RMSprop()
m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
m.compile(opt, loss='mse')
data = np.arange(100).reshape(5, 20)
labels = np.zeros(5)
results = m.fit(data, labels)  # one epoch of training
print(len(opt.get_weights()))  # 3

Returns
Weights values as a list of numpy arrays.

maybe_initialize_optimizer_spec

View source

minimize

Minimize loss by updating var_list.

This method simply computes the gradients using tf.GradientTape and calls apply_gradients(). If you want to process the gradients before applying them, call tf.GradientTape and apply_gradients() explicitly instead of using this function.

Args
loss Tensor or callable. If a callable, loss should take no arguments and return the value to minimize. If a Tensor, the tape argument must be passed.
var_list list or tuple of Variable objects to update to minimize loss, or a callable returning the list or tuple of Variable objects. Use a callable when the variable list would otherwise be incomplete before minimize is called, since the variables are created the first time loss is called.
grad_loss (Optional). A Tensor holding the gradient computed for loss.
name (Optional) str. Name for the returned operation.
tape (Optional) tf.GradientTape. If loss is provided as a Tensor, the tape that computed the loss must be provided.

Returns
An Operation that updates the variables in var_list. The iterations will be automatically increased by 1.

Raises
ValueError If some of the variables are not Variable objects.
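
With a callable loss, the wrapper builds the tape itself (a sketch reusing the model and optimizer from the usage example above; x and y are hypothetical batch tensors):

def loss_fn():
    return tf.reduce_mean(tf.square(model(x) - y))

optimizer.minimize(loss_fn, var_list=model.trainable_variables)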

set_weights

Set the weights of the optimizer.

The weights of an optimizer are its state (i.e., variables). This function takes the weight values associated with this optimizer as a list of Numpy arrays. The first value is always the iterations count of the optimizer, followed by the optimizer's state variables in the order they are created. The passed values are used to set the new state of the optimizer.

For example, the RMSprop optimizer for this simple model takes a list of three values: the iteration count, followed by the root-mean-square values of the kernel and bias of the single Dense layer.

import numpy as np
import tensorflow as tf

opt = tf.keras.optimizers.RMSprop()
m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
m.compile(opt, loss='mse')
data = np.arange(100).reshape(5, 20)
labels = np.zeros(5)
results = m.fit(data, labels)  # one epoch of training
new_weights = [np.array(10), np.ones([20, 10]), np.zeros([10])]
opt.set_weights(new_weights)
print(opt.iterations)  # <tf.Variable 'RMSprop/iter:0' shape=() dtype=int64, numpy=10>

Args
weights weight values as a list of numpy arrays.

variables

Returns the variables of this optimizer in the order they were created.