This is the class from which all layers inherit.
Compat aliases for migration: see the Migration guide for more details.
```python
tf.keras.layers.Layer(
    trainable=True, name=None, dtype=None, dynamic=False, **kwargs
)
```
A layer is a callable object that takes as input one or more tensors and that outputs one or more tensors. It involves *computation*, defined in the `call()` method, and a *state* (weight variables), defined either in the constructor `__init__()` or in the `build()` method.

Users will just instantiate a layer and then treat it as a callable.
| Arguments | |
| --- | --- |
| `trainable` | Boolean, whether the layer's variables should be trainable. |
| `name` | String name of the layer. |
| `dtype` | The dtype of the layer's computations and weights (default of `None` means use `tf.keras.backend.floatx` in TensorFlow 2, or the type of the first input in TensorFlow 1). |
| `dynamic` | Set this to `True` if your layer should only be run eagerly, and should not be used to generate a static computation graph. This would be the case for a Tree-RNN or a recursive network, for example, or generally for any layer that manipulates tensors using Python control flow. If `False`, we assume that the layer can safely be used to generate a static computation graph. |
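A quick sketch of these constructor arguments in use (the layer and values here are illustrative; built-in layers forward them to `Layer` via `**kwargs`):

```python
import tensorflow as tf

# `name`, `dtype` and `trainable` are forwarded to the base Layer.
layer = tf.keras.layers.Dense(4, name='my_dense', dtype='float64',
                              trainable=False)
print(layer.name)   # 'my_dense'
print(layer.dtype)  # 'float64'
```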
We recommend that descendants of `Layer` implement the following methods:
- `__init__()`: Defines custom layer attributes, and creates layer state variables that do not depend on input shapes, using `add_weight()`.
- `build(self, input_shape)`: This method can be used to create weights that depend on the shape(s) of the input(s), using `add_weight()`. `__call__()` will automatically build the layer (if it has not been built yet) by calling `build()`.
- `call(self, *args, **kwargs)`: Called in `__call__` after making sure `build()` has been called. `call()` performs the logic of applying the layer to the input tensors (which should be passed in as argument). Two reserved keyword arguments you can optionally use in `call()` are `training` (boolean, whether the call is in inference mode or training mode) and `mask` (boolean tensor encoding masked timesteps in the input, used in RNN layers); a sketch using `training` follows this list.
- `get_config(self)`: Returns a dictionary containing the configuration used to initialize this layer. If the keys differ from the arguments in `__init__`, then override `from_config(self)` as well. This method is used when saving the layer or a model that contains this layer.
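For instance, here is a minimal sketch of a layer that uses the reserved `training` argument to behave differently in training and inference (`NoisyLayer` is a hypothetical example, not part of the API):

```python
import tensorflow as tf

class NoisyLayer(tf.keras.layers.Layer):
  """Hypothetical layer: adds noise in training mode only."""

  def call(self, inputs, training=None):
    if training:
      # Inject Gaussian noise only during training.
      return inputs + tf.random.normal(tf.shape(inputs), stddev=0.1)
    return inputs

layer = NoisyLayer()
x = tf.zeros((2, 3))
out = layer(x, training=False)  # identity in inference mode
assert tf.reduce_all(out == 0)
```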
Here's a basic example: a layer with two variables, `w` and `b`, that returns `y = w . x + b`. It shows how to implement `build()` and `call()`. Variables set as attributes of a layer are tracked as weights of the layers (in `layer.weights`).
```python
import tensorflow as tf
from tensorflow.keras.layers import Layer

class SimpleDense(Layer):

  def __init__(self, units=32):
    super(SimpleDense, self).__init__()
    self.units = units

  def build(self, input_shape):
    # Create the state of the layer (weights).
    w_init = tf.random_normal_initializer()
    self.w = tf.Variable(
        initial_value=w_init(shape=(input_shape[-1], self.units),
                             dtype='float32'),
        trainable=True)
    b_init = tf.zeros_initializer()
    self.b = tf.Variable(
        initial_value=b_init(shape=(self.units,), dtype='float32'),
        trainable=True)

  def call(self, inputs):
    # Defines the computation from inputs to outputs.
    return tf.matmul(inputs, self.w) + self.b

# Instantiates the layer.
linear_layer = SimpleDense(4)

# This will also call `build(input_shape)` and create the weights.
y = linear_layer(tf.ones((2, 2)))
assert len(linear_layer.weights) == 2

# These weights are trainable, so they're listed in `trainable_weights`:
assert len(linear_layer.trainable_weights) == 2
```
Note that the method `add_weight()` offers a shortcut to create weights:
```python
class SimpleDense(Layer):

  def __init__(self, units=32):
    super(SimpleDense, self).__init__()
    self.units = units

  def build(self, input_shape):
    self.w = self.add_weight(shape=(input_shape[-1], self.units),
                             initializer='random_normal',
                             trainable=True)
    self.b = self.add_weight(shape=(self.units,),
                             initializer='random_normal',
                             trainable=True)

  def call(self, inputs):
    return tf.matmul(inputs, self.w) + self.b
```
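Because `SimpleDense` takes a `units` argument in `__init__`, making it serializable means implementing `get_config()` as described above. A minimal sketch extending the example (the `**kwargs` passthrough lets `from_config` restore base-class arguments such as `name`):

```python
import tensorflow as tf

class SimpleDense(tf.keras.layers.Layer):

  def __init__(self, units=32, **kwargs):
    super(SimpleDense, self).__init__(**kwargs)
    self.units = units

  def build(self, input_shape):
    self.w = self.add_weight(shape=(input_shape[-1], self.units),
                             initializer='random_normal', trainable=True)
    self.b = self.add_weight(shape=(self.units,),
                             initializer='random_normal', trainable=True)

  def call(self, inputs):
    return tf.matmul(inputs, self.w) + self.b

  def get_config(self):
    # Everything `__init__` needs to re-create this layer.
    config = super(SimpleDense, self).get_config()
    config.update({'units': self.units})
    return config

layer = SimpleDense(8)
clone = SimpleDense.from_config(layer.get_config())
assert clone.units == layer.units
```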
Besides trainable weights, updated via backpropagation during training, layers can also have non-trainable weights. These weights are meant to be updated manually during `call()`. Here's an example layer that computes the running sum of its inputs:
```python
class ComputeSum(Layer):

  def __init__(self, input_dim):
    super(ComputeSum, self).__init__()
    # Create a non-trainable weight.
    self.total = tf.Variable(initial_value=tf.zeros((input_dim,)),
                             trainable=False)

  def call(self, inputs):
    self.total.assign_add(tf.reduce_sum(inputs, axis=0))
    return self.total

my_sum = ComputeSum(2)
x = tf.ones((2, 2))

y = my_sum(x)
print(y.numpy())  # [2. 2.]

y = my_sum(x)
print(y.numpy())  # [4. 4.]

assert my_sum.weights == [my_sum.total]
assert my_sum.non_trainable_weights == [my_sum.total]
assert my_sum.trainable_weights == []
```
For more information about creating layers, see the guide Writing custom layers and models with Keras.
About the layer's `dtype` attribute:

Each layer has a dtype, which is typically the dtype of the layer's computations and variables. A layer's dtype can be queried via the `Layer.dtype` property. The dtype is specified with the `dtype` constructor argument. In TensorFlow 2, the dtype defaults to `tf.keras.backend.floatx()` if no dtype is passed. `floatx()` itself defaults to "float32". Additionally, layers will cast their inputs to the layer's dtype in TensorFlow 2. When mixed precision is used, layers may have different computation and variable dtypes. See `tf.keras.mixed_precision.experimental.Policy` for details on layer dtypes.
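A small sketch of the defaults described above (assuming TensorFlow 2 behavior with no global dtype policy set):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4)
print(layer.dtype)  # 'float32', i.e. tf.keras.backend.floatx()

# In TensorFlow 2, the layer casts its inputs to the layer's dtype:
x = tf.ones((2, 2), dtype='float64')
y = layer(x)
print(y.dtype)  # tf.float32
```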
| Attributes | |
| --- | --- |
| `name` | The name of the layer (string). |
| `dtype` | The dtype of the layer's computations and weights. If mixed precision is used with a `tf.keras.mixed_precision.experimental.Policy`, this is instead just the dtype of the layer's weights, as the computations are done in a different dtype. |
| `trainable_weights` | List of variables to be included in backprop. |
| `non_trainable_weights` | List of variables that should not be included in backprop. |
| `weights` | The concatenation of the lists `trainable_weights` and `non_trainable_weights` (in this order). |
| `trainable` | Whether the layer should be trained (boolean), i.e. whether its potentially-trainable weights should be returned as part of `layer.trainable_weights`. |
| `input_spec` | Optional (list of) `InputSpec` object(s) specifying the constraints on inputs that can be accepted by the layer. |
| `activity_regularizer` | Optional regularizer function for the output of this layer. |
| `dynamic` | Whether the layer is dynamic (eager-only); set in the constructor. |
| `input` | Retrieves the input tensor(s) of a layer. Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer. |
| `losses` | List of losses added using the `add_loss()` API. Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing `losses` under a `tf.GradientTape` will propagate gradients back to the corresponding variables. |
| `metrics` | List of metrics added using the `add_metric()` API. |
| `output` | Retrieves the output tensor(s) of a layer. Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer. |
| `supports_masking` | Whether this layer supports computing a mask using `compute_mask`. |
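A short sketch of how the `trainable` flag interacts with these weight lists (assuming TensorFlow 2 behavior):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4)
layer.build((None, 2))  # creates the kernel and bias variables
assert len(layer.trainable_weights) == 2

# Freezing the layer moves its variables out of backprop:
layer.trainable = False
assert len(layer.trainable_weights) == 0
assert len(layer.non_trainable_weights) == 2
assert len(layer.weights) == 2
```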
```python
add_loss(
    losses, **kwargs
)
```
Add loss tensor(s), potentially dependent on layer inputs.
Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs `a` and `b`, some entries in `layer.losses` may be dependent on `a` and some on `b`. This method automatically keeps track of dependencies.
This method can be used inside a subclassed layer or model's `call` function, in which case `losses` should be a Tensor or list of Tensors.
```python
class MyLayer(tf.keras.layers.Layer):
  def call(self, inputs):
    self.add_loss(tf.abs(tf.reduce_mean(inputs)))
    return inputs
```
This method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model's `Input`s. These losses become part of the model's topology and are tracked in `get_config`.
```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```
If this is not the case for your loss (if, for example, your loss references a `Variable` of one of the model's layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model's topology since they can't be serialized.
```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```
| Arguments | |
| --- | --- |
| `losses` | Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor. |
| `**kwargs` | Additional keyword arguments for backward compatibility. |
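Losses registered through `add_loss` (including weight and activity regularization) are typically consumed in a custom training step by adding `sum(model.losses)` to the task loss. A minimal sketch (the model, optimizer, and data here are illustrative):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(10,))
outputs = tf.keras.layers.Dense(1, kernel_regularizer='l2')(inputs)
model = tf.keras.Model(inputs, outputs)
optimizer = tf.keras.optimizers.SGD()

x = tf.ones((4, 10))
y_true = tf.zeros((4, 1))
with tf.GradientTape() as tape:
  y_pred = model(x, training=True)
  loss = tf.reduce_mean(tf.square(y_true - y_pred))
  loss += sum(model.losses)  # include `add_loss` / regularization terms
grads = tape.gradient(loss, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
```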