`Sequential` groups a linear stack of layers into a `tf.keras.Model`.
Inherits From: Model
tf.keras.Sequential(
layers=None, name=None
)
`Sequential` provides training and inference features on this model.
Examples:
# Optionally, the first layer can receive an `input_shape` argument:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8, input_shape=(16,)))
# Afterwards, we do automatic shape inference:
model.add(tf.keras.layers.Dense(4))
# This is identical to the following:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8, input_dim=16))
# And to the following:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8, batch_input_shape=(None, 16)))
# Note that you can also omit the `input_shape` argument.
# In that case the model doesn't have any weights until the first call
# to a training/evaluation method (since it isn't yet built):
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8))
model.add(tf.keras.layers.Dense(4))
# model.weights not created yet
# Whereas if you specify the input shape, the model gets built
# continuously as you are adding layers:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8, input_shape=(16,)))
model.add(tf.keras.layers.Dense(4))
len(model.weights)  # Returns 4
# When using the delayed-build pattern (no input shape specified), you can
# choose to manually build your model by calling
# `build(batch_input_shape)`:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8))
model.add(tf.keras.layers.Dense(4))
model.build((None, 16))
len(model.weights)  # Returns 4
# Note that when using the delayed-build pattern (no input shape specified),
# the model gets built the first time you call `fit` (or other training and
# evaluation methods).
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8))
model.add(tf.keras.layers.Dense(1))
model.compile(optimizer='sgd', loss='mse')
# This builds the model for the first time:
# (Here `x` and `y` stand in for Numpy arrays of training inputs and targets.)
model.fit(x, y, batch_size=32, epochs=10)
| Args | |
|---|---|
| `layers` | Optional list of layers to add to the model. |
| `name` | Optional name for the model. |
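For reference, a minimal sketch of passing both constructor arguments directly (the layer sizes and the model name here are illustrative):

import tensorflow as tf

# Both arguments are optional; layers can also be added later with `add`.
model = tf.keras.Sequential(
    layers=[tf.keras.layers.Dense(8, input_shape=(16,)),
            tf.keras.layers.Dense(4)],
    name='my_sequential')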
| Attributes | |
|---|---|
| `distribute_strategy` | The `tf.distribute.Strategy` this model was created under. |
| `layers` | |
| `metrics_names` | Returns the model's display labels for all outputs. |
| `run_eagerly` | Settable attribute indicating whether the model should run eagerly. Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls. By default, the model is compiled to a static graph to deliver the best execution performance. A minimal usage sketch follows this table. |
| `state_updates` | Returns the updates from all layers that are stateful. This is useful for separating training updates from state updates, e.g. when a layer's internal state needs to be updated during prediction. |
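As referenced above, a minimal sketch of toggling `run_eagerly` for debugging (the data and layer sizes are illustrative):

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_shape=(16,)),
    tf.keras.layers.Dense(1)])
model.compile(optimizer='sgd', loss='mse')
# Run step by step, like Python code, to ease debugging:
model.run_eagerly = True
model.fit(np.random.random((4, 16)), np.random.random((4, 1)),
          epochs=1, verbose=0)
# Switch back to compiled (graph) execution for best performance:
model.run_eagerly = False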
Methods
add
add(layer)
Adds a layer instance on top of the layer stack.
| Arguments | |
|---|---|
| `layer` | Layer instance. |
| Raises | |
|---|---|
| `TypeError` | If `layer` is not a layer instance. |
| `ValueError` | In case the `layer` argument does not know its input shape. |
| `ValueError` | In case the `layer` argument has multiple output tensors, or is already connected somewhere else (forbidden in `Sequential` models). |
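As an illustrative sketch of the `TypeError` case above (the error message text is paraphrased, not exact):

import tensorflow as tf

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8, input_shape=(16,)))  # a layer instance: OK
try:
    model.add('not a layer')  # not a layer instance
except TypeError as e:
    print('Rejected:', e)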
compile
compile(
optimizer='rmsprop', loss=None, metrics=None, loss_weights=None,
sample_weight_mode=None, weighted_metrics=None, **kwargs
)
Configures the model for training.
| Arguments | |
|---|---|
| `optimizer` | String (name of optimizer) or optimizer instance. See `tf.keras.optimizers`. |
| `loss` | String (name of objective function), objective function or `tf.keras.losses.Loss` instance. See `tf.keras.losses`. An objective function is any callable with the signature `loss = fn(y_true, y_pred)`, where `y_true` = ground truth values with shape `[batch_size, d0, .. dN]`, except for sparse loss functions such as sparse categorical crossentropy, where shape = `[batch_size, d0, .. dN-1]`, and `y_pred` = predicted values with shape `[batch_size, d0, .. dN]`. It returns a weighted loss float tensor. If a custom `Loss` instance is used and reduction is set to `NONE`, the return value has shape `[batch_size, d0, .. dN-1]`, i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses. |
| `metrics` | List of metrics to be evaluated by the model during training and testing. Each of these can be a string (name of a built-in function), a function, or a `tf.keras.metrics.Metric` instance. See `tf.keras.metrics`. Typically you will use `metrics=['accuracy']`. A function is any callable with the signature `result = fn(y_true, y_pred)`. To specify different metrics for different outputs of a multi-output model, you can also pass a dictionary, such as `metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}`. You can also pass a list (len = len(outputs)) of lists of metrics such as `metrics=[['accuracy'], ['accuracy', 'mse']]` or `metrics=['accuracy', ['accuracy', 'mse']]`. When you pass the strings 'accuracy' or 'acc', this is converted to one of `tf.keras.metrics.BinaryAccuracy`, `tf.keras.metrics.CategoricalAccuracy`, or `tf.keras.metrics.SparseCategoricalAccuracy` based on the loss function used and the model output shape. A similar conversion is done for the strings 'crossentropy' and 'ce'. |
| `loss_weights` | Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the `loss_weights` coefficients. If a list, it is expected to have a 1:1 mapping to the model's outputs. If a dict, it is expected to map output names (strings) to scalar coefficients. |
| `sample_weight_mode` | If you need to do timestep-wise sample weighting (2D weights), set this to `"temporal"`. `None` defaults to sample-wise weights (1D). If the model has multiple outputs, you can use a different `sample_weight_mode` on each output by passing a dictionary or a list of modes. |
| `weighted_metrics` | List of metrics to be evaluated and weighted by `sample_weight` or `class_weight` during training and testing. |
| `**kwargs` | Any additional arguments. For eager execution, pass `run_eagerly=True`. |
| Raises | |
|---|---|
| `ValueError` | In case of invalid arguments for `optimizer`, `loss`, `metrics` or `sample_weight_mode`. |
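For example, a minimal single-output configuration using an optimizer instance, a `Loss` instance, and a string metric name (the hyperparameters are illustrative):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_shape=(16,)),
    tf.keras.layers.Dense(1, activation='sigmoid')])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=['accuracy'])  # resolved to BinaryAccuracy given the loss and output shape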
evaluate
evaluate(
x=None, y=None, batch_size=None, verbose=1, sample_weight=None, steps=None,
callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False,
return_dict=False
)
Returns the loss value & metrics values for the model in test mode.
Computation is done in batches.
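A minimal usage sketch, assuming random Numpy data purely for illustration:

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_shape=(16,)),
    tf.keras.layers.Dense(1)])
model.compile(optimizer='sgd', loss='mse', metrics=['mae'])
x_test = np.random.random((64, 16))
y_test = np.random.random((64, 1))
# Returns [loss, mae] as a list, or a dict when return_dict=True:
loss, mae = model.evaluate(x_test, y_test, batch_size=32, verbose=0)
results = model.evaluate(x_test, y_test, batch_size=32,
                         verbose=0, return_dict=True)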
| Arguments | |
|---|---|
| `x` | Input data. It could be: a Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs); a TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). |