Sequential provides training and inference features on this model.
Examples:
# Optionally, the first layer can receive an `input_shape` argument:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8, input_shape=(16,)))
# Afterwards, we do automatic shape inference:
model.add(tf.keras.layers.Dense(4))

# This is identical to the following:
model = tf.keras.Sequential()
model.add(tf.keras.Input(shape=(16,)))
model.add(tf.keras.layers.Dense(8))

# Note that you can also omit the `input_shape` argument.
# In that case the model doesn't have any weights until the first call
# to a training/evaluation method (since it isn't yet built):
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8))
model.add(tf.keras.layers.Dense(4))
# model.weights not created yet

# Whereas if you specify the input shape, the model gets built
# continuously as you are adding layers:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8, input_shape=(16,)))
model.add(tf.keras.layers.Dense(4))
len(model.weights)  # Returns 4

# When using the delayed-build pattern (no input shape specified), you can
# choose to manually build your model by calling
# `build(batch_input_shape)`:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8))
model.add(tf.keras.layers.Dense(4))
model.build((None, 16))
len(model.weights)  # Returns 4
# Note that when using the delayed-build pattern (no input shape specified),
# the model gets built the first time you call `fit`, `eval`, or `predict`,
# or the first time you call the model on some input data.
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8))
model.add(tf.keras.layers.Dense(1))
model.compile(optimizer='sgd', loss='mse')
# This builds the model for the first time:
model.fit(x, y, batch_size=32, epochs=10)
run_eagerly
Settable attribute indicating whether the model should run eagerly.
Running eagerly means that your model will be run step by step,
like Python code. Your model might run slower, but it should become easier
for you to debug it by stepping into individual layer calls.
By default, we will attempt to compile your model to a static graph to
deliver the best execution performance.
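As a quick sketch of toggling this attribute (the layer sizes and random data below are made up for illustration; only run_eagerly itself comes from the API described above):

import numpy as np
import tensorflow as tf

# Illustrative model; sizes are arbitrary.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_shape=(16,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='sgd', loss='mse')

# Run step by step (eagerly), e.g. to set breakpoints inside a layer's call().
model.run_eagerly = True
x = np.random.random((4, 16))
y = np.random.random((4, 1))
model.fit(x, y, epochs=1, verbose=0)

# Restore the default graph-compiled execution.
model.run_eagerly = False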
loss
String (name of objective function), objective function or
tf.keras.losses.Loss instance. See tf.keras.losses. An objective
function is any callable with the signature loss = fn(y_true,
y_pred), where y_true = ground truth values with shape =
[batch_size, d0, .. dN], except sparse loss functions such as sparse
categorical crossentropy where shape = [batch_size, d0, .. dN-1].
y_pred = predicted values with shape = [batch_size, d0, .. dN]. It
returns a weighted loss float tensor. If a custom Loss instance is
used and reduction is set to NONE, return value has the shape
[batch_size, d0, .. dN-1], i.e. per-sample or per-timestep loss values;
otherwise, it is a scalar. If the model has multiple outputs, you can
use a different loss on each output by passing a dictionary or a list
of losses. The loss value that will be minimized by the model will
then be the sum of all individual losses.
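For example (a minimal sketch; the layer sizes, the my_mae helper, and the output names out_a / out_b are invented for illustration), loss accepts a string, a callable, or a per-output dict:

import tensorflow as tf

# Custom loss: any callable with the signature fn(y_true, y_pred).
def my_mae(y_true, y_pred):
    return tf.reduce_mean(tf.abs(y_true - y_pred), axis=-1)

# Single output: a string name, a Loss instance, or a callable all work.
single = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(16,))])
single.compile(optimizer='sgd', loss=my_mae)

# Multiple outputs (functional API): one loss per named output; the total
# minimized loss is the sum of the individual losses.
inputs = tf.keras.Input(shape=(16,))
out_a = tf.keras.layers.Dense(1, name='out_a')(inputs)
out_b = tf.keras.layers.Dense(4, activation='softmax', name='out_b')(inputs)
multi = tf.keras.Model(inputs, [out_a, out_b])
multi.compile(
    optimizer='sgd',
    loss={'out_a': 'mse', 'out_b': tf.keras.losses.CategoricalCrossentropy()},
)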
metrics
List of metrics to be evaluated by the model during training
and testing. Each of these can be a string (name of a built-in
function), function or a tf.keras.metrics.Metric instance. See
tf.keras.metrics. Typically you will use metrics=['accuracy']. A
function is any callable with the signature result = fn(y_true,
y_pred). To specify different metrics for different outputs of a
multi-output model, you could also pass a dictionary, such as
metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}.
You can also pass a list (len = len(outputs)) of lists of metrics
such as metrics=[['accuracy'], ['accuracy', 'mse']] or
metrics=['accuracy', ['accuracy', 'mse']]. When you pass the
strings 'accuracy' or 'acc', we convert this to one of
tf.keras.metrics.BinaryAccuracy,
tf.keras.metrics.CategoricalAccuracy,
tf.keras.metrics.SparseCategoricalAccuracy based on the loss
function used and the model output shape. We do a similar
conversion for the strings 'crossentropy' and 'ce' as well.
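As a hedged sketch of the dict and list-of-lists forms (the model, the output names output_a / output_b, and the metric choices are illustrative only):

import tensorflow as tf

inputs = tf.keras.Input(shape=(16,))
output_a = tf.keras.layers.Dense(1, activation='sigmoid', name='output_a')(inputs)
output_b = tf.keras.layers.Dense(1, name='output_b')(inputs)
model = tf.keras.Model(inputs, [output_a, output_b])

# Per-output metrics keyed by output name; 'accuracy' is resolved to
# BinaryAccuracy here because output_a uses a binary crossentropy loss.
model.compile(
    optimizer='sgd',
    loss={'output_a': 'binary_crossentropy', 'output_b': 'mse'},
    metrics={'output_a': 'accuracy', 'output_b': ['mae', 'mse']},
)

# Equivalent list-of-lists form, aligned 1:1 with the model's outputs.
model.compile(
    optimizer='sgd',
    loss=['binary_crossentropy', 'mse'],
    metrics=[['accuracy'], ['mae', 'mse']],
)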
loss_weights
Optional list or dictionary specifying scalar coefficients
(Python floats) to weight the loss contributions of different model
outputs. The loss value that will be minimized by the model will then
be the weighted sum of all individual losses, weighted by the
loss_weights coefficients.
If a list, it is expected to have a 1:1 mapping to the model's
outputs. If a dict, it is expected to map output names (strings)
to scalar coefficients.
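For instance (a sketch; the output names main / aux and the 0.2 coefficient are made up), the minimized value becomes a weighted sum of the per-output losses:

import tensorflow as tf

inputs = tf.keras.Input(shape=(16,))
main = tf.keras.layers.Dense(1, name='main')(inputs)
aux = tf.keras.layers.Dense(1, name='aux')(inputs)
model = tf.keras.Model(inputs, [main, aux])

# Minimized value: 1.0 * loss(main) + 0.2 * loss(aux).
model.compile(
    optimizer='sgd',
    loss={'main': 'mse', 'aux': 'mse'},
    loss_weights={'main': 1.0, 'aux': 0.2},
)
# The list form maps 1:1 to the outputs: loss_weights=[1.0, 0.2]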
weighted_metrics
List of metrics to be evaluated and weighted by
sample_weight or class_weight during training and testing.
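A small sketch of the difference from plain metrics (the model, data, and weights below are illustrative): the weighted 'mae' is averaged using the sample_weight passed to fit or evaluate, while the plain 'mae' is not.

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(16,))])
model.compile(optimizer='sgd', loss='mse',
              metrics=['mae'],            # reported unweighted
              weighted_metrics=['mae'])   # reported weighted by sample_weight

x = np.random.random((8, 16))
y = np.random.random((8, 1))
w = np.array([1.0] * 4 + [0.5] * 4)  # per-sample weights
model.fit(x, y, sample_weight=w, epochs=1, verbose=0)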
run_eagerly
Bool. Defaults to False. If True, this Model's
logic will not be wrapped in a tf.function. Recommended to leave
this as None unless your Model cannot be run inside a
tf.function.
steps_per_execution
Int. Defaults to 1. The number of batches to
run during each tf.function call. Running multiple batches
inside a single tf.function call can greatly improve performance
on TPUs or small models with a large Python overhead.
At most, one full epoch will be run each
execution. If a number larger than the size of the epoch is passed,
the execution will be truncated to the size of the epoch.
Note that if steps_per_execution is set to N,
Callback.on_batch_begin and Callback.on_batch_end methods
will only be called every N batches
(i.e. before/after each tf.function execution).
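A minimal sketch (the value 16 and the model are arbitrary choices for illustration):

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(16,))])
# Pack 16 batches into each tf.function call to cut Python dispatch overhead;
# batch-level callbacks then fire only once per 16 batches.
model.compile(optimizer='sgd', loss='mse', steps_per_execution=16)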
**kwargs
Arguments supported for backwards compatibility only.
Raises
ValueError
In case of invalid arguments for
optimizer, loss or metrics.
Returns the loss value & metrics values for the model in test mode.
Computation is done in batches (see the batch_size argument).
Arguments
x
Input data. It could be:
A Numpy array (or array-like), or a list of arrays
(in case the model has multiple inputs).
A TensorFlow tensor, or a list of tensors
(in case the model has multiple inputs).
A dict mapping input names to the corresponding array/tensors,
if the model has named inputs.
A tf.data dataset. Should return a tuple
of either (inputs, targets) or
(inputs, targets, sample_weights).
A generator or keras.utils.Sequence returning (inputs, targets)
or (inputs, targets, sample_weights).
A more detailed description of unpacking behavior for iterator types
(Dataset, generator, Sequence) is given in the Unpacking behavior
for iterator-like inputs section of Model.fit.
y
Target data. Like the input data x, it could be either Numpy
array(s) or TensorFlow tensor(s). It should be consistent with x
(you cannot have Numpy inputs and tensor targets, or inversely). If
x is a dataset, generator or keras.utils.Sequence instance, y
should not be specified (since targets will be obtained from the
iterator/dataset).
batch_size
Integer or None. Number of samples per batch of
computation. If unspecified, batch_size will default to 32. Do not
specify the batch_size if your data is in the form of a dataset,
generators, or keras.utils.Sequence instances (since they generate
batches).
verbose
0 or 1. Verbosity mode. 0 = silent, 1 = progress bar.
sample_weight
Optional Numpy array of weights for the test samples,
used for weighting the loss function. You can either pass a flat (1D)
Numpy array with the same length as the input samples
(1:1 mapping between weights and samples), or in the case of
temporal data, you can pass a 2D array with shape (samples,
sequence_length), to apply a different weight to every timestep
of every sample. This argument is not supported when x is a
dataset; instead, pass sample weights as the third element of x.
steps
Integer or None. Total number of steps (batches of samples)
before declaring the evaluation round finished. Ignored with the
default value of None. If x is a tf.data dataset and steps is
None, 'evaluate' will run until the dataset is exhausted. This
argument is not supported with array inputs.
max_queue_size
Integer. Used for generator or keras.utils.Sequence
input only. Maximum size for the generator queue. If unspecified,
max_queue_size will default to 10.
workers
Integer. Used for generator or keras.utils.Sequence input
only. Maximum number of processes to spin up when using process-based
threading. If unspecified, workers will default to 1. If 0, will
execute the generator on the main thread.
use_multiprocessing
Boolean. Used for generator or
keras.utils.Sequence input only. If True, use process-based
threading. If unspecified, use_multiprocessing will default to
False. Note that because this implementation relies on
multiprocessing, you should not pass non-picklable arguments to the
generator as they can't be passed easily to children processes.
return_dict
If True, loss and metric results are returned as a dict,
with each key being the name of the metric. If False, they are
returned as a list.
See the discussion of Unpacking behavior for iterator-like inputs
in the documentation for Model.fit.
Returns
Scalar test loss (if the model has a single output and no metrics)
or list of scalars (if the model has multiple outputs
and/or metrics). The attribute model.metrics_names will give you
the display labels for the scalar outputs.
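Putting the arguments above together, a hedged usage sketch (the model, shapes, and random data are invented for illustration):

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_shape=(16,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='sgd', loss='mse', metrics=['mae'])

x = np.random.random((64, 16))
y = np.random.random((64, 1))
model.fit(x, y, epochs=1, verbose=0)

# Array inputs: evaluation is batched via batch_size.
loss, mae = model.evaluate(x, y, batch_size=32, verbose=0)

# return_dict=True keys the results by metric name instead of returning a list.
results = model.evaluate(x, y, batch_size=32, verbose=0, return_dict=True)

# A tf.data dataset yielding (inputs, targets): do not pass y, and omit
# batch_size since the dataset already generates batches.
ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(32)
model.evaluate(ds, verbose=0)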