
tf.compat.v1.train.Saver

Saves and restores variables.

Migrate to TF2

tf.compat.v1.train.Saver is not supported for saving and restoring checkpoints in TF2. Please switch to tf.train.Checkpoint or tf.keras.Model.save_weights, which perform more robust, object-based saving.
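For reference, the object-based equivalents look roughly like this (a minimal sketch; the Dense model and the checkpoint prefixes are hypothetical placeholders):

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])  # placeholder model

# Object-based saving and restoring with tf.train.Checkpoint.
ckpt = tf.train.Checkpoint(model=model)
save_path = ckpt.save('/tmp/tf2_ckpt/ckpt')  # hypothetical checkpoint prefix
ckpt.restore(save_path)

# Or the Keras equivalent.
model.save_weights('/tmp/tf2_weights/ckpt')  # hypothetical prefix
model.load_weights('/tmp/tf2_weights/ckpt')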

How to Rewrite Checkpoints

Please rewrite your checkpoints immediately using the object-based checkpoint APIs.

You can load a name-based checkpoint written by tf.compat.v1.train.Saver using tf.train.Checkpoint.restore or tf.keras.Model.load_weights. However, you may have to change the names of the variables in your model to match the variable names in the name-based checkpoint, which can be viewed with tf.train.list_variables(path).
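For example, you might inspect and then load a name-based checkpoint like this (a minimal sketch; the path and the Dense model are hypothetical, and the model's variable names must match what list_variables reports):

import tensorflow as tf

ckpt_path = '/tmp/name_based_ckpt'  # hypothetical path to a Saver-written checkpoint

# Print the variable names and shapes stored in the name-based checkpoint.
for name, shape in tf.train.list_variables(ckpt_path):
    print(name, shape)

# If the model's variable names line up with the names above, the checkpoint
# can be loaded with the object-based API.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
status = tf.train.Checkpoint(model=model).restore(ckpt_path)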

Another option is to create an assignment_map that maps the names of the variables in the name-based checkpoint to the variables in your model, e.g.:

{
    'sequential/dense/bias': model.variables[0],
    'sequential/dense/kernel': model.variables[1]
}

and use tf.compat.v1.train.init_from_checkpoint(path, assignment_map) to restore the name-based checkpoint.

After restoring, re-encode your checkpoint using tf.train.Checkpoint.save or tf.keras.Model.save_weights.
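Putting those steps together, a rewrite might look roughly like this (a sketch only; the paths and the Dense model are placeholders, and the checkpoint-side names should be whatever tf.train.list_variables reports for your checkpoint):

import tensorflow as tf

old_ckpt = '/tmp/name_based_ckpt'      # hypothetical Saver-written checkpoint
new_prefix = '/tmp/object_based/ckpt'  # hypothetical output prefix

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Map checkpoint names to model variables and restore with the compat helper.
assignment_map = {
    'sequential/dense/bias': model.layers[0].bias,
    'sequential/dense/kernel': model.layers[0].kernel,
}
tf.compat.v1.train.init_from_checkpoint(old_ckpt, assignment_map)

# Re-encode the restored values as an object-based checkpoint.
tf.train.Checkpoint(model=model).save(new_prefix)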

See the Checkpoint compatibility section of the migration guide for more details.

Checkpoint Management in TF2

Use tf.train.CheckpointManager to manage checkpoints in TF2. tf.train.CheckpointManager offers equivalent keep_checkpoint_every_n_hours and max_to_keep parameters.

To recover the latest checkpoint,

checkpoint = tf.train.Checkpoint(model=model)
# CheckpointManager needs the directory holding the checkpoints and a retention policy.
manager = tf.train.CheckpointManager(checkpoint, directory='/path/to/ckpts', max_to_keep=5)
status = checkpoint.restore(manager.latest_checkpoint)

tf.train.CheckpointManager also writes a CheckpointState proto which contains the timestamp when each checkpoint was created.
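On the saving side, a sketch using a hypothetical '/tmp/ckpts' directory and a placeholder model; the two keep-policy arguments mirror the Saver constructor options described later on this page:

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
checkpoint = tf.train.Checkpoint(model=model)
manager = tf.train.CheckpointManager(
    checkpoint, directory='/tmp/ckpts', max_to_keep=5,
    keep_checkpoint_every_n_hours=2)

for step in range(3):
    # ... run a training step here ...
    manager.save(checkpoint_number=step)  # writes /tmp/ckpts/ckpt-<step>

print(manager.checkpoints)        # retained checkpoint paths
print(manager.latest_checkpoint)  # most recent checkpoint path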

Writing MetaGraphDefs in TF2

To replace tf.compat.v1.train.Saver.save(write_meta_graph=True), use tf.saved_model.save to write the MetaGraphDef (which is contained in saved_model.pb).
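For instance (a sketch; the model and export directory are placeholders):

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Exports saved_model.pb (which contains the MetaGraphDef), plus variables/ and
# assets/, under the hypothetical export directory.
tf.saved_model.save(model, '/tmp/saved_model')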

Description


See Variables for an overview of variables, saving and restoring.

The Saver class adds ops to save and restore variables to and from checkpoints. It also provides convenience methods to run these ops.

Checkpoints are binary files in a proprietary format which map variable names to tensor values. The best way to examine the contents of a checkpoint is to load it using a Saver.

Savers can automatically number checkpoint filenames with a provided counter. This lets you keep multiple checkpoints at different steps while training a model. For example, you can number the checkpoint filenames with the training step number. To avoid filling up disks, savers manage checkpoint files automatically. For example, they can keep only the N most recent files, or one checkpoint for every N hours of training.

You number checkpoint filenames by passing a value to the optional global_step argument to save():

saver.save(sess, 'my-model', global_step=0) ==> filename: 'my-model-0'
...
saver.save(sess, 'my-model', global_step=1000) ==> filename: 'my-model-1000'

Additionally, optional arguments to the Saver() constructor let you control the proliferation of checkpoint files on disk:

  • max_to_keep indicates the maximum number of recent checkpoint files to keep. As new files are created, older files are deleted. If None or 0, no checkpoints are deleted from the filesystem, but only the last one is kept in the checkpoint file. Defaults to 5 (that is, the 5 most recent checkpoint files are kept).

  • keep_checkpoint_every_n_hours: In addition to keeping the most recent max_to_keep checkpoint files, you might want to keep one checkpoint file for every N hours of training. This can be useful if you want to later analyze how a model progressed during a long training session. For example, passing keep_checkpoint_every_n_hours=2 ensures that you keep one checkpoint file for every 2 hours of training. The default value of 10,000 hours effectively disables the feature.

Note that you still have to call the save() method to save the model. Passing these arguments to the constructor will not save variables automatically for you.
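For example, a Saver configured with both retention options might be created like this (a sketch; it assumes variables have already been created in the default graph):

# Keep the 10 most recent checkpoints, plus one checkpoint for every 2 hours of training.
saver = tf.compat.v1.train.Saver(max_to_keep=10, keep_checkpoint_every_n_hours=2)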

A training program that saves regularly looks like:

...
# Create a saver.
saver = tf.compat.v1.train.Saver(...variables...)
# Launch the graph and train, saving the model every 1,000 steps.
sess = tf.compat.v1.Session()
for step in range(1000000):
    sess.run(..training_op..)
    if step % 1000 == 0:
        # Append the step number to the checkpoint name:
        saver.save(sess, 'my-model', global_step=step)

In addition to checkpoint files, savers keep a protocol buffer on disk with the list of recent checkpoints. This is used to manage numbered checkpoint files and by latest_checkpoint(), which makes it easy to discover the path to the most recent checkpoint. That protocol buffer is stored in a file named 'checkpoint' next to the checkpoint files.

If you create several savers, you can specify a different filename for the protocol buffer file in the call to save().
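For example (a sketch; '/tmp/train_dir' is a hypothetical directory containing the 'checkpoint' file written by save()):

import tensorflow as tf

# Reads the 'checkpoint' protocol buffer and returns the path of the most recent
# checkpoint (e.g. '/tmp/train_dir/my-model-1000'), or None if there is none.
latest = tf.train.latest_checkpoint('/tmp/train_dir')
print(latest)

# If a different protocol buffer filename was passed to save(), pass the same name here
# ('checkpoint_eval' is a hypothetical example):
# latest = tf.train.latest_checkpoint('/tmp/train_dir', latest_filename='checkpoint_eval')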

Args

var_list: A list of Variable/SaveableObject, or a dictionary mapping names to SaveableObjects. If None, defaults to the list of all saveable objects.