Groups trackable objects, saving and restoring them.
Compat aliases for migration (see the Migration guide for more details): `tf.contrib.eager.Checkpoint`
tf.train.Checkpoint( **kwargs )
Checkpoint's constructor accepts keyword arguments whose values are types
that contain trackable state, such as optimizers,
tf.keras.Layer implementations, or
tf.keras.Model implementations. It saves these values with a checkpoint, and
maintains a save_counter for numbering checkpoints.
Example usage when graph building:
```python
import tensorflow as tf
import os

checkpoint_directory = "/tmp/training_checkpoints"
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")

checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)
status = checkpoint.restore(tf.train.latest_checkpoint(checkpoint_directory))
train_op = optimizer.minimize( ... )
status.assert_consumed()  # Optional sanity checks.
with tf.compat.v1.Session() as session:
  # Use the Session to restore variables, or initialize them if
  # tf.train.latest_checkpoint returned None.
  status.initialize_or_restore(session)
  for _ in range(num_training_steps):
    session.run(train_op)
  checkpoint.save(file_prefix=checkpoint_prefix)
```
Example usage with eager execution enabled:
```python
import tensorflow as tf
import os

tf.compat.v1.enable_eager_execution()

checkpoint_directory = "/tmp/training_checkpoints"
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")

checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)
status = checkpoint.restore(tf.train.latest_checkpoint(checkpoint_directory))
for _ in range(num_training_steps):
  optimizer.minimize( ... )  # Variables will be restored on creation.
status.assert_consumed()  # Optional sanity checks.
checkpoint.save(file_prefix=checkpoint_prefix)
```
Checkpoint.save and Checkpoint.restore write and read object-based
checkpoints, in contrast to
tf.compat.v1.train.Saver which writes and reads
variable.name based checkpoints. Object-based checkpointing saves a graph of
dependencies between Python objects (Layers, Optimizers, Variables,
etc.) with named edges, and this graph is used to match variables when
restoring a checkpoint. It can be more robust to changes in the Python
program, and helps to support restore-on-create for variables when executing
eagerly. Prefer tf.train.Checkpoint over tf.compat.v1.train.Saver for new
code.
Checkpoint objects have dependencies on the objects passed as keyword
arguments to their constructors, and each dependency is given a name that is
identical to the name of the keyword argument for which it was created.
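For instance, in the following sketch (hypothetical names and paths, eager execution assumed), matching is done by the dependency name "step" rather than by the variable's own name:

```python
import os
import tensorflow as tf

tf.compat.v1.enable_eager_execution()

checkpoint_directory = "/tmp/naming_demo"
os.makedirs(checkpoint_directory, exist_ok=True)

step = tf.Variable(0, name="some_unrelated_name")  # The variable's own name is not used.
path = tf.train.Checkpoint(step=step).save(os.path.join(checkpoint_directory, "ckpt"))

# A fresh Checkpoint with the same "step" keyword matches the saved value.
restored_step = tf.Variable(-1)
tf.train.Checkpoint(step=restored_step).restore(path)
print(restored_step.numpy())  # 0
```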
TensorFlow classes like Layers and
Optimizers will automatically add
dependencies on their own variables (e.g. "kernel" and "bias" for
tf.keras.layers.Dense). Inheriting from
tf.keras.Model makes managing
dependencies easy in user-defined classes, since
Model hooks into attribute
assignment. For example:
```python
class Regress(tf.keras.Model):

  def __init__(self):
    super(Regress, self).__init__()
    self.input_transform = tf.keras.layers.Dense(10)
    # ...

  def call(self, inputs):
    x = self.input_transform(inputs)
    # ...
```
This Model has a dependency named "input_transform" on its Dense layer,
which in turn depends on its variables. As a result, saving an instance of
Regress using tf.train.Checkpoint will also save all the variables created
by the Dense layer.
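A minimal sketch of this (a hypothetical model analogous to Regress, with eager execution assumed and made-up paths) shows the named edges in the resulting checkpoint:

```python
import os
import tensorflow as tf

tf.compat.v1.enable_eager_execution()

class TinyRegress(tf.keras.Model):

  def __init__(self):
    super(TinyRegress, self).__init__()
    self.input_transform = tf.keras.layers.Dense(10)  # Tracked as "input_transform".

  def call(self, inputs):
    return self.input_transform(inputs)

model = TinyRegress()
model(tf.zeros([1, 3]))  # Calling the model creates the Dense layer's kernel and bias.

checkpoint_directory = "/tmp/regress_demo"
os.makedirs(checkpoint_directory, exist_ok=True)
path = tf.train.Checkpoint(model=model).save(os.path.join(checkpoint_directory, "ckpt"))

# The saved keys follow the dependency graph, e.g. entries containing
# "model/input_transform/kernel" and "model/input_transform/bias".
print([name for name, _ in tf.train.list_variables(path)])
```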
When variables are assigned to multiple workers, each worker writes its own section of the checkpoint. These sections are then merged/re-indexed to behave as a single checkpoint. This avoids copying all variables to one worker, but does require that all workers see a common filesystem.
While tf.keras.Model.save_weights and tf.train.Checkpoint.save save in the
same format, note that the root of the resulting checkpoint is the object the
save method is attached to. This means saving a tf.keras.Model using
save_weights and loading into a tf.train.Checkpoint with a Model
attached (or vice versa) will not match the
Model's variables. See the
guide to training checkpoints for details.
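As a sketch of that caution (hypothetical paths, eager execution, and a small stand-in model), the extra "model" edge means the two checkpoints do not line up:

```python
import os
import tensorflow as tf

tf.compat.v1.enable_eager_execution()

net = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])

weights_dir = "/tmp/weights_only_demo"
os.makedirs(weights_dir, exist_ok=True)
prefix = os.path.join(weights_dir, "ckpt")
net.save_weights(prefix, save_format="tf")  # Root of this checkpoint is `net` itself.

# A Checkpoint with a "model" edge expects one more level of nesting than the
# save_weights checkpoint contains, so `net`'s variables are not matched here.
status = tf.train.Checkpoint(model=net).restore(prefix)
```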
Args:
|`**kwargs`|Keyword arguments are set as attributes of this object, and are saved with the checkpoint. Values must be trackable objects.|
Raises:
|`ValueError`|If objects in `kwargs` are not trackable.|
restore( save_path )
Restore a training checkpoint.
Restores this Checkpoint and any objects it depends on.
When executing eagerly, either assigns values immediately if variables to restore have been created already, or defers restoration until the variables are created. Dependencies added after this call will be matched if they have a corresponding object in the checkpoint (the restore request will queue in any trackable object waiting for the expected dependency to be added).
When graph building, restoration ops are added to the graph but not run immediately.
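For example, the following sketch (eager execution, made-up names and paths) defers restoration until the matching dependency is added:

```python
import os
import tensorflow as tf

tf.compat.v1.enable_eager_execution()

checkpoint_directory = "/tmp/deferred_demo"
os.makedirs(checkpoint_directory, exist_ok=True)

# Write a checkpoint containing one variable under the dependency name "bias".
source = tf.train.Checkpoint(bias=tf.Variable([1.0, 2.0]))
path = source.save(os.path.join(checkpoint_directory, "ckpt"))

# Restore into a Checkpoint that does not yet have a "bias" dependency; the
# restoration is queued rather than applied.
target = tf.train.Checkpoint()
status = target.restore(path)

# Adding the dependency afterwards triggers the queued assignment.
target.bias = tf.Variable([0.0, 0.0])
print(target.bias.numpy())  # [1. 2.]
```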
To ensure that loading is complete and no more assignments will take place,
use the assert_consumed() method of the status object returned by restore:
```python
checkpoint = tf.train.Checkpoint( ... )
checkpoint.restore(path).assert_consumed()
```
An exception will be raised if any Python objects in the dependency graph were not found in the checkpoint, or if any checkpointed values do not have a matching Python object.
When graph building,
assert_consumed() indicates that all of the restore
ops that will be created for this checkpoint have been created. They can be
run via the
run_restore_ops() method of the status object:
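For example (a sketch reusing the checkpoint, checkpoint_directory, and session from the graph-building example above):

```python
status = checkpoint.restore(tf.train.latest_checkpoint(checkpoint_directory))
status.run_restore_ops(session)  # Runs the restore ops created by restore() in the session.
```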
If the checkpoint has not been consumed completely, then the list of restore ops will grow as more objects are added to the dependency graph.
Name-based tf.compat.v1.train.Saver checkpoints can be loaded using this
method. Names are used to match variables. No restore ops are created/run
until run_restore_ops() or initialize_or_restore() are called on the
returned status object when graph building, but there is restore-on-creation
when executing eagerly. Re-encode name-based checkpoints using
tf.train.Checkpoint.save as soon as possible.
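A sketch of that migration path (reusing the checkpoint and checkpoint_prefix from the eager example above, with a hypothetical name-based checkpoint path):

```python
# Names from the tf.compat.v1.train.Saver checkpoint are matched to variables,
# then the values are re-encoded in the object-based format.
checkpoint.restore("/tmp/name_based_checkpoints/model.ckpt-10000")
checkpoint.save(file_prefix=checkpoint_prefix)
```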
Args:
|`save_path`|The path to the checkpoint, as returned by save or tf.train.latest_checkpoint.|
Returns:
|A load status object, which can be used to make assertions about the status of a checkpoint restoration and run initialization/restore ops.|
The returned status object supports the methods described above, including assert_consumed(), run_restore_ops(), and initialize_or_restore().
save( file_prefix, session=None )
Saves a training checkpoint and provides basic checkpoint management.
The saved checkpoint includes variables created by this object and any
trackable objects it depends on at the time Checkpoint.save() is called.
save is a basic convenience wrapper around the write method,
sequentially numbering checkpoints using
save_counter and updating the
metadata used by
tf.train.latest_checkpoint. More advanced checkpoint
management, for example garbage collection and custom numbering, may be
provided by other utilities which also wrap write
(tf.contrib.checkpoint.CheckpointManager for example).
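For example, a sketch (assuming eager execution and reusing checkpoint from the examples above; the directory is made up) of wrapping checkpoints with tf.contrib.checkpoint.CheckpointManager, which garbage-collects older checkpoints:

```python
manager = tf.contrib.checkpoint.CheckpointManager(
    checkpoint, directory="/tmp/managed_checkpoints", max_to_keep=5)
save_path = manager.save()  # Keeps at most the five most recent checkpoints on disk.
```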
Args:
|`file_prefix`|A prefix to use for the checkpoint filenames (/path/to/directory/and_a_prefix). Names are generated based on this prefix and Checkpoint.save_counter.|
|`session`|The session to evaluate variables in. Ignored when executing eagerly. If not provided when graph building, the default session is used.|
Returns:
|The full path to the checkpoint.|
write( file_prefix, session=None )
Writes a training checkpoint.
The checkpoint includes variables created by this object and any
trackable objects it depends on at the time Checkpoint.write() is called.
write does not number checkpoints, increment
save_counter, or update the
metadata used by
tf.train.latest_checkpoint. It is primarily intended for
use by higher level checkpoint management utilities.
save provides a very
basic implementation of these features.
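As a sketch (eager execution assumed; the prefix and numbering scheme are made up, and checkpoint is reused from the examples above), a caller that manages its own numbering can use write directly:

```python
# write records exactly this prefix; it does not touch save_counter or the
# latest-checkpoint metadata, so the caller owns naming and cleanup.
path = checkpoint.write("/tmp/step-1000")
```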
Args:
|`file_prefix`|A prefix to use for the checkpoint filenames (/path/to/directory/and_a_prefix).|