Deletes old checkpoints.
Example usage:

import tensorflow as tf
checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)
manager = tf.contrib.checkpoint.CheckpointManager(
    checkpoint, directory="/tmp/model", max_to_keep=5)
status = checkpoint.restore(manager.latest_checkpoint)
while True:
  # train
  manager.save()
CheckpointManager preserves its own state across instantiations (see the
__init__ documentation for details). Only one should be active in a
particular directory at a time.
__init__(
    checkpoint,
    directory,
    max_to_keep,
    keep_checkpoint_every_n_hours=None
)
Configure a CheckpointManager for use in directory.

If a CheckpointManager was previously used in directory, its
state will be restored. This includes the list of managed checkpoints and
the timestamp bookkeeping necessary to support
keep_checkpoint_every_n_hours. The behavior of the new CheckpointManager
will be the same as the previous CheckpointManager, including cleaning up
existing checkpoints if appropriate.
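To make that state restoration concrete, here is a minimal sketch, assuming a TF 1.x environment with eager execution enabled; the /tmp/state_demo directory and the step variable are purely illustrative. A second CheckpointManager constructed in the same directory picks up the bookkeeping written by the first:

import tensorflow as tf

tf.enable_eager_execution()

step = tf.Variable(0)                        # illustrative state to checkpoint
checkpoint = tf.train.Checkpoint(step=step)

# First manager: write two checkpoints to the directory.
manager = tf.contrib.checkpoint.CheckpointManager(
    checkpoint, directory="/tmp/state_demo", max_to_keep=3)
manager.save()
manager.save()

# A new manager in the same directory restores the saved bookkeeping,
# so it already knows about the checkpoints written above.
resumed = tf.contrib.checkpoint.CheckpointManager(
    checkpoint, directory="/tmp/state_demo", max_to_keep=3)
print(resumed.checkpoints)  # same list the first manager was tracking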
Checkpoints are only considered for deletion just after a new checkpoint has
been added. At that point, max_to_keep checkpoints will remain in an
"active set". Once a checkpoint is preserved by
keep_checkpoint_every_n_hours it will not be deleted by this
CheckpointManager or any future CheckpointManager instantiated in
directory (regardless of the new setting of keep_checkpoint_every_n_hours).
The max_to_keep checkpoints in the active set may be deleted by this
CheckpointManager or a future CheckpointManager instantiated in
directory (subject to its max_to_keep and keep_checkpoint_every_n_hours settings).
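For example, a manager configured as in the following sketch keeps the three most recent checkpoints in its active set and additionally preserves roughly one checkpoint every two hours; the directory name and variable are illustrative, and the time-based preservation only becomes visible over long-running training:

import tensorflow as tf

tf.enable_eager_execution()

checkpoint = tf.train.Checkpoint(step=tf.Variable(0))  # illustrative state
manager = tf.contrib.checkpoint.CheckpointManager(
    checkpoint,
    directory="/tmp/retention_demo",
    max_to_keep=3,                      # active set holds the 3 newest checkpoints
    keep_checkpoint_every_n_hours=2)    # older checkpoints kept at ~2-hour intervals
manager.save()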
Args:

checkpoint: The tf.train.Checkpoint instance to save and manage checkpoints for.
directory: The path to a directory in which to write checkpoints. A special file named "checkpoint" is also written to this directory (in a human-readable text format) which contains the state of the CheckpointManager.
max_to_keep: An integer, the number of checkpoints to keep. Unless preserved by keep_checkpoint_every_n_hours, checkpoints will be deleted from the active set, oldest first, until only max_to_keep checkpoints remain. If None, no checkpoints are deleted and everything stays in the active set. Note that max_to_keep=None will keep all checkpoint paths in memory and in the checkpoint state protocol buffer on disk.
keep_checkpoint_every_n_hours: Upon removal from the active set, a checkpoint will be preserved if it has been at least keep_checkpoint_every_n_hours since the last preserved checkpoint. The default setting of None does not preserve any checkpoints in this way.

Raises:

ValueError: If max_to_keep is not a positive integer.
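The max_to_keep validation can be seen in a short sketch, assuming eager execution and hypothetical directory names: a non-positive integer is rejected per the Raises clause above, while None is accepted and disables deletion entirely.

import tensorflow as tf

tf.enable_eager_execution()

checkpoint = tf.train.Checkpoint(step=tf.Variable(0))  # illustrative state

try:
    # Not a positive integer: rejected with ValueError.
    tf.contrib.checkpoint.CheckpointManager(
        checkpoint, directory="/tmp/bad_demo", max_to_keep=0)
except ValueError as e:
    print("rejected:", e)

# None is accepted: nothing is ever deleted from the active set.
manager = tf.contrib.checkpoint.CheckpointManager(
    checkpoint, directory="/tmp/keep_all_demo", max_to_keep=None)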
checkpoints

A list of managed checkpoints.

Note that checkpoints saved due to keep_checkpoint_every_n_hours will not
show up in this list (to avoid ever-growing filename lists).

Returns:

A list of filenames, sorted from oldest to newest.
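A short sketch of how this list evolves as old checkpoints are pruned, assuming eager execution and an illustrative directory: after several saves with max_to_keep=2, only the two newest paths remain, ordered oldest to newest.

import tensorflow as tf

tf.enable_eager_execution()

checkpoint = tf.train.Checkpoint(step=tf.Variable(0))  # illustrative state
manager = tf.contrib.checkpoint.CheckpointManager(
    checkpoint, directory="/tmp/list_demo", max_to_keep=2)

for _ in range(5):
    manager.save()

# Only the two most recent checkpoints are still managed,
# sorted from oldest to newest.
print(manager.checkpoints)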
latest_checkpoint

The prefix of the most recent checkpoint in directory.

Equivalent to tf.train.latest_checkpoint(directory) where directory is
the constructor argument to CheckpointManager.

Suitable for passing to tf.train.Checkpoint.restore to resume training.

Returns:

The checkpoint prefix. If there are no checkpoints, returns None.
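A typical resume pattern, sketched under the same assumptions (illustrative directory and step variable): on the first run there are no checkpoints, so latest_checkpoint is None and training starts from scratch.

import tensorflow as tf

tf.enable_eager_execution()

step = tf.Variable(0)                          # illustrative training state
checkpoint = tf.train.Checkpoint(step=step)
manager = tf.contrib.checkpoint.CheckpointManager(
    checkpoint, directory="/tmp/resume_demo", max_to_keep=5)

if manager.latest_checkpoint:
    # Restore the most recent checkpoint before continuing training.
    checkpoint.restore(manager.latest_checkpoint)
    print("resumed from", manager.latest_checkpoint)
else:
    print("starting from scratch")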
save(
    session=None,
    checkpoint_number=None
)
Creates a new checkpoint and manages it.
Args:

session: The session to evaluate variables in. Ignored when executing eagerly. If not provided when graph building, the default session is used.
checkpoint_number: An optional integer, or an integer-dtype Tensor, used to number the checkpoint. If None (default), checkpoints are numbered using checkpoint.save_counter. Even if checkpoint_number is provided, save_counter is still incremented. A user-provided checkpoint_number is not incremented even if it is a Variable.

Returns:

The path to the new checkpoint. It is also recorded in the checkpoints property.
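A sketch of explicit numbering, again assuming eager execution with illustrative names: the returned save path embeds the supplied number instead of checkpoint.save_counter, although the counter still advances.

import tensorflow as tf

tf.enable_eager_execution()

checkpoint = tf.train.Checkpoint(step=tf.Variable(0))  # illustrative state
manager = tf.contrib.checkpoint.CheckpointManager(
    checkpoint, directory="/tmp/numbered_demo", max_to_keep=5)

path = manager.save(checkpoint_number=100)    # path ends in the given number
print(path)
print(checkpoint.save_counter.numpy())        # save_counter was still incremented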