A training helper that checkpoints models and computes summaries.
```python
tf.compat.v1.train.Supervisor(
    graph=None,
    ready_op=USE_DEFAULT,
    ready_for_local_init_op=USE_DEFAULT,
    is_chief=True,
    init_op=USE_DEFAULT,
    init_feed_dict=None,
    local_init_op=USE_DEFAULT,
    logdir=None,
    summary_op=USE_DEFAULT,
    saver=USE_DEFAULT,
    global_step=USE_DEFAULT,
    save_summaries_secs=120,
    save_model_secs=600,
    recovery_wait_secs=30,
    stop_grace_secs=120,
    checkpoint_basename='model.ckpt',
    session_manager=None,
    summary_writer=USE_DEFAULT,
    init_fn=None,
    local_init_run_options=None
)
```
This class is deprecated. Please use `tf.compat.v1.train.MonitoredTrainingSession` instead.
The Supervisor is a small wrapper around a `SessionManager` that takes care of common needs of TensorFlow training programs.
Use for a single program
```python
with tf.Graph().as_default():
  ...add operations to the graph...
  # Create a Supervisor that will checkpoint the model in '/tmp/mydir'.
  sv = Supervisor(logdir='/tmp/mydir')
  # Get a TensorFlow session managed by the supervisor.
  with sv.managed_session(FLAGS.master) as sess:
    # Use the session to train the graph.
    while not sv.should_stop():
      sess.run(<my_train_op>)
```
Within the `with sv.managed_session()` block all variables in the graph have been initialized. In addition, a few services have been started to checkpoint the model and add summaries to the event log.

If the program crashes and is restarted, the managed session automatically reinitializes variables from the most recent checkpoint.
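The checkpoint and summary intervals are controlled by constructor arguments. A minimal sketch, not part of the original example (`my_train_op` stands for your training op):

```python
import tensorflow.compat.v1 as tf

# Checkpoint every 5 minutes and write summaries every minute. Reusing
# the same logdir across restarts is what enables automatic recovery.
sv = tf.train.Supervisor(logdir='/tmp/mydir',
                         save_model_secs=300,
                         save_summaries_secs=60)
with sv.managed_session('') as sess:
  while not sv.should_stop():
    sess.run(my_train_op)
```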
The supervisor is notified of any exception raised by one of the services.
After an exception is raised, `should_stop()` returns `True`. In that case the training loop should also stop. This is why the training loop has to check `sv.should_stop()` regularly.

Exceptions that indicate that the training inputs have been exhausted, such as `tf.errors.OutOfRangeError`, also cause `sv.should_stop()` to return `True` but are not re-raised from the `with` block: they indicate a normal termination.
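A minimal sketch of a loop that handles input exhaustion explicitly; the `try/except` and the call to `sv.request_stop()` are one possible pattern, not part of the original example:

```python
import tensorflow.compat.v1 as tf

sv = tf.train.Supervisor(logdir='/tmp/mydir')
with sv.managed_session('') as sess:
  # Re-check should_stop() on every iteration: any exception raised by
  # one of the services flips it to True.
  while not sv.should_stop():
    try:
      sess.run(my_train_op)
    except tf.errors.OutOfRangeError:
      # Training inputs exhausted: request a clean stop.
      sv.request_stop()
```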
Use for multiple replicas
To train with replicas you deploy the same program in a `Cluster`.
One of the tasks must be identified as the chief: the task that handles
initialization, checkpoints, summaries, and recovery. The other tasks
depend on the chief for these services.
The only change you have to make to the single-program code is to indicate whether the program is running as the chief.
```python
# Choose a task as the chief. This could be based on server_def.task_index,
# or job_def.name, or job_def.tasks. It's entirely up to the end user.
# But there can be only one *chief*.
is_chief = (server_def.task_index == 0)
server = tf.distribute.Server(server_def)

with tf.Graph().as_default():
  ...add operations to the graph...
  # Create a Supervisor that uses log directory on a shared file system.
  # Indicate if you are the 'chief'.
  sv = Supervisor(logdir='/shared_directory/...', is_chief=is_chief)
  # Get a Session in a TensorFlow server on the cluster.
  with sv.managed_session(server.target) as sess:
    # Use the session to train the graph.
    while not sv.should_stop():
      sess.run(<my_train_op>)
```
In the chief task, the Supervisor works exactly as in the first example above. In the other tasks, `sv.managed_session()` waits for the model to have been initialized before returning a session to the training code. The non-chief tasks depend on the chief task for initializing the model.
If one of the tasks crashes and restarts, `managed_session()` checks if the model is initialized. If yes, it just creates a session and returns it to the training code, which proceeds normally. If the model needs to be initialized, the chief task takes care of reinitializing it; the other tasks just wait for the model to have been initialized.
What master string to use
Whether you are running on your machine or in the cluster, you can use the following values for the `--master` flag:
* `''` requests an in-process session that does not use RPC.
* `'local'` requests a session that uses the RPC-based "Master interface" to run TensorFlow programs. See `tf.train.Server.create_local_server` for details.
* `'grpc://hostname:port'` requests a session that uses the RPC interface to a specific host, and also allows the in-process master to access remote TensorFlow workers. Often, it is appropriate to pass `server.target` (from `tf.distribute.Server`).
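For illustration, a hedged sketch of how these strings are passed to `managed_session()`; the hostname and port below are placeholders:

```python
# In-process master, no RPC:
with sv.managed_session('') as sess:
  ...

# RPC interface to a specific host (placeholder address):
with sv.managed_session('grpc://worker0.example.com:2222') as sess:
  ...

# In a cluster, typically the target of this task's server, as in the
# replica example above:
with sv.managed_session(server.target) as sess:
  ...
```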
Launching additional services
`managed_session()` launches the Checkpoint and Summary services (threads). If you need more services to run you can simply launch them in the block controlled by `managed_session()`.

Example: Start a thread to print losses. We want this thread to run every 60 seconds, so we launch it with `sv.loop()`.
```python
...
sv = Supervisor(logdir='/tmp/mydir')
with sv.managed_session(FLAGS.master) as sess:
  sv.loop(60, print_loss, (sess, ))
  while not sv.should_stop():
    sess.run(my_train_op)
```
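The `print_loss` callable is not defined in the example above. A minimal sketch, assuming the graph contains a `loss` tensor:

```python
def print_loss(sess):
  # Called by the loop thread roughly every 60 seconds until the
  # supervisor's coordinator requests a stop.
  print('loss: %f' % sess.run(loss))
```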
Launching fewer services
managed_session() launches the "summary" and "checkpoint" threads which use
either the optionally
saver passed to the constructor, or
default ones created automatically by the supervisor. If you want to run
your own summary and checkpointing logic, disable these services by passing
None to the
Example: Create summaries manually every 100 steps in the chief.
```python
# Create a Supervisor with no automatic summaries.
sv = Supervisor(logdir='/tmp/mydir', is_chief=is_chief, summary_op=None)
# As summary_op was None, managed_session() does not start the
# summary thread.
with sv.managed_session(FLAGS.master) as sess:
  for step in range(1000000):
    if sv.should_stop():
      break
    if is_chief and step % 100 == 0:
      # Create the summary every 100 chief steps.
      sv.summary_computed(sess, sess.run(my_summary_op))
    else:
      # Train normally.
      sess.run(my_train_op)
```
Custom model initialization
`managed_session()` only supports initializing the model by running an `init_op` or restoring from the latest checkpoint. If you have special initialization needs, see how to specify a `local_init_op` when creating the supervisor. You can also use the `SessionManager` directly to create a session and check if it could be initialized automatically.
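A hedged sketch of that second option, using `tf.compat.v1.train.SessionManager` directly; the variable and directory names are placeholders:

```python
import tensorflow.compat.v1 as tf

with tf.Graph().as_default():
  v = tf.get_variable('v', shape=[], initializer=tf.zeros_initializer())
  init_op = tf.global_variables_initializer()
  saver = tf.train.Saver()

  sm = tf.train.SessionManager()
  # In the chief: run init_op, or restore from checkpoint_dir if a
  # checkpoint exists there.
  sess = sm.prepare_session('', init_op=init_op, saver=saver,
                            checkpoint_dir='/tmp/my_ckpt_dir')
  # In non-chief tasks you would instead block until the chief has
  # initialized the model:
  # sess = sm.wait_for_session(master)
```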