A training helper that checkpoints models and computes summaries.
```python
tf.compat.v1.train.Supervisor(
    graph=None,
    ready_op=USE_DEFAULT,
    ready_for_local_init_op=USE_DEFAULT,
    is_chief=True,
    init_op=USE_DEFAULT,
    init_feed_dict=None,
    local_init_op=USE_DEFAULT,
    logdir=None,
    summary_op=USE_DEFAULT,
    saver=USE_DEFAULT,
    global_step=USE_DEFAULT,
    save_summaries_secs=120,
    save_model_secs=600,
    recovery_wait_secs=30,
    stop_grace_secs=120,
    checkpoint_basename='model.ckpt',
    session_manager=None,
    summary_writer=USE_DEFAULT,
    init_fn=None,
    local_init_run_options=None
)
```
This class is deprecated. Please use tf.compat.v1.train.MonitoredTrainingSession instead.
The Supervisor is a small wrapper around a Coordinator, a Saver, and a
SessionManager that takes care of common needs of TensorFlow training programs.
Use for a single program
```python
with tf.Graph().as_default():
  ...add operations to the graph...
  # Create a Supervisor that will checkpoint the model in '/tmp/mydir'.
  sv = Supervisor(logdir='/tmp/mydir')
  # Get a TensorFlow session managed by the supervisor.
  with sv.managed_session(FLAGS.master) as sess:
    # Use the session to train the graph.
    while not sv.should_stop():
      sess.run(<my_train_op>)
```
Within the with sv.managed_session() block all variables in the graph have
been initialized. In addition, a few services have been started to
checkpoint the model and add summaries to the event log.
If the program crashes and is restarted, the managed session automatically reinitializes variables from the most recent checkpoint.
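The following is a minimal sketch of that recovery behavior, assuming a writable /tmp/supervisor_logdir; an incremented global step stands in for a real training op. On a fresh start the step begins at 0; after a crash and restart, managed_session() restores it from the latest checkpoint.

```python
# A minimal sketch of checkpoint recovery, assuming a writable
# /tmp/supervisor_logdir; the incremented global step stands in for a
# real training op.
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

with tf.Graph().as_default():
    step = tf.train.get_or_create_global_step()
    train_op = tf.assign_add(step, 1)
    # The supervisor's checkpoint service saves every save_model_secs seconds.
    sv = tf.train.Supervisor(logdir='/tmp/supervisor_logdir',
                             save_model_secs=60)
    with sv.managed_session('') as sess:
        # 0 on a fresh start; the last checkpointed value after a restart.
        print('starting from step', sess.run(step))
        while not sv.should_stop():
            if sess.run(train_op) >= 10000:
                sv.request_stop()  # end the training loop cleanly
```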
The supervisor is notified of any exception raised by one of the services.
After an exception is raised, should_stop() returns True. In that case
the training loop should also stop. This is why the training loop has to
check for sv.should_stop().
Exceptions that indicate that the training inputs have been exhausted,
tf.errors.OutOfRangeError, also cause sv.should_stop() to return True
but are not re-raised from the with block: they indicate a normal
termination.
Use for multiple replicas
To train with replicas you deploy the same program in a Cluster.
One of the tasks must be identified as the chief: the task that handles
initialization, checkpoints, summaries, and recovery. The other tasks
depend on the chief for these services.
The only change you need to make to the single-program code is to indicate whether the program is running as the chief.
```python
# Choose a task as the chief. This could be based on server_def.task_index,
# or job_def.name, or job_def.tasks. It's entirely up to the end user.
# But there can be only one *chief*.
is_chief = (server_def.task_index == 0)
server = tf.distribute.Server(server_def)

with tf.Graph().as_default():
  ...add operations to the graph...
  # Create a Supervisor that uses log directory on a shared file system.
  # Indicate if you are the 'chief'
  sv = Supervisor(logdir='/shared_directory/...', is_chief=is_chief)
  # Get a Session in a TensorFlow server on the cluster.
  with sv.managed_session(server.target) as sess:
    # Use the session to train the graph.
    while not sv.should_stop():
      sess.run(<my_train_op>)
```
In the chief task, the
Supervisor works exactly as in the first example
above. In the other tasks
sv.managed_session() waits for the Model to have
been initialized before returning a session to the training code. The
non-chief tasks depend on the chief task for initializing the model.
If one of the tasks crashes and restarts, managed_session()
checks if the Model is initialized. If yes, it just creates a session and
returns it to the training code that proceeds normally. If the model needs
to be initialized, the chief task takes care of reinitializing it; the other
tasks just wait for the model to have been initialized.
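To make this concrete, here is a hedged sketch of a two-task setup. The localhost cluster and the TASK_INDEX environment variable are assumptions for illustration; run the script once per task. Task 0 acts as chief and initializes the model; task 1 blocks in managed_session() until the model is ready.

```python
# A minimal sketch of the chief/non-chief split, assuming a hypothetical
# two-task cluster on localhost and a TASK_INDEX environment variable.
import os
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

task_index = int(os.environ.get('TASK_INDEX', '0'))
cluster = tf.train.ClusterSpec(
    {'worker': ['localhost:2222', 'localhost:2223']})
server = tf.train.Server(cluster, job_name='worker', task_index=task_index)
is_chief = (task_index == 0)

with tf.Graph().as_default():
    # Pin the shared state to one task so both workers see the same
    # variables; otherwise each task would create its own copy.
    with tf.device('/job:worker/task:0'):
        step = tf.train.get_or_create_global_step()
        train_op = tf.assign_add(step, 1)
    sv = tf.train.Supervisor(logdir='/tmp/shared_logdir', is_chief=is_chief)
    # The chief initializes the model; non-chief tasks wait here until the
    # variables report ready, then train against the shared state.
    with sv.managed_session(server.target) as sess:
        while not sv.should_stop():
            if sess.run(train_op) >= 1000:
                sv.request_stop()  # stop this task's services cleanly
```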
What master string to use
Whether you are running on your machine or in the cluster, you can use the following values for the --master flag:

* Specifying '' requests an in-process session that does not use RPC.

* Specifying 'local' requests a session that uses the RPC-based "Master interface" to run TensorFlow programs. See tf.train.Server.create_local_server for details.
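For reference, a small sketch of where this string goes (hypothetical logdir): it is simply the first argument to managed_session().

```python
# A minimal sketch: the master string is passed straight to
# managed_session(). '' runs in-process; 'local' uses the RPC-based
# master described above.
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

with tf.Graph().as_default():
    v = tf.get_variable('v', shape=[], initializer=tf.zeros_initializer())
    sv = tf.train.Supervisor(logdir='/tmp/supervisor_logdir')
    with sv.managed_session('') as sess:
        print(sess.run(v))
```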