Given `graph`, a directory to write summaries to (`output_dir`), a checkpoint
to restore variables from, and a `dict` of `Tensor`s to evaluate, run an eval
loop for `max_steps` steps, or until an exception (generally, an
end-of-input signal from a reader operation) is raised from running
`eval_dict`.

In each step of evaluation, all tensors in the `eval_dict` are evaluated, and
every `log_every_steps` steps, they are logged. At the very end of evaluation,
a summary is evaluated (finding the summary ops using `Supervisor`'s logic)
and written to `output_dir`.
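The loop described above can be sketched framework-agnostically. This is a minimal sketch, not the actual implementation: `run_step`, `EndOfInput`, and `log` are hypothetical stand-ins for `session.run` on the `eval_dict`, the reader's end-of-input exception, and the logging call.

```python
class EndOfInput(Exception):
    """Stand-in for the end-of-input exception a reader raises."""


def eval_loop(run_step, max_steps=None, log_every_steps=10, log=print):
    """Run `run_step` (which evaluates all tensors in `eval_dict`) until
    `max_steps` steps complete or the reader signals end-of-input.

    Returns (results, step): the results of the last step (None if no
    steps ran) and how many steps were run.
    """
    results = None
    step = 0
    while max_steps is None or step < max_steps:
        try:
            results = run_step()  # evaluate every tensor in eval_dict
        except EndOfInput:        # reader exhausted its inputs
            break
        step += 1
        if step % log_every_steps == 0:  # periodic logging of eval_dict
            log("step %d: %s" % (step, results))
    return results, step
```

Note that the final step's results are kept even when the loop terminates early via the end-of-input exception, which matches the "result of the final evaluation is returned" behavior documented below.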
Args:
  graph: A `Graph` to evaluate. It is expected that this graph is not in use
    elsewhere.
  output_dir: A string containing the directory to write a summary to.
  checkpoint_path: A string containing the path to a checkpoint to restore.
    Can be `None` if the graph doesn't require loading any variables.
  eval_dict: A `dict` mapping string names to tensors to evaluate. It is
    evaluated in every logging step. The result of the final evaluation is
    returned. If `update_op` is `None`, then it's evaluated in every step. If
    `max_steps` is `None`, this should depend on a reader that will raise an
    end-of-input exception when the inputs are exhausted.
  update_op: A `Tensor` which is run in every step.
  global_step_tensor: A `Variable` containing the global step. If `None`,
    one is extracted from the graph using the same logic as in `Supervisor`.
    Used to place eval summaries on training curves.
  supervisor_master: The master string to use when preparing the session.
  log_every_steps: Integer. Output logs every `log_every_steps` evaluation
    steps. The logs contain the `eval_dict` and timing information.
  feed_fn: A function that is called every iteration to produce a `feed_dict`
    passed to `session.run` calls. Optional.
  max_steps: Integer. Evaluate `eval_dict` this many times.
Returns:
  A tuple `(eval_results, global_step)`:
  eval_results: A `dict` mapping `string` to numeric values (`int`, `float`)
    that are the result of running `eval_dict` in the last step. `None` if no
    eval steps were run.
  global_step: The global step this evaluation corresponds to.
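A caller might unpack the returned tuple as in the following sketch. Only the `(eval_results, global_step)` shape and the `None`-when-no-steps-ran case come from this docstring; the `report` helper itself is hypothetical.

```python
def report(eval_results, global_step):
    """Format the (eval_results, global_step) tuple the eval loop returns.

    `eval_results` is None when no eval steps were run (e.g. the reader
    was already exhausted), so guard before reading any metrics.
    """
    if eval_results is None:
        return "step %d: no eval steps were run" % global_step
    metrics = ", ".join(
        "%s=%.4f" % (name, value)
        for name, value in sorted(eval_results.items())
    )
    return "step %d: %s" % (global_step, metrics)
```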