Adding Summaries to Event Files

See Summaries and TensorBoard for an overview of summaries, event files, and visualization in TensorBoard.

class tf.train.SummaryWriter

Writes Summary protocol buffers to event files.

The SummaryWriter class provides a mechanism to create an event file in a given directory and add summaries and events to it. The class updates the file contents asynchronously. This allows a training program to call methods to add data to the file directly from the training loop, without slowing down training.


tf.train.SummaryWriter.__init__(logdir, graph=None, max_queue=10, flush_secs=120, graph_def=None)

Creates a SummaryWriter and an event file.

On construction the summary writer creates a new event file in logdir. This event file will contain Event protocol buffers constructed when you call one of the following functions: add_summary(), add_session_log(), add_event(), or add_graph().

If you pass a Graph to the constructor it is added to the event file. (This is equivalent to calling add_graph() later.)

TensorBoard will pick up the graph from the file and display it graphically so you can interactively explore the graph you built. You will usually pass the graph from the session in which you launched it:

# ... create a graph ...
# Launch the graph in a session.
sess = tf.Session()
# Create a summary writer, add the 'graph' to the event file.
writer = tf.train.SummaryWriter(<some-directory>, sess.graph)

The other arguments to the constructor control the asynchronous writes to the event file:

  • flush_secs: How often, in seconds, to flush the added summaries and events to disk.
  • max_queue: Maximum number of summaries or events pending to be written to disk before one of the 'add' calls blocks.
Args:
  • logdir: A string. Directory where the event file will be written.
  • graph: A Graph object, such as sess.graph.
  • max_queue: Integer. Size of the queue for pending events and summaries.
  • flush_secs: Number. How often, in seconds, to flush the pending events and summaries to disk.
  • graph_def: DEPRECATED: Use the graph argument instead.

tf.train.SummaryWriter.add_summary(summary, global_step=None)

Adds a Summary protocol buffer to the event file.

This method wraps the provided summary in an Event protocol buffer and adds it to the event file.

You can pass the result of evaluating any summary op, using Session.run() or Tensor.eval(), to this function. Alternatively, you can pass a tf.Summary protocol buffer that you populate with your own data. The latter is commonly done to report evaluation results in event files.

Args:
  • summary: A Summary protocol buffer, optionally serialized as a string.
  • global_step: Number. Optional global step value to record with the summary.

tf.train.SummaryWriter.add_session_log(session_log, global_step=None)

Adds a SessionLog protocol buffer to the event file.

This method wraps the provided session log in an Event protocol buffer and adds it to the event file.

Args:
  • session_log: A SessionLog protocol buffer.
  • global_step: Number. Optional global step value to record with the session log.

tf.train.SummaryWriter.add_event(event)

Adds an event to the event file.

Args:
  • event: An Event protocol buffer.

tf.train.SummaryWriter.add_graph(graph, global_step=None, graph_def=None)

Adds a Graph to the event file.

The graph described by the protocol buffer will be displayed by TensorBoard. Most users pass a graph in the constructor instead.

Args:
  • graph: A Graph object, such as sess.graph.
  • global_step: Number. Optional global step counter to record with the graph.
  • graph_def: DEPRECATED. Use the graph parameter instead.
Raises:
  • ValueError: If both graph and graph_def are passed to the method.

tf.train.SummaryWriter.add_run_metadata(run_metadata, tag, global_step=None)

Adds metadata for a single session.run() call.

Args:
  • run_metadata: A RunMetadata protobuf object.
  • tag: The tag name for this metadata.
  • global_step: Number. Optional global step counter to record with the StepStats.
Raises:
  • ValueError: If the provided tag was already used for this type of event.

tf.train.SummaryWriter.get_logdir()

Returns the directory where the event file will be written.


tf.train.SummaryWriter.flush()

Flushes the event file to disk.

Call this method to make sure that all pending events have been written to disk.


tf.train.SummaryWriter.close()

Flushes the event file to disk and closes the file.

Call this method when you do not need the summary writer anymore.

Other Methods


tf.train.SummaryWriter.reopen()

Reopens the summary writer.

Can be called after close() to add more events in the same directory. The events will go into a new events file.

Does nothing if the summary writer was not closed.


tf.train.summary_iterator(path)

An iterator for reading Event protocol buffers from an event file.

You can use this function to read events written to an event file. It returns a Python iterator that yields Event protocol buffers.

Example: Print the contents of an events file.

for e in tf.train.summary_iterator(path_to_events_file):
    print(e)

Example: Print selected summary values.

# This example supposes that the events file contains summaries with a
# summary value tag 'loss'.  These could have been added by calling
# `add_summary()`, passing the output of a scalar summary op created
# with: `tf.scalar_summary(['loss'], loss_tensor)`.
for e in tf.train.summary_iterator(path_to_events_file):
    for v in e.summary.value:
        if v.tag == 'loss':
            print(v.simple_value)

See the protocol buffer definitions of Event and Summary for more information about their attributes.

Args:
  • path: The path to an event file created by a SummaryWriter.
Yields:
  • Event protocol buffers.