Utility functions

tf.device(device_name_or_function)

Wrapper for Graph.device() using the default graph.

See Graph.device() for more details.

Args:
  • device_name_or_function: The device name or function to use in the context.
Returns:

A context manager that specifies the default device to use for newly created ops.
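
For example, a minimal sketch assuming the TF 1.x graph-mode API and that a CPU device is available:

import tensorflow as tf

# Ops created inside the context are assigned to the given device.
with tf.device("/cpu:0"):
  a = tf.constant([1.0, 2.0], name="a")
  b = tf.constant([3.0, 4.0], name="b")
  c = a + b

# The requested device is recorded on the op at graph-construction time.
print(c.device)  # e.g. "/device:CPU:0" (exact form varies by version)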


tf.container(container_name)

Wrapper for Graph.container() using the default graph.

Args:
  • container_name: The container string to use in the context.
Returns:

A context manager that specifies the default container to use for newly created stateful ops.
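
For example, a minimal sketch assuming the TF 1.x graph-mode API (the container name is arbitrary):

import tensorflow as tf

# Stateful ops created inside the context are placed in the named container.
with tf.container("experiment0"):
  v = tf.Variable(1.0, name="v")

# Resources in a named container can later be released, e.g. via tf.Session.reset().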


tf.name_scope(name, default_name=None, values=None)

Returns a context manager for use when defining a Python op.

This context manager validates that the given values are from the same graph, makes that graph the default graph, and pushes a name scope in that graph (see Graph.name_scope() for more details on that).

For example, to define a new Python op called my_op:

def my_op(a, b, c, name=None):
  with tf.name_scope(name, "MyOp", [a, b, c]) as scope:
    a = tf.convert_to_tensor(a, name="a")
    b = tf.convert_to_tensor(b, name="b")
    c = tf.convert_to_tensor(c, name="c")
    # Define some computation that uses `a`, `b`, and `c`.
    return foo_op(..., name=scope)
Args:
  • name: The name argument that is passed to the op function.
  • default_name: The default name to use if the name argument is None.
  • values: The list of Tensor arguments that are passed to the op function.
Returns:

A context manager for use in defining Python ops. Yields the name scope.

Raises:
  • ValueError: if neither name nor default_name is provided but values are.

tf.control_dependencies(control_inputs)

Wrapper for Graph.control_dependencies() using the default graph.

See Graph.control_dependencies() for more details.

Args:
  • control_inputs: A list of Operation or Tensor objects which must be executed or computed before running the operations defined in the context. Can also be None to clear the control dependencies.
Returns:

A context manager that specifies control dependencies for all operations constructed within the context.
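
For example, a minimal sketch assuming the TF 1.x graph-mode API:

import tensorflow as tf

counter = tf.Variable(0, name="counter")
increment = tf.assign_add(counter, 1)

# `read_after_increment` is only computed after `increment` has run.
with tf.control_dependencies([increment]):
  read_after_increment = tf.identity(counter)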


tf.convert_to_tensor(value, dtype=None, name=None, preferred_dtype=None)

Converts the given value to a Tensor.

This function converts Python objects of various types to Tensor objects. It accepts Tensor objects, numpy arrays, Python lists, and Python scalars. For example:

import numpy as np

def my_func(arg):
  arg = tf.convert_to_tensor(arg, dtype=tf.float32)
  return tf.matmul(arg, arg) + arg

# The following calls are equivalent.
value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]]))
value_2 = my_func([[1.0, 2.0], [3.0, 4.0]])
value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32))

This function can be useful when composing a new operation in Python (such as my_func in the example above). All standard Python op constructors apply this function to each of their Tensor-valued inputs, which allows those ops to accept numpy arrays, Python lists, and scalars in addition to Tensor objects.

Args:
  • value: An object whose type has a registered Tensor conversion function.
  • dtype: Optional element type for the returned tensor. If missing, the type is inferred from the type of value.
  • name: Optional name to use if a new Tensor is created.
  • preferred_dtype: Optional element type for the returned tensor, used when dtype is None. In some cases, a caller may not have a dtype in mind when converting to a tensor, so preferred_dtype can be used as a soft preference. If the conversion to preferred_dtype is not possible, this argument has no effect.
Returns:

A Tensor based on value.

Raises:
  • TypeError: If no conversion function is registered for value.
  • RuntimeError: If a registered conversion function returns an invalid value.

tf.convert_to_tensor_or_indexed_slices(value, dtype=None, name=None)

Converts the given object to a Tensor or an IndexedSlices.

If value is an IndexedSlices or SparseTensor it is returned unmodified. Otherwise, it is converted to a Tensor using convert_to_tensor().

Args:
  • value: An IndexedSlices, SparseTensor, or an object that can be consumed by convert_to_tensor().
  • dtype: (Optional.) The required DType of the returned Tensor or IndexedSlices.
  • name: (Optional.) A name to use if a new Tensor is created.
Returns:

A Tensor, IndexedSlices, or SparseTensor based on value.

Raises:
  • ValueError: If dtype does not match the element type of value.
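
For example, a minimal sketch assuming the TF 1.x graph-mode API:

import numpy as np
import tensorflow as tf

# A dense value is converted to a Tensor.
dense = tf.convert_to_tensor_or_indexed_slices(np.eye(3, dtype=np.float32))

# An IndexedSlices value is returned unmodified.
slices = tf.IndexedSlices(values=tf.constant([[1.0, 2.0]]),
                          indices=tf.constant([0]),
                          dense_shape=tf.constant([4, 2], dtype=tf.int64))
assert tf.convert_to_tensor_or_indexed_slices(slices) is slices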

tf.convert_to_tensor_or_sparse_tensor(value, dtype=None, name=None)

Converts value to a SparseTensor or Tensor.

Args:
  • value: A SparseTensor, SparseTensorValue, or an object whose type has a registered Tensor conversion function.
  • dtype: Optional element type for the returned tensor. If missing, the type is inferred from the type of value.
  • name: Optional name to use if a new Tensor is created.
Returns:

A SparseTensor or Tensor based on value.

Raises:
  • RuntimeError: If result type is incompatible with dtype.
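
For example, a minimal sketch assuming the TF 1.x graph-mode API:

import tensorflow as tf

# A SparseTensor is returned unmodified.
sp = tf.SparseTensor([[0, 0], [1, 2]], [1.0, 2.0], [3, 4])  # indices, values, dense shape
assert tf.convert_to_tensor_or_sparse_tensor(sp) is sp

# Any other convertible value becomes a dense Tensor.
dense = tf.convert_to_tensor_or_sparse_tensor([[1.0, 2.0], [3.0, 4.0]])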

tf.get_default_graph()

Returns the default graph for the current thread.

The returned graph will be the innermost graph on which a Graph.as_default() context has been entered, or a global default graph if none has been explicitly created.

NOTE: The default graph is a property of the current thread. If you create a new thread, and wish to use the default graph in that thread, you must explicitly add a with g.as_default(): in that thread's function.

Returns:

The default Graph being used in the current thread.
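
For example, a minimal sketch assuming the TF 1.x graph-mode API:

import tensorflow as tf

g = tf.Graph()
with g.as_default():
  c = tf.constant(1.0)
  # Inside the context, `g` is the default graph for this thread.
  assert tf.get_default_graph() is g

# Outside the context, the previous (global) default graph is current again.
assert tf.get_default_graph() is not g
assert c.graph is g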


tf.reset_default_graph()

Clears the default graph stack and resets the global default graph.

NOTE: The default graph is a property of the current thread. This function applies only to the current thread. Calling this function while a tf.Session or tf.InteractiveSession is active will result in undefined behavior. Using any previously created tf.Operation or tf.Tensor objects after calling this function will result in undefined behavior.
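
A minimal sketch assuming the TF 1.x graph-mode API; ops created before the reset must not be used afterwards:

import tensorflow as tf

tf.constant(1.0, name="a")
print(len(tf.get_default_graph().get_operations()))  # => 1

tf.reset_default_graph()
# The new default graph starts out empty.
print(len(tf.get_default_graph().get_operations()))  # => 0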


tf.import_graph_def(graph_def, input_map=None, return_elements=None, name=None, op_dict=None, producer_op_list=None)

Imports the graph from graph_def into the current default Graph.

This function provides a way to import a serialized TensorFlow GraphDef protocol buffer, and extract individual objects in the GraphDef as Tensor and Operation objects. See Graph.as_graph_def() for a way to create a GraphDef proto.

Args:
  • graph_def: A GraphDef proto containing operations to be imported into the default graph.
  • input_map: A dictionary mapping input names (as strings) in graph_def to Tensor objects. The values of the named input tensors in the imported graph will be re-mapped to the respective Tensor values.
  • return_elements: A list of strings containing operation names in graph_def that will be returned as Operation objects; and/or tensor names in graph_def that will be returned as Tensor objects.
  • name: (Optional.) A prefix that will be prepended to the names in graph_def. Defaults to "import".
  • op_dict: (Optional.) A dictionary mapping op type names to OpDef protos. Must contain an OpDef proto for each op type named in graph_def. If omitted, uses the OpDef protos registered in the global registry.
  • producer_op_list: (Optional.) An OpList proto with the (possibly stripped) list of OpDefs used by the producer of the graph. If provided, attrs for ops in graph_def that are not in op_dict and that have their default value according to producer_op_list will be removed. This allows some GraphDefs produced by later binaries to be accepted by earlier binaries.
Returns:

A list of Operation and/or Tensor objects from the imported graph, corresponding to the names in return_elements.

Raises:
  • TypeError: If graph_def is not a GraphDef proto, input_map is not a dictionary mapping strings to Tensor objects, or return_elements is not a list of strings.
  • ValueError: If input_map or return_elements contains names that do not appear in graph_def, or if graph_def is not well-formed (e.g. it refers to an unknown tensor).
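
For example, a minimal sketch assuming the TF 1.x graph-mode API:

import tensorflow as tf

# Build a graph and serialize it to a GraphDef proto.
with tf.Graph().as_default() as g:
  x = tf.placeholder(tf.float32, name="x")
  y = tf.square(x, name="y")
  graph_def = g.as_graph_def()

# Import the GraphDef into a fresh graph, remapping the input to a constant.
with tf.Graph().as_default():
  new_x = tf.constant(3.0)
  y_out, = tf.import_graph_def(graph_def,
                               input_map={"x:0": new_x},
                               return_elements=["y:0"])
  with tf.Session() as sess:
    print(sess.run(y_out))  # => 9.0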

tf.load_file_system_library(library_filename)

Loads a TensorFlow plugin containing a file system implementation.

Pass library_filename to a platform-specific mechanism for dynamically loading a library. The rules for determining the exact location of the library are platform-specific and are not documented here.

Args:
  • library_filename: Path to the plugin. Relative or absolute filesystem path to a dynamic library file.
Returns:

None.

Raises:
  • RuntimeError: when unable to load the library.
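
A minimal sketch; the library path below is hypothetical and must point to an actual shared library built against TensorFlow:

import tensorflow as tf

# Registers the file system implementation contained in the shared library.
tf.load_file_system_library("/path/to/libmy_file_system.so")  # hypothetical path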

tf.load_op_library(library_filename)

Loads a TensorFlow plugin containing custom ops and kernels.

Pass library_filename to a platform-specific mechanism for dynamically loading a library. The rules for determining the exact location of the library are platform-specific and are not documented here. When the library is loaded, ops and kernels registered in the library via the REGISTER_* macros are made available in the TensorFlow process. Note that ops with the same name as an existing op are rejected and not registered with the process.

Args:
  • library_filename: Path to the plugin. Relative or absolute filesystem path to a dynamic library file.
Returns:

A python module containing the Python wrappers for Ops defined in the plugin.

Raises:
  • RuntimeError: when unable to load the library or get the python wrappers.
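
A minimal sketch; the library path and the op it registers ("zero_out") are hypothetical:

import tensorflow as tf

# Ops registered in the library become functions on the returned module.
zero_out_module = tf.load_op_library("/path/to/libzero_out.so")  # hypothetical path
result = zero_out_module.zero_out([[1, 2], [3, 4]])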