# tf.contrib.tpu.replicate

```python
tf.contrib.tpu.replicate(
    computation,
    inputs=None,
    infeed_queue=None,
    device_assignment=None,
    name=None
)
```


Builds a graph operator that runs a replicated TPU computation.

#### Args:

• computation: A Python function that builds the computation to replicate.
• inputs: A list of lists of input tensors or None (equivalent to [[]]), indexed by [replica_num][input_num]. All replicas must have the same number of inputs.
• infeed_queue: If not None, the InfeedQueue whose dequeued values are appended as additional arguments to computation.
• device_assignment: If not None, a DeviceAssignment describing the mapping between the logical cores in the computation and the physical cores in the TPU topology. If None, a default device assignment is used. The DeviceAssignment may be omitted if each replica of the computation uses only one core, and there is either only one replica, or the number of replicas equals the number of cores in the TPU system.
• name: (Deprecated) Does nothing.

#### Returns:

A list of lists of output tensors, indexed by [replica_num][output_num].

#### Raises:

• ValueError: If the replicas do not all have the same number of input tensors.
• ValueError: If the number of inputs per replica does not match the number of formal parameters to computation.
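The input/output indexing convention and the two ValueError conditions above can be sketched in plain Python. This is an illustrative model only: replicate_sketch is a hypothetical name, not part of TensorFlow, and it performs no TPU execution — it simply calls computation once per replica and collects the results.

```python
# Illustrative sketch of replicate()'s indexing and validation semantics.
# Not TensorFlow code; does no TPU work.
import inspect


def replicate_sketch(computation, inputs=None):
    """Call `computation` once per replica and collect its outputs.

    `inputs` is indexed [replica_num][input_num]; the return value is
    indexed [replica_num][output_num].
    """
    inputs = [[]] if inputs is None else inputs

    # All replicas must supply the same number of inputs.
    arities = {len(replica_inputs) for replica_inputs in inputs}
    if len(arities) > 1:
        raise ValueError("Replicas have different numbers of inputs.")

    # The per-replica input count must match computation's formal parameters.
    num_params = len(inspect.signature(computation).parameters)
    if arities and arities.pop() != num_params:
        raise ValueError(
            "Number of inputs per replica does not match the number of "
            "formal parameters to computation.")

    outputs = []
    for replica_inputs in inputs:
        result = computation(*replica_inputs)
        # Normalize a single output to a one-element list.
        outputs.append(list(result) if isinstance(result, (list, tuple))
                       else [result])
    return outputs
```

For example, replicating a two-input, two-output computation over two replicas with inputs [[1, 2], [3, 4]] yields outputs indexed [replica_num][output_num], while ragged inputs such as [[1], [2, 3]] raise a ValueError.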