tf.contrib.tpu.device_assignment(topology, computation_shape=None, computation_stride=None, num_replicas=1)
Computes a device_assignment of a computation across a TPU topology.
Returns a DeviceAssignment that describes the cores in the topology assigned to each core of each replica. computation_shape and computation_stride values should be powers of 2 for optimal performance.
Args:

topology: A Topology object that describes the TPU cluster topology. To obtain a TPU topology, evaluate the Tensor returned by tf.contrib.tpu.initialize_system using Session.run. Either a serialized TopologyProto or a Topology object may be passed. Note: you must evaluate the Tensor first; you cannot pass an unevaluated Tensor here.
computation_shape: A rank 1 int32 numpy array of size 3, describing the shape of the computation's block of cores. If None, the computation_shape is [1, 1, 1].
computation_stride: A rank 1 int32 numpy array of size 3, describing the inter-core spacing of the computation_shape cores in the TPU topology. If None, the computation_stride is [1, 1, 1].
num_replicas: The number of computation replicas to run. The replicas will be packed into the free spaces of the topology.
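To make the shape/stride packing concrete, here is a minimal sketch in plain Python (not TPU code; the helper name and the packing arithmetic are illustrative assumptions, not the library's actual algorithm). Along each mesh dimension a replica's footprint is computation_shape[i] * computation_stride[i], so the number of non-overlapping replica slots per dimension is the topology extent divided by that footprint:

```python
import numpy as np

def replicas_that_fit(topology_shape, computation_shape, computation_stride):
    """Illustrative estimate of how many strided replica blocks pack into a mesh.

    topology_shape: physical mesh dimensions, e.g. [x, y, cores_per_chip].
    Each replica occupies computation_shape[i] * computation_stride[i]
    positions along dimension i, so the number of non-overlapping slots
    per dimension is topology_shape[i] // footprint[i].
    """
    topology_shape = np.asarray(topology_shape, dtype=np.int32)
    footprint = (np.asarray(computation_shape, dtype=np.int32) *
                 np.asarray(computation_stride, dtype=np.int32))
    return int(np.prod(topology_shape // footprint))

# A 4x4 mesh of chips with 2 cores each, replicas of 2x2x1 cores, stride 1:
print(replicas_that_fit([4, 4, 2], [2, 2, 1], [1, 1, 1]))
```

Under these assumptions a num_replicas larger than this count cannot be packed, which is the situation the last ValueError below describes.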
Returns:

A DeviceAssignment object, which describes the mapping between the logical cores in each computation replica and the physical cores in the TPU topology.
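The logical-to-physical mapping can be pictured with a small hedged sketch (plain Python; the function name is hypothetical, and the real DeviceAssignment object exposes accessor methods rather than a list like this). For a replica whose block starts at some origin coordinate, each logical core's physical coordinate is the origin plus the logical offset scaled by the stride:

```python
import itertools

def replica_core_coordinates(origin, computation_shape, computation_stride):
    """Illustrative sketch: list the physical (x, y, core) coordinate of each
    logical core in a replica's computation_shape block, spaced by the stride.
    """
    coords = []
    for offset in itertools.product(*(range(n) for n in computation_shape)):
        coords.append(tuple(o + d * s for o, d, s in
                            zip(origin, offset, computation_stride)))
    return coords

# Replica at origin (0, 0, 0), a 2x1x1 block of cores with stride (2, 1, 1):
print(replica_core_coordinates((0, 0, 0), (2, 1, 1), (2, 1, 1)))
```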
Raises:

ValueError: If topology is not a valid Topology object.
ValueError: If computation_shape or computation_stride are not 1D int32 numpy arrays with shape [3] where all values are positive.
ValueError: If computation's replicas cannot fit into the TPU topology.