Mapping from logical cores in a computation to the physical TPU topology.
Prefer to use the DeviceAssignment.build() helper to construct a DeviceAssignment; it is easier, if less flexible, than constructing a DeviceAssignment directly.
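A minimal sketch of the preferred path, assuming the class is exposed as tf.tpu.experimental.DeviceAssignment and that a Topology has already been obtained from tf.tpu.experimental.initialize_tpu_system; the resolver setup and the module paths are assumptions, not part of this page.

import tensorflow as tf

# Assumed setup: connect to a TPU worker and initialize it, which yields a
# Topology object describing the physical mesh.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
topology = tf.tpu.experimental.initialize_tpu_system(resolver)

# Preferred construction: let build() compute the logical-to-physical mapping.
# Each replica uses a single core and eight replicas are requested.
device_assignment = tf.tpu.experimental.DeviceAssignment.build(
    topology, num_replicas=8)

print(device_assignment.num_replicas)           # 8
print(device_assignment.num_cores_per_replica)  # 1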
__init__(topology, core_assignment)
Args:
topology: A Topology object that describes the physical TPU topology.
core_assignment: A logical to physical core mapping, represented as a rank 3 numpy array. See the description of the core_assignment property for more details.
Raises:
ValueError: If core_assignment is not a rank 3 numpy array.
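For completeness, a hedged sketch of the direct constructor; it simply reuses the topology and the mapping produced by build() in the sketch above, since any valid rank 3 core_assignment array is accepted.

# Direct construction: supply the Topology plus an explicit rank 3 mapping.
# Reusing the array produced by build() guarantees a valid assignment.
manual_assignment = tf.tpu.experimental.DeviceAssignment(
    topology, device_assignment.core_assignment)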
Properties:
core_assignment: The logical to physical core mapping. An integer numpy array of rank 3, with shape [num_replicas, num_cores_per_replica, topology_rank], which maps (replica, logical core) pairs to physical topology coordinates (see the indexing sketch after the property list).
num_cores_per_replica: The number of cores per replica.
num_replicas: The number of replicas of the computation.
topology: A Topology that describes the TPU topology.
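An illustrative indexing sketch for the core_assignment property; the device_assignment variable comes from the build() example above, and the printed shapes and values are assumptions for a hypothetical 8-replica, single-core-per-replica assignment.

# Shape is [num_replicas, num_cores_per_replica, topology_rank].
mapping = device_assignment.core_assignment
print(mapping.shape)  # e.g. (8, 1, 4) if the topology rank is 4 (illustrative)

# Row [replica, logical_core] holds that core's physical coordinates;
# coordinates(replica, logical_core) returns the same values as a tuple.
print(mapping[0, 0])
print(device_assignment.coordinates(0, 0))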
Methods:
@staticmethod
build(topology, computation_shape=None, computation_stride=None, num_replicas=1)
coordinates(replica, logical_core)
Returns the physical topology coordinates of a logical core.
host_device(replica=0, logical_core=0, job=None)
Returns the CPU device attached to a logical core.
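A usage sketch; the job name and the printed device string are illustrative, since the exact task layout depends on the cluster.

# CPU device of the host that replica 0, logical core 0 is attached to.
print(device_assignment.host_device(replica=0, logical_core=0, job="worker"))
# e.g. "/job:worker/task:0/device:CPU:0" (illustrative)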
lookup_replicas(task_id, logical_core)
Lookup replica ids by task number and logical core.
Args:
task_id: TensorFlow task number.
logical_core: An integer, identifying a logical core.
Returns:
A sorted list of the replicas that are attached to that task and logical_core.
Raises:
ValueError: If no replica exists in the task which contains the logical core.
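A short usage sketch; the task id and the returned replica ids are illustrative.

# Which replicas have their logical core 0 placed on TensorFlow task 0?
replicas_on_task_0 = device_assignment.lookup_replicas(task_id=0, logical_core=0)
print(replicas_on_task_0)  # e.g. [0, 1, 2, 3] (illustrative)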
tpu_device(replica=0, logical_core=0, job=None)
Returns the name of the TPU device assigned to a logical core.
tpu_ordinal(replica=0, logical_core=0)
Returns the ordinal of the TPU device assigned to a logical core.
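A final sketch combining tpu_device and tpu_ordinal; the printed device name and ordinal are illustrative only.

# Fully qualified TPU device name for replica 1, and its ordinal on that host.
print(device_assignment.tpu_device(replica=1, logical_core=0, job="worker"))
# e.g. "/job:worker/task:0/device:TPU:1" (illustrative)
print(device_assignment.tpu_ordinal(replica=1, logical_core=0))
# e.g. 1 (illustrative)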