Mapping from logical cores in a computation to the physical TPU topology.
Compat aliases for migration: tf.compat.v1.tpu.experimental.DeviceAssignment.
See the Migration guide for more details.
tf.tpu.experimental.DeviceAssignment( topology: tf.tpu.experimental.Topology, core_assignment: np.ndarray )
Prefer to use the DeviceAssignment.build() helper to construct a DeviceAssignment; it is easier, if less flexible, than constructing a DeviceAssignment directly.
Args:
|topology|A Topology object that describes the TPU topology.|
|core_assignment|A logical to physical core mapping, represented as a rank 3 numpy array. See the description of the core_assignment property for further details.|
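To make the rank-3 layout concrete, here is a hedged sketch using plain Python lists: the mapping is indexed as [replica, logical_core, coordinate], so each innermost entry holds the topology coordinates of one physical core. The values below are purely illustrative, not taken from any real TPU topology.

```python
# Hypothetical core_assignment for 2 replicas, each with 1 logical core,
# on a topology addressed by rank-3 coordinates (illustrative values).
core_assignment = [
    [[0, 0, 0]],  # replica 0, logical core 0 -> physical core (0, 0, 0)
    [[0, 0, 1]],  # replica 1, logical core 0 -> physical core (0, 0, 1)
]

# The leading two dimensions give the replica/core counts.
num_replicas = len(core_assignment)
num_cores_per_replica = len(core_assignment[0])
print(num_replicas, num_cores_per_replica)  # 2 1
```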
Attributes:
|core_assignment|The logical to physical core mapping.|
|num_cores_per_replica|The number of cores per replica.|
|num_replicas|The number of replicas of the computation.|
build( topology: tf.tpu.experimental.Topology, computation_shape: Optional[np.ndarray] = None, computation_stride: Optional[np.ndarray] = None, num_replicas: int = 1 ) -> "DeviceAssignment"
Builds a DeviceAssignment for the given topology and computation parameters.
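As a rough mental model (an assumption for illustration, not the library's implementation), build() tiles the topology into num_replicas blocks of shape computation_shape, so the number of cores each replica spans is the product of computation_shape, defaulting to a single core when it is None:

```python
from math import prod

def cores_per_replica(computation_shape=None):
    # Hypothetical helper mirroring one property of build():
    # with no computation_shape, each replica uses a single core;
    # otherwise it spans prod(computation_shape) cores.
    if computation_shape is None:
        return 1
    return prod(computation_shape)

print(cores_per_replica())           # 1
print(cores_per_replica([1, 1, 2]))  # 2
```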
coordinates( replica: int, logical_core: int ) -> Tuple
Returns the physical topology coordinates of a logical core.
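Assuming the rank-3 [replica, logical_core, coordinate] layout of core_assignment described above, a minimal sketch of such a lookup (illustrative data, not a real topology):

```python
def coordinates(core_assignment, replica, logical_core):
    # Return the physical topology coordinates stored for this
    # (replica, logical_core) pair, as a tuple.
    return tuple(core_assignment[replica][logical_core])

# Illustrative mapping: 2 replicas, 1 logical core each.
core_assignment = [[[0, 0, 0]], [[0, 0, 1]]]
print(coordinates(core_assignment, 1, 0))  # (0, 0, 1)
```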
host_device( replica: int = 0, logical_core: int = 0, job: Optional[Text] = None ) -> Text
Returns the CPU device attached to a logical core.
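The returned string follows TensorFlow's device-name convention. A hedged sketch of the formatting only (the real method first maps replica and logical_core to a host task via the core assignment; the job name "worker" here is an assumed default):

```python
def host_device(task=0, job=None):
    # Format a TensorFlow CPU device name for the host of the given task.
    # "worker" is an assumed default job name for illustration.
    job = job or "worker"
    return f"/job:{job}/task:{task}/device:CPU:0"

print(host_device())  # /job:worker/task:0/device:CPU:0
```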
lookup_replicas( task_id: int, logical_core: int ) -> List[int]
Lookup replica ids by task number and logical core.
Args:
|task_id|TensorFlow task number.|
|logical_core|An integer, identifying a logical core.|
Returns:
|A sorted list of the replicas that are attached to that task and logical_core.|
Raises:
|ValueError|If no replica exists in the task which contains the logical core.|
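A hedged sketch of this reverse lookup, assuming a hypothetical mapping from (replica, logical_core) pairs to TensorFlow task numbers (the real method derives this placement from the core assignment):

```python
def lookup_replicas(core_to_task, task_id, logical_core):
    # core_to_task: hypothetical dict mapping
    #   (replica, logical_core) -> TensorFlow task number.
    replicas = sorted(
        replica
        for (replica, core), task in core_to_task.items()
        if core == logical_core and task == task_id
    )
    if not replicas:
        # Mirrors the documented ValueError for an empty result.
        raise ValueError(
            f"no replica on task {task_id} contains logical core {logical_core}"
        )
    return replicas

# Replicas 0 and 2 place logical core 0 on task 0; replica 1 on task 1.
core_to_task = {(0, 0): 0, (1, 0): 1, (2, 0): 0}
print(lookup_replicas(core_to_task, 0, 0))  # [0, 2]
```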
tpu_device( replica: int = 0, logical_core: int = 0, job: Optional[Text] = None ) -> Text
Returns the name of the TPU device assigned to a logical core.
tpu_ordinal( replica: int = 0, logical_core: int = 0 ) -> int
Returns the ordinal of the TPU device assigned to a logical core.
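TPU device names follow the same convention as host devices, with the ordinal selecting a chip core on its task. A hedged sketch of the formatting only (the real tpu_device/tpu_ordinal pair resolve the task and ordinal through the core assignment; "worker" is an assumed default job name):

```python
def tpu_device(task=0, device_ordinal=0, job=None):
    # Format a TensorFlow TPU device name; the ordinal is the
    # device index on the task, as returned by tpu_ordinal().
    job = job or "worker"
    return f"/job:{job}/task:{task}/device:TPU:{device_ordinal}"

print(tpu_device(task=0, device_ordinal=3))  # /job:worker/task:0/device:TPU:3
```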