Creates a distributed mesh.
tf.experimental.dtensor.create_distributed_mesh(
mesh_dims: List[Tuple[str, int]],
mesh_name: str = '',
local_devices: Optional[List[str]] = None,
device_type: Optional[str] = None,
use_xla_spmd: bool = layout.USE_XLA_SPMD
) -> tf.experimental.dtensor.Mesh
This is similar to `create_mesh`, but with a different set of arguments to create a mesh that spans evenly across a multi-client DTensor cluster.
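For illustration, a minimal sketch of a call on a multi-client GPU cluster, assuming the cluster environment (jobs, client ids) is already configured; the dimension names, sizes, and mesh name below are placeholders, and the product of the dimension sizes must equal the total number of devices across all clients:

```python
import tensorflow as tf
from tensorflow.experimental import dtensor

# Run the same code on every client. DTensor splits the 8 x 2 = 16
# global devices evenly across all clients in the cluster.
mesh = dtensor.create_distributed_mesh(
    mesh_dims=[("batch", 8), ("model", 2)],  # illustrative dimension names/sizes
    mesh_name="demo_mesh",                   # illustrative name
    device_type="GPU",
)
```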
For CPU and GPU meshes, users can choose to use fewer local devices than what is available by passing an explicit `local_devices` list, as in the sketch below.
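A minimal single-client sketch of restricting a CPU mesh to a subset of local devices; the logical-device setup and the device name strings are assumptions for a default localhost run:

```python
import tensorflow as tf
from tensorflow.experimental import dtensor

# Expose 8 logical CPU devices on this client, then build the mesh over
# only 4 of them by passing `local_devices` explicitly.
phys_cpu = tf.config.list_physical_devices("CPU")[0]
tf.config.set_logical_device_configuration(
    phys_cpu, [tf.config.LogicalDeviceConfiguration()] * 8)

cpu_devices = ["/job:localhost/replica:0/task:0/device:CPU:%d" % i
               for i in range(4)]  # assumed single-client device names
mesh = dtensor.create_distributed_mesh(
    mesh_dims=[("x", 4)],
    device_type="CPU",
    local_devices=cpu_devices,
)
```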
For TPU, only meshes that use all TPU cores are supported by the DTensor runtime.
| Returns |
| --- |
| A mesh that spans evenly across all DTensor clients in the cluster. |