Cluster Resolver for Google Cloud TPUs.
Compat aliases for migration: see the Migration guide for more details.
tf.distribute.cluster_resolver.TPUClusterResolver( tpu=None, zone=None, project=None, job_name='worker', coordinator_name=None, coordinator_address=None, credentials='default', service=None, discovery_url=None )
This is an implementation of cluster resolvers for the Google Cloud TPU service.
TPUClusterResolver supports the following distinct environments:

- Google Compute Engine
- Google Kubernetes Engine
- Google internal
It can be passed into tf.distribute.TPUStrategy to support TF2 training on Cloud TPUs.
Args:

|`tpu`|A string corresponding to the TPU to use. It can be the TPU name or TPU worker gRPC address. If not set, it will try to automatically resolve the TPU address on Cloud TPUs. If set to "local", it will assume that the TPU is directly connected to the VM instead of over the network.|
|`zone`|Zone where the TPUs are located. If omitted or empty, we will assume that the zone of the TPU is the same as the zone of the GCE VM, which we will try to discover from the GCE metadata service.|
|`project`|Name of the GCP project containing Cloud TPUs. If omitted or empty, we will try to discover the project name of the GCE VM from the GCE metadata service.|
|`job_name`|Name of the TensorFlow job the TPUs belong to.|
|`coordinator_name`|The name to use for the coordinator. Set to None if the coordinator should not be included in the computed ClusterSpec.|
|`coordinator_address`|The address of the coordinator (typically an ip:port pair). If set to None, a TF server will be started. If coordinator_name is None, a TF server will not be started even if coordinator_address is None.|