tf.distribute.experimental.TPUStrategy

Synchronous training on TPUs and TPU Pods.

Inherits From: Strategy

To construct a TPUStrategy object, run the following initialization code:

import tensorflow as tf

# Locate the TPU cluster, connect to it, and initialize the TPU system
# before constructing the strategy.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

When you use a distribution strategy, variables created within the strategy's scope are replicated across all replicas and are kept in sync using all-reduce algorithms.
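
For example, a tf.Variable created inside strategy.scope is placed on every TPU replica. A minimal sketch, assuming the strategy object constructed above:

with strategy.scope():
  # Replicated across all TPU replicas and kept in sync by the strategy.
  v = tf.Variable(1.0)
  # Layers and models built inside the scope place their variables the same way.
  dense = tf.keras.layers.Dense(8)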

To run TF2 programs on TPUs, you can either use the .compile and .fit APIs in tf.keras with TPUStrategy, or write your own customized training loop by calling strategy.run directly (a sketch follows below). Note that TPUStrategy does not support pure eager execution, so make sure the function passed into strategy.run is a tf.function, or that strategy.run is itself called inside a tf.function, if eager behavior is enabled.
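
The sketch below illustrates the custom-training-loop path. It assumes the strategy object constructed above; the model, optimizer, dataset, and train_step names are illustrative placeholders, not part of the API.

# Create the model and optimizer under the strategy's scope so their
# variables are replicated across the TPU cores.
with strategy.scope():
  model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
  optimizer = tf.keras.optimizers.SGD(0.01)

# TPUs require static shapes, so drop the remainder when batching.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.uniform([64, 10]), tf.random.uniform([64, 1]))
).batch(8, drop_remainder=True)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def train_step(dist_inputs):
  def step_fn(inputs):
    features, labels = inputs
    with tf.GradientTape() as tape:
      loss = tf.reduce_mean(tf.square(model(features) - labels))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
  # Run one step on every replica, then combine the per-replica losses.
  per_replica_losses = strategy.run(step_fn, args=(dist_inputs,))
  return strategy.reduce(
      tf.distribute.ReduceOp.MEAN, per_replica_losses, axis=None)

for dist_inputs in dist_dataset:
  train_step(dist_inputs)

Wrapping train_step in tf.function is what allows strategy.run to be traced and compiled for the TPU rather than executed eagerly.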

Args:

tpu_cluster_resolver: A tf.distribute.cluster_resolver.TPUClusterResolver, which provides information about the TPU cluster.
device_assignment: Optional tf.tpu.experimental.DeviceAssignment to specify the placement of replicas on the TPU cluster.
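
A hedged sketch of using device_assignment to restrict the strategy to a subset of TPU cores; the single-replica placement shown here is illustrative:

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
topology = tf.tpu.experimental.initialize_tpu_system(resolver)
# Build a device assignment that places a single replica on the topology.
device_assignment = tf.tpu.experimental.DeviceAssignment.build(
    topology, num_replicas=1)
strategy = tf.distribute.experimental.TPUStrategy(
    resolver, device_assignment=device_assignment)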