Abstract class for all implementations of ClusterResolvers.

This defines the skeleton for all implementations of ClusterResolvers. ClusterResolvers are a way for TensorFlow to communicate with various cluster management systems (e.g. GCE, AWS, etc.) and give TensorFlow the necessary information to set up distributed training.

By letting TensorFlow communicate with these systems, we will be able to automatically discover and resolve IP addresses for various TensorFlow workers. This will eventually allow us to automatically recover from underlying machine failures and scale TensorFlow worker clusters up and down.

Note to implementers of tf.distribute.cluster_resolver.ClusterResolver subclasses: in addition to these abstract methods, when the task_type, task_id, and rpc_layer attributes are applicable, you should also implement them, either as properties with getters and setters, or by directly setting the attributes self._task_type, self._task_id, and self._rpc_layer so that the base class' getters and setters are used. See tf.distribute.cluster_resolver.SimpleClusterResolver.__init__ for an example; a minimal sketch also follows below.
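The sketch below is a rough, non-authoritative illustration of such a subclass, not part of the official API. StaticClusterResolver is a hypothetical name, and the master URL formatting is an assumption modeled on SimpleClusterResolver; only the abstract methods cluster_spec() and master(), plus the private attributes mentioned above, come from the base class contract.

```python
import tensorflow as tf


class StaticClusterResolver(tf.distribute.cluster_resolver.ClusterResolver):
  """Hypothetical resolver backed by a fixed, pre-built tf.train.ClusterSpec."""

  def __init__(self, cluster_spec, task_type=None, task_id=None,
               rpc_layer="grpc"):
    self._static_cluster_spec = cluster_spec
    # Setting these private attributes lets the base class provide the
    # task_type, task_id, and rpc_layer properties (getters and setters).
    self._task_type = task_type
    self._task_id = task_id
    self._rpc_layer = rpc_layer

  def cluster_spec(self):
    # Abstract method: return the cluster topology as a tf.train.ClusterSpec.
    return self._static_cluster_spec

  def master(self, task_type=None, task_id=None, rpc_layer=None):
    # Abstract method: return the address of the master, optionally prefixed
    # with the RPC protocol (e.g. "grpc://host:port").
    task_type = task_type if task_type is not None else self._task_type
    task_id = task_id if task_id is not None else self._task_id
    if task_type is None or task_id is None:
      return ""
    address = self._static_cluster_spec.task_address(task_type, task_id)
    rpc_layer = rpc_layer or self._rpc_layer
    return "%s://%s" % (rpc_layer, address) if rpc_layer else address
```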

In general, multi-client tf.distribute strategies such as tf.distribute.experimental.MultiWorkerMirroredStrategy require task_type and task_id properties to be available in the ClusterResolver they are using. On the other hand, these concepts are not applicable in single-client strategies, such as tf.distribute.experimental.TPUStrategy, because the program is only expected to be run on one task, so there should not be a need to have code branches according to task type and task id.

  • task_type is the name of the server's current named job (e.g. 'worker' or 'ps' in a distributed parameter server training job).
  • task_id is the ordinal index of the server within the task type.
  • rpc_layer is the protocol used by TensorFlow to communicate with other TensorFlow servers in a distributed environment (see the usage sketch after this list).
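As a small usage sketch (the host addresses below are placeholders, not real endpoints), the built-in SimpleClusterResolver shows where these three attributes surface in practice:

```python
import tensorflow as tf

# Placeholder cluster definition; the worker/ps addresses are illustrative only.
cluster_spec = tf.train.ClusterSpec({
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
    "ps": ["ps0.example.com:2222"],
})

resolver = tf.distribute.cluster_resolver.SimpleClusterResolver(
    cluster_spec, task_type="worker", task_id=1, rpc_layer="grpc")

print(resolver.task_type)  # 'worker' -- the named job this server belongs to
print(resolver.task_id)    # 1        -- ordinal index within the 'worker' job
print(resolver.rpc_layer)  # 'grpc'   -- protocol used to reach other servers
```

A multi-client strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy reads task_type and task_id from the resolver it is given to decide which role the current process plays in the cluster.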

environment: Returns the current environment which TensorFlow is running in.

There are two possible return values: "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere).
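As a minimal, hedged illustration (the localhost address is just a placeholder): a resolver constructed for open-source use, such as SimpleClusterResolver with its defaults, reports an empty environment string, which code can branch on:

```python
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.SimpleClusterResolver(
    tf.train.ClusterSpec({"worker": ["localhost:2222"]}))

if resolver.environment == "google":
  # Google-internal-only setup would go here.
  pass
else:
  # Empty string: TensorFlow is running outside a Google-internal environment.
  print("Running in the open-source environment")
```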