

Creates and initializes the requested tf.distribute strategy.

Example usage:

strategy = get_strategy("MirroredStrategy")

Args:
strategy: Key for the tf.distribute strategy to be used to train the model. Choose from ["MirroredStrategy", "MultiWorkerMirroredStrategy", "ParameterServerStrategy", "TPUStrategy"]. If None, no distributed strategy is used.
cluster_resolver: A cluster resolver used to build the strategy.
variable_partitioner: Variable partitioner to be used in ParameterServerStrategy. If the argument is not specified, a recommended tf.distribute.experimental.partitioners.MinSizePartitioner is used. If the argument is explicitly set to None, no partitioner is used and variables are not partitioned. This argument applies only when the strategy is tf.distribute.experimental.ParameterServerStrategy; see that class's documentation for details.
tpu: TPU address for TPUStrategy. Not used by other strategies.

Returns:
The tf.distribute strategy to be used for distributed training.

Raises:
ValueError: if the requested strategy is not supported.
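The dispatch behind such a helper can be sketched as follows. This is a minimal illustration, not the library's actual implementation: the branch structure and constructor arguments are assumptions based on the argument descriptions above, and the recommended-MinSizePartitioner default for variable_partitioner is omitted for brevity.

```python
def get_strategy(strategy=None, cluster_resolver=None,
                 variable_partitioner=None, tpu=None):
    """Creates and initializes the requested tf.distribute strategy (sketch)."""
    supported = ("MirroredStrategy", "MultiWorkerMirroredStrategy",
                 "ParameterServerStrategy", "TPUStrategy")
    if strategy is None:
        return None  # no distributed strategy will be used
    if strategy not in supported:
        raise ValueError(f"Unsupported strategy: {strategy!r}; "
                         f"choose from {supported} or None.")

    import tensorflow as tf  # deferred so key validation needs no TF install

    if strategy == "MirroredStrategy":
        # Synchronous training on multiple GPUs of one machine.
        return tf.distribute.MirroredStrategy()
    if strategy == "MultiWorkerMirroredStrategy":
        # Synchronous training across multiple workers.
        return tf.distribute.MultiWorkerMirroredStrategy(
            cluster_resolver=cluster_resolver)
    if strategy == "ParameterServerStrategy":
        # Asynchronous training with parameter servers; the partitioner
        # controls how large variables are sharded across them.
        return tf.distribute.experimental.ParameterServerStrategy(
            cluster_resolver, variable_partitioner=variable_partitioner)
    # TPUStrategy: resolve and initialize the TPU at `tpu` first.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=tpu)
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    return tf.distribute.TPUStrategy(resolver)
```

The returned strategy would then typically wrap model construction, e.g. building the model inside `strategy.scope()` so its variables are created under the strategy.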