Module: tfr.keras.strategy_utils

tf.distribute strategy utilities for the Ranking pipeline in tfr.keras.

In TF2, distributed training can be handled with a Strategy offered in tf.distribute. Depending on the device and the synchronization scheme, four strategies are currently supported (see the sketch after the link below):

MirroredStrategy: synchronous strategy on a single CPU/GPU worker.

MultiWorkerMirroredStrategy: synchronous strategy on multiple CPU/GPU workers.

TPUStrategy: distributed strategy working on TPUs.

ParameterServerStrategy: asynchronous distributed strategy on CPU/GPU workers.

Please check https://www.tensorflow.org/guide/distributed_training for more information.
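
For illustration, the two most common synchronous strategies can be created directly with tf.distribute; a minimal sketch:

```python
import tensorflow as tf

# Synchronous replication across the GPUs of a single worker.
mirrored = tf.distribute.MirroredStrategy()

# Synchronous replication across multiple workers; cluster membership is
# normally supplied via the TF_CONFIG environment variable.
mwms = tf.distribute.MultiWorkerMirroredStrategy()
```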

Classes

class NullContextManager: A null context manager for local training.
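
Illustratively, a null context manager is simply a no-op `with` target, so the same training code can run with or without a real strategy scope. A minimal sketch of the pattern (not the library's actual implementation):

```python
class NullContextManager:
  """No-op context manager: `with NullContextManager(): ...` adds nothing."""

  def __enter__(self):
    pass

  def __exit__(self, *exc):
    pass
```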

Functions

get_output_filepath(...): Gets per-worker filepaths to avoid output conflicts under MultiWorkerMirroredStrategy (MWMS).
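
Under MWMS every worker runs the same program, so each non-chief worker needs a distinct output path to avoid clobbering the chief's files. A hedged sketch, assuming the signature get_output_filepath(filepath, strategy):

```python
import tensorflow as tf
import tensorflow_ranking as tfr

strategy = tf.distribute.MultiWorkerMirroredStrategy()

# The chief presumably keeps the original path; other workers receive a
# worker-specific variant so their writes do not collide.
filepath = tfr.keras.strategy_utils.get_output_filepath("/tmp/output", strategy)
print(filepath)
```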

get_strategy(...): Creates and initializes the requested tf.distribute strategy.
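
A hedged sketch, assuming get_strategy accepts the strategy-name strings exported by this module (see the constants below):

```python
import tensorflow_ranking as tfr

# Expected to return a tf.distribute.MultiWorkerMirroredStrategy instance;
# the string matches the MWMS_STRATEGY constant listed below.
strategy = tfr.keras.strategy_utils.get_strategy("MultiWorkerMirroredStrategy")
```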

strategy_scope(...): Gets the strategy.scope() context manager for training with a strategy.
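
A hedged sketch tying the pieces together; it assumes strategy_scope(strategy) yields strategy.scope() when a strategy is given (and a NullContextManager for local training):

```python
import tensorflow as tf
import tensorflow_ranking as tfr

strategy = tfr.keras.strategy_utils.get_strategy("MirroredStrategy")

with tfr.keras.strategy_utils.strategy_scope(strategy):
  # Variables created under the scope are placed/replicated by the strategy.
  model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
  model.compile(optimizer="adam", loss="mse")
```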

Constants

MIRRORED_STRATEGY: 'MirroredStrategy'
MWMS_STRATEGY: 'MultiWorkerMirroredStrategy'
PS_STRATEGY: 'ParameterServerStrategy'
TPU_STRATEGY: 'TPUStrategy'
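
These constants are plain strings naming the supported strategies, e.g.:

```python
import tensorflow_ranking as tfr

# The constants are the strategy-name strings themselves.
print(tfr.keras.strategy_utils.MIRRORED_STRATEGY)  # 'MirroredStrategy'
print(tfr.keras.strategy_utils.TPU_STRATEGY)       # 'TPUStrategy'
```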