tf.distribute.cluster_resolver.SlurmClusterResolver

ClusterResolver for systems with the Slurm workload manager.

Inherits From: ClusterResolver

This is an implementation of ClusterResolver for Slurm clusters. It allows the specification of jobs and task counts, the number of tasks per node, the number of GPUs on each node, and the number of GPUs for each task. It retrieves system attributes from Slurm environment variables, resolves the allocated compute node names, constructs a cluster, and returns a ClusterResolver object that can be used for distributed TensorFlow.

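For instance, a resolver constructed with default arguments inside a Slurm allocation can be passed straight to a distribution strategy. The snippet below is a minimal sketch, assuming the script is launched with srun so that the Slurm environment variables are populated:

```python
import tensorflow as tf

# Minimal sketch: all constructor arguments are left at their defaults, so the
# resolver reads the job layout entirely from the Slurm environment variables.
cluster_resolver = tf.distribute.cluster_resolver.SlurmClusterResolver()

# The resolver can be handed to a multi-worker strategy directly.
strategy = tf.distribute.MultiWorkerMirroredStrategy(
    cluster_resolver=cluster_resolver)

with strategy.scope():
  model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
  model.compile(optimizer='sgd', loss='mse')
```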
Args:
  jobs: Dictionary with job names as key and number of tasks in the job as value. Defaults to as many 'worker' tasks as there are Slurm tasks.
  port_base: The first port number to start with for processes on a node.
  gpus_per_node: Number of GPUs available on each node. Defaults to the number of GPUs reported by nvidia-smi.
  gpus_per_task: Number of GPUs to be used for each task. Defaults to distributing gpus_per_node evenly across tasks_per_node.
  tasks_per_node: Number of tasks running on each node. Can be an integer if the number of tasks per node is constant, or a dictionary mapping hostnames to the number of tasks on that node. If not set, the Slurm environment is queried for the correct mapping.
  auto_set_gpu: Whether to set the visible CUDA devices automatically while resolving the cluster by setting the CUDA_VISIBLE_DEVICES environment variable. Defaults to True.
  rpc_layer: The protocol TensorFlow uses to communicate between nodes. Defaults to 'grpc'.

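As a sketch of how these arguments fit together, the configuration below is illustrative only; the job names, task counts, and GPU numbers are assumptions rather than values taken from this page:

```python
import tensorflow as tf

# Hypothetical layout: 4 Slurm tasks spread over 2 nodes, 4 GPUs per node.
cluster_resolver = tf.distribute.cluster_resolver.SlurmClusterResolver(
    jobs={'ps': 1, 'worker': 3},  # 1 'ps' task and 3 'worker' tasks
    port_base=8888,               # processes on a node use ports 8888, 8889, ...
    gpus_per_node=4,              # GPUs physically available on each node
    gpus_per_task=1,              # each task is given a single GPU
    tasks_per_node=2,             # constant number of tasks per node
    auto_set_gpu=True,            # sets CUDA_VISIBLE_DEVICES for each task
    rpc_layer='grpc')

# Inspect the resolved cluster as a plain dictionary of job -> addresses.
print(cluster_resolver.cluster_spec().as_dict())
```
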
Raises:
  RuntimeError: If more GPUs per node are requested than are available, if more tasks are requested than the number of assigned tasks, or if resolving missing values from the environment fails.

Attributes:
  environment: Returns the current environment which TensorFlow is running in.

There are two possible return values: "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere).

If you are implementing a ClusterResolver that works in both the Google environment and the open-source world (for instance, a TPU cluster resolver or similar), you will have to return the appropriate string depending on the environment, which you will have to detect. Otherwise, if you are implementing a ClusterResolver that will only work in open-source TensorFlow, you do not need to implement this property.
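A rough sketch of that pattern is shown below; the class name and the _running_inside_google helper are hypothetical stand-ins for whatever detection logic a real implementation would use:

```python
import tensorflow as tf


def _running_inside_google():
  # Hypothetical detection helper; a real implementation would check whatever
  # signals identify the Google-internal environment. Always False here.
  return False


class MyClusterResolver(tf.distribute.cluster_resolver.ClusterResolver):
  """Illustrative resolver; only the environment property is fleshed out."""

  def cluster_spec(self):
    # Details omitted; a real resolver would build the ClusterSpec from its
    # own configuration or from service discovery.
    return tf.train.ClusterSpec({})

  def master(self, task_type=None, task_id=None, rpc_layer=None):
    return ''

  @property
  def environment(self):
    return 'google' if _running_inside_google() else ''


resolver = MyClusterResolver()
print(resolver.environment)  # prints an empty string in this sketch
```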