tf.compat.v1.distribute.OneDeviceStrategy


A distribution strategy for running on a single device.

Inherits From: Strategy

Using this strategy will place any variables created in its scope on the specified device. Input distributed through this strategy will be prefetched to the specified device. Moreover, any functions called via strategy.run will also be placed on the specified device.

Typical usage of this strategy could be testing your code with the tf.distribute.Strategy API before switching to other strategies which actually distribute to multiple devices/machines.

For example:

tf.enable_eager_execution()
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")

with strategy.scope():
  v = tf.Variable(1.0)
  print(v.device)  # /job:localhost/replica:0/task:0/device:GPU:0

def step_fn(x):
  return x * 2

result = 0
for i in range(10):
  result += strategy.run(step_fn, args=(i,))
print(result)  # 90

Args
device: Device string identifier for the device on which the variables should be placed. See class docs for more details on how the device is used. Examples: "/cpu:0", "/gpu:0", "/device:CPU:0", "/device:GPU:0"

Attributes
cluster_resolver: Returns the cluster resolver associated with this strategy.

In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.experimental.TPUStrategy, there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property.

Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property.

Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None.
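For OneDeviceStrategy in particular, a quick check (a minimal sketch, assuming a CPU-only setup) shows that the property returns None:

import tensorflow as tf

strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single-worker strategy: no ClusterResolver is attached.
print(strategy.cluster_resolver)  # None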

The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example,


import json
import os

os.environ['TF_CONFIG'] = json.dumps({
  'cluster': {
    'worker': ["localhost:12345", "localhost:23456"],
    'ps': ["localhost:34567"]
  },
  'task': {'type': 'worker', 'index': 0}
})

# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

...

if strategy.cluster_resolver.task_type == 'worker':
  # Perform something that's only applicable on workers. Since we set this
  # as a worker above, this block will run on this particular instance.
  pass
elif strategy.cluster_resolver.task_type == 'ps':
  # Perform something that's only applicable on parameter servers. Since we
  # set this as a worker above, this block will not run on this particular
  # instance.
  pass

For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring.

extended: tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync: Returns number of replicas over which gradients are aggregated.
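
As a brief illustration (a minimal sketch, not from the original page), num_replicas_in_sync can be used to derive a per-replica batch size from a global batch size; for OneDeviceStrategy there is exactly one replica, so the two are equal:

strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
global_batch_size = 64
# OneDeviceStrategy runs a single replica, so this equals global_batch_size.
per_replica_batch_size = global_batch_size // strategy.num_replicas_in_sync
print(strategy.num_replicas_in_sync)  # 1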

Methods

experimental_distribute_dataset


Creates tf.distribute.DistributedDataset from tf.data.Dataset.

The returned tf.distribute.DistributedDataset can be iterated over similar to how regular datasets can. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset.

The following is an example:

strategy = tf.distribute.MirroredStrategy()

# Create a dataset
dataset = tf.data.TFRecordDataset([
  "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"])

# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)

# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
  # process dataset elements
  strategy.run(replica_fn, args=(x,))

In the code snippet above, the tf.distribute.DistributedDataset dist_dataset is batched by GLOBAL_BATCH_SIZE, and we iterate through it using for x in dist_dataset. x is a tf.distribute.DistributedValues containing data for all replicas, which together aggregate to a batch of GLOBAL_BATCH_SIZE. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica.
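
A common follow-up (a minimal sketch, not part of the snippet above; replica_fn here is a stand-in for the per-replica computation) is to combine the per-replica values returned by strategy.run using strategy.reduce:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

def replica_fn(x):
  # Stand-in per-replica computation returning a scalar.
  return tf.reduce_sum(x)

@tf.function
def distributed_step(x):
  per_replica_values = strategy.run(replica_fn, args=(x,))
  # Sum the per-replica results into a single value on the host.
  return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_values, axis=None)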

What happens under the hood of this method when we say the tf.data.Dataset instance - dataset - gets distributed? It depends on how you set the tf.data.experimental.AutoShardPolicy through tf.data.experimental.DistributeOptions. By default, it is set to tf.data.experimental.AutoShardPolicy.AUTO. In a multi-worker setting, we will first attempt to distribute the dataset by detecting whether it is created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) and, if so, try to shard the input files. Note that there has to be at least one input file per worker. If there are fewer input files than workers, we suggest that you disable dataset sharding across workers by setting tf.data.experimental.DistributeOptions.auto_shard_policy to tf.data.experimental.AutoShardPolicy.OFF, as shown below.
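
For example, auto-sharding across workers can be disabled by attaching a tf.data.Options object to the dataset before distributing it (a minimal sketch; the small range dataset is only a placeholder):

import tensorflow as tf

dataset = tf.data.Dataset.range(16).batch(4)

options = tf.data.Options()
# Turn off automatic sharding of this dataset across workers.
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.OFF)
dataset = dataset.with_options(options)

# dist_dataset = strategy.experimental_distribute_dataset(dataset)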

If the attempt to shard by file is unsuccessful (i.e. the dataset is not read from files), we will instead shard the dataset evenly by appending a .shard operation to the end of the processing pipeline. This causes the entire preprocessing pipeline for all the data to be run on every worker, so each worker does redundant work. We will print a warning if this route is selected.

As mentioned before, within each worker, we will also split the data among all the worker devices (if more than one is present). This will happen even if multi-worker sharding is disabled.

If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.experimental_distribute_datasets_from_function instead, which does not do any automatic splitting or sharding.

You can also use the element_spec property of the tf.distribute.DistributedDataset instance returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function.

strategy = tf.distribute.MirroredStrategy()

# Create a dataset
dataset = tf.data.TFRecordDataset([
  "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"])

# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function(input_signature=[dist_dataset.element_spec])
def train_step(inputs):
  # train model with inputs
  return

# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
  # process dataset elements
  strategy.run(train_step, args=(x,))

Args
dataset: tf.data.Dataset that will be sharded across all replicas using the rules stated above.
options: tf.distribute.InputOptions used to control options on how this dataset is distributed.

Returns
A tf.distribute.DistributedDataset.

experimental_distribute_datasets_from_function


Distributes tf.data.Dataset instances created by calls to dataset_fn.

dataset_fn will be called once for each worker in the strategy. Each replica on that worker will dequeue one batch of inputs from the local Dataset (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step).

This method can be used for several purposes. For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. experimental_distribute_dataset may also sometimes fail to split the batch across replicas on a worker. In that case, this method can be used where that limitation does not exist.
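
As an illustration of the random-seed approach for an infinite dataset (a minimal sketch; the in-memory range dataset and buffer size are placeholders), each input pipeline can shuffle with a seed derived from its pipeline id instead of sharding files:

import tensorflow as tf

global_batch_size = 8

def dataset_fn(input_context):
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  # In-memory data standing in for a real (possibly infinite) dataset.
  d = tf.data.Dataset.range(1000).repeat()
  # Give each input pipeline its own shuffle order rather than a file shard.
  d = d.shuffle(buffer_size=1000, seed=input_context.input_pipeline_id)
  return d.batch(batch_size)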

The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed.

You can also use the element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function.

global_batch_size = 8
def dataset_fn(input_context):
  batch_size = input_context.get_per_replica_batch_size(
                   global_batch_size)
  d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
  return d.shard(
      input_context.num_input_pipelines,
      input_context.input_pipeline_id)
strategy = tf.distribute.MirroredStrategy()
ds = strategy.experimental_distribute_datasets_from_function(dataset_fn)
def train(ds):
  @tf.function(input_signature=[ds.element_spec])
  def step_fn(inputs):
    # train the model with inputs
    return inputs

  for batch in ds:
    replica_results = strategy.run(step_fn, args=(batch,))

train(ds)

Args
dataset_fn: A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options: tf.distribute.InputOptions used to control options on how this dataset is distributed.

Returns
A tf.distribute.DistributedDataset.

experimental_local_results
