

Random-number generator.



Creating a generator from a seed:

g = tf.random.Generator.from_seed(1234)
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[ 0.9356609 ,  1.0854305 , -0.93788373],
       [-0.5061547 ,  1.3169702 ,  0.7137579 ]], dtype=float32)>

Creating a generator from a non-deterministic state:

g = tf.random.Generator.from_non_deterministic_state()
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=...>

All the constructors allow explicitly choosing a random-number-generation (RNG) algorithm. The supported algorithms are "philox" and "threefry". For example:

g = tf.random.Generator.from_seed(123, alg="philox")
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[ 0.8673864 , -0.29899067, -0.9310337 ],
       [-1.5828488 ,  1.2481191 , -0.6770643 ]], dtype=float32)>

With the same algorithm and seed, CPU, GPU, and TPU will generate the same integer random numbers. Floating-point results (such as the output of normal) may have small numerical discrepancies between devices.

This class uses a tf.Variable to manage its internal state. Every time random numbers are generated, the state of the generator will change. For example:

g = tf.random.Generator.from_seed(1234)
g.state
<tf.Variable ... numpy=array([1234,    0,    0])>
g.normal(shape=(2, 3))
g.state
<tf.Variable ... numpy=array([2770,    0,    0])>

The shape of the state is algorithm-specific.
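The state-advance behavior above can be illustrated with a toy counter-based generator in plain Python. This is a sketch only, not TensorFlow's Philox implementation; the class name ToyCounterRNG and its mixing function are invented for illustration:

```python
class ToyCounterRNG:
    """Toy counter-based RNG whose state is (key, counter).

    Philox-style generators similarly split their state into a key
    derived from the seed and a counter that advances per draw.
    """

    def __init__(self, seed):
        # The seed becomes the key; the counter starts at zero.
        self.state = [seed, 0]

    def _next_word(self):
        key, counter = self.state
        # A stand-in mixing function; real Philox applies several
        # rounds of multiply-and-xor over wide counters.
        value = (key * 6364136223846793005
                 + counter * 1442695040888963407) % (2**64)
        self.state[1] += 1  # generating a number advances the counter
        return value

    def uniform(self, n):
        # Each call consumes n counter values, so the state changes.
        return [self._next_word() / 2**64 for _ in range(n)]


rng = ToyCounterRNG(1234)
before = list(rng.state)   # [1234, 0]
draws = rng.uniform(3)
after = list(rng.state)    # [1234, 3] -- the counter moved forward
```

Because the next output depends only on (key, counter), reconstructing the same state reproduces the same stream, which is what makes the generator checkpointable.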

There is also a global generator:

g = tf.random.get_global_generator()
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=...>

When creating a generator inside a tf.distribute.Strategy scope, each replica will get a different stream of random numbers.

For example, in this code:

strat = tf.distribute.MirroredStrategy(devices=["cpu:0", "cpu:1"])
with strat.scope():
  g = tf.random.Generator.from_seed(1)
  def f():
    return g.normal([])
  results = strat.run(f)

results[0] and results[1] will have different values.

If the generator is seeded (e.g. created via Generator.from_seed), the random numbers will be determined by the seed, even though different replicas get different numbers. One can think of a random number generated on a replica as a hash of the replica ID and a "master" random number that may be common to all replicas. Hence, the whole system is still deterministic.
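The "hash of the replica ID and a 'master' random number" idea can be sketched in plain Python. This is illustrative only (the helper name replica_stream is invented, and TensorFlow's actual derivation scheme differs), but it shows the same property: deterministic given the seed, yet distinct per replica:

```python
import hashlib


def replica_stream(master_seed, replica_id, n):
    """Derive a per-replica random stream from (master_seed, replica_id).

    Hashing the seed together with the replica ID gives each replica
    its own stream, while keeping the whole system deterministic.
    """
    out = []
    for i in range(n):
        msg = f"{master_seed}:{replica_id}:{i}".encode()
        digest = hashlib.sha256(msg).digest()
        # Map the first 8 digest bytes to a float in [0, 1).
        out.append(int.from_bytes(digest[:8], "big") / 2**64)
    return out


stream0 = replica_stream(1, 0, 4)  # replica 0's numbers
stream1 = replica_stream(1, 1, 4)  # replica 1's numbers: different,
                                   # but reproducible from the same seed
```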

(Note that the random numbers on different replicas are not correlated, even though they are deterministically derived from the same seed: no matter what statistics one computes on them, there will be no discernible correlation.)

Generators can be freely saved and restored using tf.train.Checkpoint. The checkpoint can be restored in a distribution strategy with a different number of replicas than the original strategy. If a replica ID is present in both the original and the new distribution strategy, its state will be properly restored (i.e. the random-number stream from the restored point will be the same as that from the saving point), unless the replicas had already diverged in their RNG call traces before saving (e.g. one replica had made one RNG call while another had made two). There is no such guarantee if the generator is saved inside a strategy scope and restored outside of any strategy scope, or vice versa.
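Conceptually, checkpointing a generator just captures its state, and restoring that state resumes the stream at the same point. A minimal pure-Python sketch of this idea, using the standard library's random module rather than tf.train.Checkpoint (the class name CheckpointableRNG is invented):

```python
import random


class CheckpointableRNG:
    """Wraps random.Random to mimic save/restore of a generator's state."""

    def __init__(self, seed):
        self._rng = random.Random(seed)

    def draw(self):
        return self._rng.random()

    def save(self):
        # Capture the full internal state, analogous to checkpointing
        # the tf.Variable that holds a Generator's state.
        return self._rng.getstate()

    def restore(self, state):
        self._rng.setstate(state)


g = CheckpointableRNG(1234)
g.draw()                              # advance the stream a bit
ckpt = g.save()                       # "checkpoint"
expected = [g.draw() for _ in range(3)]

g2 = CheckpointableRNG(0)             # fresh generator, different seed
g2.restore(ckpt)                      # restore resumes the same stream
resumed = [g2.draw() for _ in range(3)]
```

After restoring, g2 produces exactly the numbers g would have produced from the checkpointed state, which is the behavior the paragraph above describes for matching replica IDs.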

When a generator is created within the scope of tf.distribute.experimental.ParameterServerStrategy, the workers will share the generator's state (placed on one of the parameter servers). In this way the workers will still get different random-number streams, as stated above. (This is similar to replicas in a tf.distribute.MirroredStrategy sequentially accessing a generator created outside the strategy.) Each RNG call on a worker will incur a round-trip to a parameter server, which may have performance impacts. When creating a tf.distribute.experimental.ParameterServerStrategy, please make sure that the variable_partitioner argument won't shard small variables of shape [2] or [3] (because generator states must not be sharded). Ways to avoid sharding small variables include setting variable_partitioner to None or to tf.distribute.experimental.partitioners.MinSizePartitioner with a large enough min_shard_bytes (see tf.distribute.experimental.ParameterServerStrategy's documentation for more details).
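For example, a ParameterServerStrategy could be configured along these lines (a sketch, not a complete program: cluster_resolver is assumed to be an already-constructed tf.distribute.cluster_resolver.ClusterResolver, and the min_shard_bytes value is illustrative):

```python
strategy = tf.distribute.experimental.ParameterServerStrategy(
    cluster_resolver,
    variable_partitioner=(
        tf.distribute.experimental.partitioners.MinSizePartitioner(
            # Large enough that tiny variables, including generator
            # states of shape [2] or [3], are never sharded.
            min_shard_bytes=256 << 10,
            max_shards=2)))
```

Passing variable_partitioner=None would likewise avoid sharding, at the cost of not partitioning any variables at all.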

Args:
copy_from: a generator to be copied from.
state: a vector of dtype STATE_TYPE representing the initial state of the RNG, whose length and semantics are algorithm-specific.