tf.random.Generator

Random-number generator.

Example:

Creating a generator from a seed:

g = tf.random.Generator.from_seed(1234)
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[ 0.9356609 ,  1.0854305 , -0.93788373],
       [-0.5061547 ,  1.3169702 ,  0.7137579 ]], dtype=float32)>

Creating a generator from a non-deterministic state:

g = tf.random.Generator.from_non_deterministic_state()
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=...>

All the constructors allow explicitly choosing a Random-Number-Generation (RNG) algorithm. Supported algorithms are "philox" and "threefry". For example:

g = tf.random.Generator.from_seed(123, alg="philox")
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[ 0.8673864 , -0.29899067, -0.9310337 ],
       [-1.5828488 ,  1.2481191 , -0.6770643 ]], dtype=float32)>

CPU, GPU and TPU with the same algorithm and seed will generate the same integer random numbers. Floating-point results (such as the output of normal) may have small numerical discrepancies between different devices.
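As a small sketch of this (the seed and shape are arbitrary), integer draws such as those from uniform_full_int repeat exactly when a generator is recreated with the same algorithm and seed:

import tensorflow as tf

# Integer draws are bit-for-bit reproducible for a fixed algorithm
# and seed. uniform_full_int samples uniformly over the whole range
# of the requested integer dtype (tf.uint64 by default).
g = tf.random.Generator.from_seed(42, alg="philox")
ints = g.uniform_full_int(shape=(4,))

# Recreating the generator with the same seed reproduces the stream.
g2 = tf.random.Generator.from_seed(42, alg="philox")
assert (g2.uniform_full_int(shape=(4,)).numpy() == ints.numpy()).all()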

This class uses a tf.Variable to manage its internal state. Every time random numbers are generated, the state of the generator will change. For example:

g = tf.random.Generator.from_seed(1234)
g.state
<tf.Variable ... numpy=array([1234,    0,    0])>
g.normal(shape=(2, 3))
<...>
g.state
<tf.Variable ... numpy=array([2770,    0,    0])>

The shape of the state is algorithm-specific.
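For illustration (a sketch, assuming both algorithms are available on the current device), the two supported algorithms keep state vectors of different lengths:

import tensorflow as tf

g_philox = tf.random.Generator.from_seed(1, alg="philox")
g_threefry = tf.random.Generator.from_seed(1, alg="threefry")
print(g_philox.state.shape)    # Philox keeps a length-3 state vector
print(g_threefry.state.shape)  # ThreeFry keeps a length-2 state vector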

There is also a global generator:

g = tf.random.get_global_generator()
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=...>

When creating a generator inside a tf.distribute.Strategy scope, each replica will get a different stream of random numbers.

For example, in this code:

strat = tf.distribute.MirroredStrategy(devices=["cpu:0", "cpu:1"])
with strat.scope():
  g = tf.random.Generator.from_seed(1)
  def f():
    return g.normal([])
  results = strat.run(f).values

results[0] and results[1] will have different values.

If the generator is seeded (e.g. created via Generator.from_seed), the random numbers will be determined by the seed, even though different replicas get different numbers. One can think of a random number generated on a replica as a hash of the replica ID and a "master" random number that may be common to all replicas. Hence, the whole system is still deterministic.

(Note that the random numbers on different replicas are not correlated, even though they are deterministically determined by the same seed: no matter what statistics one computes on them, there won't be any discernible correlation.)

Generators can be freely saved and restored using tf.train.Checkpoint. The checkpoint can be restored in a distribution strategy with a different number of replicas than the original strategy. If a replica ID is present in both the original and the new distribution strategy, its state will be properly restored (i.e. the random-number stream from the restored point will be the same as that from the saving point) unless the replicas had already diverged in their RNG call traces before saving (e.g. one replica has made one RNG call while another has made two RNG calls). There is no such guarantee if the generator is saved in a strategy scope and restored outside of any strategy scope, or vice versa.
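A minimal sketch of checkpointing a single generator outside any strategy scope (the checkpoint path is illustrative):

import tensorflow as tf

g = tf.random.Generator.from_seed(1)
ckpt = tf.train.Checkpoint(generator=g)
path = ckpt.save("/tmp/generator_ckpt")  # illustrative path

g.normal([])        # advance the state past the saved point
ckpt.restore(path)  # rewind the generator to the saved state
# The next draw repeats the random-number stream from the saving point.
print(g.normal([]))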

Args:
  copy_from: a generator to be copied from.
  state: a vector of dtype STATE_TYPE representing the initial state of the RNG, whose length and semantics are algorithm-specific. If it's a variable, the generator will reuse it instead of creating a new variable.
  alg: the RNG algorithm. Possible values are tf.random.Algorithm.PHILOX for the Philox algorithm and tf.random.Algorithm.THREEFRY for the ThreeFry algorithm.
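A sketch of the constructor forms these arguments describe (the explicit state values are illustrative; string aliases such as "philox" are also accepted for alg, as in the examples above):

import tensorflow as tf

# Construct from an explicit state vector (length 3 for Philox).
g1 = tf.random.Generator(state=[1, 2, 3], alg="philox")
# Construct by copying another generator's algorithm and state.
g2 = tf.random.Generator(copy_from=g1)
print(g2.normal(shape=(2, 3)))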