Random-number generator.
tf.random.Generator(
copy_from=None, state=None, alg=None
)
Example:
Creating a generator from a seed:
g = tf.random.Generator.from_seed(1234)
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[ 0.9356609 , 1.0854305 , 0.93788373],
[0.5061547 , 1.3169702 , 0.7137579 ]], dtype=float32)>
Creating a generator from a non-deterministic state:
g = tf.random.Generator.from_non_deterministic_state()
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=...>
All the constructors allow explicitly choosing a random-number generation (RNG) algorithm. Supported algorithms are "philox" and "threefry". For example:
g = tf.random.Generator.from_seed(123, alg="philox")
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[ 0.8673864 , 0.29899067, 0.9310337 ],
[1.5828488 , 1.2481191 , 0.6770643 ]], dtype=float32)>
CPU, GPU and TPU with the same algorithm and seed will generate the same integer random numbers. Floating-point results (such as the output of normal) may have small numerical discrepancies between different devices.
This class uses a tf.Variable
to manage its internal state. Every time
random numbers are generated, the state of the generator will change. For
example:
g = tf.random.Generator.from_seed(1234)
g.state
<tf.Variable ... numpy=array([1234, 0, 0])>
g.normal(shape=(2, 3))
<...>
g.state
<tf.Variable ... numpy=array([2770, 0, 0])>
The shape of the state is algorithm-specific.
There is also a global generator:
g = tf.random.get_global_generator()
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=...>
When creating a generator inside a tf.distribute.Strategy
scope, each
replica will get a different stream of random numbers.
For example, in this code:
strat = tf.distribute.MirroredStrategy(devices=["cpu:0", "cpu:1"])
with strat.scope():
  g = tf.random.Generator.from_seed(1)
  def f():
    return g.normal([])
results = strat.run(f).values
results[0]
and results[1]
will have different values.
If the generator is seeded (e.g. created via Generator.from_seed
), the
random numbers will be determined by the seed, even though different replicas
get different numbers. One can think of a random number generated on a
replica as a hash of the replica ID and a "master" random number that may be
common to all replicas. Hence, the whole system is still deterministic.
(Note that the random numbers on different replicas are not correlated, even though they are deterministically determined by the same seed: no matter what statistics one calculates on them, there won't be any discernible correlation.)
Generators can be freely saved and restored using tf.train.Checkpoint
. The
checkpoint can be restored in a distribution strategy with a different number
of replicas than the original strategy. If a replica ID is present in both the
original and the new distribution strategy, its state will be properly
restored (i.e. the random-number stream from the restored point will be the same as that from the saving point) unless the replicas have already diverged in their RNG call traces before saving (e.g. one replica has made one RNG call while another has made two RNG calls). There is no such guarantee if the generator is saved in a strategy scope and restored outside of any strategy scope, or vice versa.
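As a minimal sketch of this save/restore round trip (outside any strategy scope; the checkpoint location and the `generator` attribute name are illustrative):

```python
import os
import tempfile

import tensorflow as tf

# Save a seeded generator, draw from it, then restore the saved state
# into a second generator; the restored generator replays the stream.
g = tf.random.Generator.from_seed(1)
ckpt = tf.train.Checkpoint(generator=g)
path = ckpt.write(os.path.join(tempfile.mkdtemp(), "gen"))

before = g.normal([3])            # advances g past the saved point

g2 = tf.random.Generator.from_seed(999)   # state will be overwritten
tf.train.Checkpoint(generator=g2).read(path)
after = g2.normal([3])            # same numbers as `before`
```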
When a generator is created within the scope of
tf.distribute.experimental.ParameterServerStrategy
, the workers
will share the generator's state (placed on one of the parameter
servers). In this way the workers will still get different
random-number streams, as stated above. (This is similar to replicas
in a tf.distribute.MirroredStrategy
sequentially accessing a
generator created outside the strategy.) Each RNG call on a worker
will incur a round trip to a parameter server, which may have
performance impacts. When creating a
tf.distribute.experimental.ParameterServerStrategy
, please make
sure that the variable_partitioner
argument won't shard small
variables of shape [2]
or [3]
(because generator states must not
be sharded). Ways to avoid sharding small variables include setting
variable_partitioner
to None
or to
tf.distribute.experimental.partitioners.MinSizePartitioner
with a
large enough min_shard_bytes
(see
tf.distribute.experimental.ParameterServerStrategy
's documentation
for more details).
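A sketch of keeping small variables unsharded with MinSizePartitioner; the byte threshold below is illustrative:

```python
import tensorflow as tf

# Require a large minimum shard size so that tiny variables (such as
# generator states of shape [2] or [3]) are never split across shards.
partitioner = tf.distribute.experimental.partitioners.MinSizePartitioner(
    min_shard_bytes=256 << 10,  # 256 KiB, an illustrative threshold
    max_shards=2)

# A shape-[3] int64 variable is far below the threshold, so the
# partitioner assigns it a single shard along its only axis.
parts = partitioner(tf.TensorShape([3]), tf.int64)
```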
Args
  copy_from: a generator to be copied from.
  state: a vector of dtype STATE_TYPE representing the initial state of the RNG, whose length and semantics are algorithm-specific. If it's a variable, the generator will reuse it instead of creating a new variable.
  alg: the RNG algorithm. Possible values are tf.random.Algorithm.PHILOX for the Philox algorithm and tf.random.Algorithm.THREEFRY for the ThreeFry algorithm (see the paper "Parallel Random Numbers: As Easy as 1, 2, 3" [https://www.thesalmons.org/john/random123/papers/random123sc11.pdf]). The string names "philox" and "threefry" can also be used. Note that PHILOX guarantees the same numbers are produced (given the same random state) across all architectures (CPU, GPU, XLA etc.).
Methods
binomial
binomial(
    shape, counts, probs, dtype=tf.dtypes.int32, name=None
)
Outputs random values from a binomial distribution.
The generated values follow a binomial distribution with specified count and probability of success parameters.
Example:
counts = [10., 20.]
# Probability of success.
probs = [0.8]
rng = tf.random.Generator.from_seed(seed=234)
binomial_samples = rng.binomial(shape=[2], counts=counts, probs=probs)
counts = ... # Shape [3, 1, 2]
probs = ... # Shape [1, 4, 2]
shape = [3, 4, 3, 4, 2]
rng = tf.random.Generator.from_seed(seed=1717)
# Sample shape will be [3, 4, 3, 4, 2]
binomial_samples = rng.binomial(shape=shape, counts=counts, probs=probs)
Args
  shape: a 1-D integer Tensor or Python array. The shape of the output tensor.
  counts: Tensor. The counts of the binomial distribution. Must be broadcastable with probs, and broadcastable with the rightmost dimensions of shape.
  probs: Tensor. The probability of success for the binomial distribution. Must be broadcastable with counts and broadcastable with the rightmost dimensions of shape.
  dtype: the type of the output. Default: tf.int32.
  name: a name for the operation (optional).

Returns
  samples: a Tensor of the specified shape filled with random binomial values. For each i, each samples[i, ...] is an independent draw from the binomial distribution on counts[i] trials with probability of success probs[i].
from_key_counter
@classmethod
from_key_counter( key, counter, alg )
Creates a generator from a key and a counter.
This constructor only applies if the algorithm is a counter-based algorithm.
See method key for the meaning of "key" and "counter".
Args
  key: the key for the RNG, a scalar of type STATE_TYPE.
  counter: a vector of dtype STATE_TYPE representing the initial counter for the RNG, whose length is algorithm-specific.
  alg: the RNG algorithm. If None, it will be auto-selected. See __init__ for its possible values.

Returns
  The new generator.
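For example (a sketch; the length-2 counter below assumes the Philox state layout, where the state holds a 2-word counter plus a scalar key):

```python
import tensorflow as tf

g = tf.random.Generator.from_key_counter(key=1, counter=[0, 0], alg="philox")
x = g.normal(shape=(2,))

# Rebuilding from the same key and counter reproduces the same stream.
g2 = tf.random.Generator.from_key_counter(key=1, counter=[0, 0], alg="philox")
y = g2.normal(shape=(2,))
```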
from_non_deterministic_state
@classmethod
from_non_deterministic_state( alg=None )
Creates a generator by non-deterministically initializing its state.
The source of the non-determinism is platform- and time-dependent.
Args
  alg: (optional) the RNG algorithm. If None, it will be auto-selected. See __init__ for its possible values.

Returns
  The new generator.
from_seed
@classmethod
from_seed( seed, alg=None )
Creates a generator from a seed.
A seed is a 1024-bit unsigned integer represented either as a Python integer or as a vector of integers. Seeds shorter than 1024 bits will be padded. The padding, the internal structure of a seed and the way a seed is converted to a state are all opaque (unspecified). The only semantic specification of seeds is that two different seeds are likely to produce two independent generators (but there is no guarantee).
Args
  seed: the seed for the RNG.
  alg: (optional) the RNG algorithm. If None, it will be auto-selected. See __init__ for its possible values.

Returns
  The new generator.
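For example, a seed can be a Python integer or a vector of integers, and the same seed always reproduces the same stream:

```python
import tensorflow as tf

a = tf.random.Generator.from_seed(1234).normal([2])
b = tf.random.Generator.from_seed(1234).normal([2])   # same numbers as a

g = tf.random.Generator.from_seed([12, 34, 56])       # vector seed
c = g.normal([2])
```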
from_state
@classmethod
from_state( state, alg )
Creates a generator from a state.
See __init__
for description of state
and alg
.
Args
  state: the new state.
  alg: the RNG algorithm.

Returns
  The new generator.
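For example, the state example earlier shows that from_seed(1234) starts from the Philox state [1234, 0, 0], so constructing from that state directly should yield the same stream (a sketch based on that example):

```python
import tensorflow as tf

state = tf.constant([1234, 0, 0], dtype=tf.int64)
g = tf.random.Generator.from_state(state, alg="philox")
x = g.normal([2])

# Matches a generator seeded with 1234, whose initial state is [1234, 0, 0].
y = tf.random.Generator.from_seed(1234).normal([2])
```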
make_seeds
make_seeds(
count=1
)
Generates seeds for stateless random ops.
For example:
seeds = tf.random.get_global_generator().make_seeds(count=10)
for i in range(10):
  seed = seeds[:, i]
  numbers = tf.random.stateless_normal(shape=[2, 3], seed=seed)
  ...
Args
  count: the number of seed pairs (note that stateless random ops need a pair of seeds to invoke).

Returns
  A tensor of shape [2, count] and dtype int64.
normal
normal(
    shape, mean=0.0, stddev=1.0, dtype=tf.dtypes.float32, name=None
)
Outputs random values from a normal distribution.
Args
  shape: a 1-D integer Tensor or Python array. The shape of the output tensor.
  mean: a 0-D Tensor or Python value of type dtype. The mean of the normal distribution.
  stddev: a 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution.
  dtype: the type of the output.
  name: a name for the operation (optional).

Returns
  A tensor of the specified shape filled with random normal values.
reset
reset(
state
)
Resets the generator by a new state.
See __init__
for the meaning of "state".
Args
  state: the new state.
reset_from_key_counter
reset_from_key_counter(
key, counter
)
Resets the generator by a new keycounter pair.
See from_key_counter
for the meaning of "key" and "counter".
Args
  key: the new key.
  counter: the new counter.
reset_from_seed
reset_from_seed(
seed
)
Resets the generator by a new seed.
See from_seed
for the meaning of "seed".
Args
  seed: the new seed.
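For example, resetting to the original seed rewinds the generator and replays the same stream:

```python
import tensorflow as tf

g = tf.random.Generator.from_seed(5)
first = g.normal([3])
_ = g.normal([3])       # advance the state further

g.reset_from_seed(5)    # rewind to the initial state
again = g.normal([3])   # same numbers as `first`
```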
skip
skip(
delta
)
Advances the counter of a counter-based RNG.
Args
  delta: the amount of advancement. The state of the RNG after skip(n) will be the same as that after normal([n]) (or any other distribution). The actual increment added to the counter is an unspecified implementation detail.

Returns
  A Tensor of type int64.

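The documented equivalence between skip(n) and drawing n samples can be sketched as:

```python
import tensorflow as tf

g1 = tf.random.Generator.from_seed(42)
g2 = tf.random.Generator.from_seed(42)

_ = g1.normal([4])  # consume 4 samples
g2.skip(4)          # advance the counter without generating numbers

# Both generators are now at the same point in the stream.
a = g1.normal([2])
b = g2.normal([2])
```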
split
split(
count=1
)
Returns a list of independent Generator
objects.
Two generators are independent of each other in the sense that the random-number streams they generate don't have statistically detectable correlations. The new generators are also independent of the old one. The old generator's state will be changed (like other random-number generating methods), so two calls of split will return different new generators.
For example:
gens = tf.random.get_global_generator().split(count=10)
for gen in gens:
  numbers = gen.normal(shape=[2, 3])
  # ...
gens2 = tf.random.get_global_generator().split(count=10)
# gens2 will be different from gens
The new generators will be put on the current device (possibly different from the old generator's). For example:
with tf.device("/device:CPU:0"):
  gen = tf.random.Generator.from_seed(1234)  # gen is on CPU
with tf.device("/device:GPU:0"):
  gens = gen.split(count=10)  # gens are on GPU
Args
  count: the number of generators to return.

Returns
  A list (of length count) of Generator objects independent of each other. The new generators have the same RNG algorithm as the old one.
truncated_normal
truncated_normal(
    shape, mean=0.0, stddev=1.0, dtype=tf.dtypes.float32, name=None
)
Outputs random values from a truncated normal distribution.
The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and repicked.
Args
  shape: a 1-D integer Tensor or Python array. The shape of the output tensor.
  mean: a 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution.
  stddev: a 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution, before truncation.
  dtype: the type of the output.
  name: a name for the operation (optional).

Returns
  A tensor of the specified shape filled with random truncated normal values.
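A small sketch of the truncation property:

```python
import tensorflow as tf

g = tf.random.Generator.from_seed(7)
x = g.truncated_normal(shape=(1000,))

# Values more than 2 standard deviations from the mean are dropped and
# re-picked, so every sample lies within 2 stddevs of the mean.
max_abs = float(tf.reduce_max(tf.abs(x)))
```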
uniform
uniform(
    shape, minval=0, maxval=None, dtype=tf.dtypes.float32, name=None
)
Outputs random values from a uniform distribution.
The generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded. (For floating-point numbers, especially low-precision types like bfloat16, the result may sometimes include maxval because of rounding.)
For floats, the default range is [0, 1). For ints, at least maxval must be specified explicitly.
In the integer case, the random integers are slightly biased unless maxval - minval is an exact power of two. The bias is small for values of maxval - minval significantly smaller than the range of the output (either 2**32 or 2**64).
For full-range random integers, pass minval=None and maxval=None with an integer dtype (for integer dtypes, minval and maxval must be both None or both not None).
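A short sketch of these rules:

```python
import tensorflow as tf

g = tf.random.Generator.from_seed(10)

# Floats default to the range [0, 1).
f = g.uniform(shape=(100,))

# Integers require an explicit maxval.
i = g.uniform(shape=(100,), minval=0, maxval=6, dtype=tf.int32)

# Full-range integers: both bounds None with an integer dtype.
full = g.uniform(shape=(3,), minval=None, maxval=None, dtype=tf.int64)
```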
Args
  shape: a 1-D integer Tensor or Python array. The shape of the output tensor.
  minval: a Tensor or Python value of type dtype, broadcastable with shape (for integer types, broadcasting is not supported, so it needs to be a scalar). The lower bound (included) on the range of random values to generate. Pass None for full-range integers. Defaults to 0.
  maxval: a Tensor or Python value of type dtype, broadcastable with shape (for integer types, broadcasting is not supported, so it needs to be a scalar). The upper bound (excluded) on the range of random values to generate. Pass None for full-range integers. Defaults to 1 if dtype is floating point.
  dtype: the type of the output.
  name: a name for the operation (optional).

Returns
  A tensor of the specified shape filled with random uniform values.

Raises
  ValueError: if dtype is integral and maxval is not specified.
uniform_full_int
uniform_full_int(
    shape, dtype=tf.dtypes.uint64, name=None
)
Uniform distribution on an integer type's entire range.
This method is the same as setting minval and maxval to None in the uniform method.
Args
  shape: the shape of the output.
  dtype: (optional) the integer type, defaulting to uint64.
  name: (optional) the name of the node.

Returns
  A tensor of random numbers of the required shape.
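For example, the two calls below should produce identical numbers given the same seed, since uniform_full_int is documented as equivalent to uniform with both bounds set to None:

```python
import tensorflow as tf

x = tf.random.Generator.from_seed(3).uniform_full_int(
    shape=(4,), dtype=tf.uint32)
y = tf.random.Generator.from_seed(3).uniform(
    shape=(4,), minval=None, maxval=None, dtype=tf.uint32)
```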