See the Variables Guide.
```python
tf.compat.v1.Variable(
    initial_value=None, trainable=None, collections=None, validate_shape=True,
    caching_device=None, name=None, variable_def=None, dtype=None,
    expected_shape=None, import_scope=None, constraint=None, use_resource=None,
    synchronization=tf.VariableSynchronization.AUTO,
    aggregation=tf.compat.v1.VariableAggregation.NONE, shape=None
)
```
A variable maintains state in the graph across calls to `run()`. You add a variable to the graph by constructing an instance of the class `Variable`.

The `Variable()` constructor requires an initial value for the variable, which can be a `Tensor` of any type and shape. The initial value defines the type and shape of the variable. After construction, the type and shape of the variable are fixed. The value can be changed using one of the assign methods.
If you want to change the shape of a variable later you have to use an `assign` Op with `validate_shape=False`.
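For example, a minimal sketch of a shape-changing assign (the values are arbitrary, and graph mode is assumed, as in the rest of this page):

```python
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

v = tf.compat.v1.Variable([1.0, 2.0])  # initial shape (2,)
# validate_shape=False relaxes the static shape check, so this
# assign can give 'v' a new shape.
grow = tf.compat.v1.assign(v, [1.0, 2.0, 3.0], validate_shape=False)

with tf.compat.v1.Session() as sess:
    sess.run(v.initializer)
    print(sess.run(grow))  # [1. 2. 3.]
```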
Just like any `Tensor`, variables created with `Variable()` can be used as inputs for other Ops in the graph. Additionally, all the operators overloaded for the `Tensor` class are carried over to variables, so you can also add nodes to the graph by just doing arithmetic on variables.
```python
import tensorflow as tf

# Create a variable.
w = tf.Variable(<initial-value>, name=<optional-name>)

# Use the variable in the graph like any Tensor.
y = tf.matmul(w, ...another variable or tensor...)

# The overloaded operators are available too.
z = tf.sigmoid(w + y)

# Assign a new value to the variable with `assign()` or a related method.
w.assign(w + 1.0)
w.assign_add(1.0)
```
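Filled in with concrete values, the same pattern runs end to end; a minimal sketch (shapes and names are arbitrary):

```python
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

w = tf.compat.v1.Variable(tf.ones([2, 2]), name="w")
x = tf.compat.v1.Variable(tf.ones([2, 2]) * 2.0, name="x")

y = tf.matmul(w, x)    # a variable feeds an Op like any Tensor
z = tf.sigmoid(w + y)  # overloaded operators add graph nodes

with tf.compat.v1.Session() as sess:
    sess.run([w.initializer, x.initializer])
    print(sess.run(z))
```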
When you launch the graph, variables have to be explicitly initialized before you can run Ops that use their value. You can initialize a variable by running its initializer op, restoring the variable from a save file, or simply running an `assign` Op that assigns a value to the variable. In fact, the variable initializer op is just an `assign` Op that assigns the variable's initial value to the variable itself.
```python
# Launch the graph in a session.
with tf.compat.v1.Session() as sess:
    # Run the variable initializer.
    sess.run(w.initializer)
    # ...you now can run ops that use the value of 'w'...
```
The most common initialization pattern is to use the convenience function `global_variables_initializer()` to add an Op to the graph that initializes all the variables. You then run that Op after launching the graph.
```python
# Add an Op to initialize global variables.
init_op = tf.compat.v1.global_variables_initializer()

# Launch the graph in a session.
with tf.compat.v1.Session() as sess:
    # Run the Op that initializes global variables.
    sess.run(init_op)
    # ...you can now run any Op that uses variable values...
```
If you need to create a variable with an initial value dependent on another variable, use the other variable's `initialized_value()`. This ensures that variables are initialized in the right order.
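For instance, a minimal sketch (the shapes and names are arbitrary):

```python
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

# A variable with a random initial value.
weights = tf.compat.v1.Variable(
    tf.random.normal([784, 200], stddev=0.35), name="weights")
# 'w2' starts from the initialized value of 'weights', so 'weights'
# is guaranteed to be initialized first.
w2 = tf.compat.v1.Variable(weights.initialized_value(), name="w2")
# The initial value can also be a function of the other variable.
w_twice = tf.compat.v1.Variable(
    weights.initialized_value() * 2.0, name="w_twice")
```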
All variables are automatically collected in the graph in which they are created. By default, the constructor adds the new variable to the graph collection `GraphKeys.GLOBAL_VARIABLES`. The convenience function `global_variables()` returns the contents of that collection.
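A minimal sketch of inspecting that collection (the name is arbitrary):

```python
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

v = tf.compat.v1.Variable(0.0, name="v")
# The constructor added 'v' to GraphKeys.GLOBAL_VARIABLES.
assert v in tf.compat.v1.global_variables()
```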
When building a machine learning model it is often convenient to distinguish between variables holding the trainable model parameters and other variables such as a global step variable used to count training steps. To make this easier, the variable constructor supports a `trainable=<bool>` parameter. If `True`, the new variable is also added to the graph collection `GraphKeys.TRAINABLE_VARIABLES`. The convenience function `trainable_variables()` returns the contents of this collection. The `Optimizer` classes use this collection as the default list of variables to optimize.
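For example, a typical split between model parameters and bookkeeping state (a minimal sketch; shapes and names are arbitrary):

```python
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

weights = tf.compat.v1.Variable(tf.zeros([10]), name="weights")  # trainable by default
global_step = tf.compat.v1.Variable(0, trainable=False, name="global_step")

assert weights in tf.compat.v1.trainable_variables()
assert global_step not in tf.compat.v1.trainable_variables()
```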
WARNING: `tf.Variable` objects by default have a non-intuitive memory model. A `Variable` is represented internally as a mutable `Tensor` which can non-deterministically alias other `Tensor`s in a graph. Avoid writing code which relies on the value of a `Variable` either changing or not changing as other operations happen. For example, using `Variable` objects or simple functions thereof as predicates in a `tf.cond` is dangerous and error-prone:

```python
v = tf.Variable(True)
tf.cond(v, lambda: v.assign(False), my_false_fn)  # Note: this is broken.
```
Here, adding `use_resource=True` when constructing the variable will fix any nondeterminism issues:
```python
v = tf.Variable(True, use_resource=True)
tf.cond(v, lambda: v.assign(False), my_false_fn)
```
To use the replacement for variables which does not have these issues, add `use_resource=True` when constructing `tf.Variable`, or call `tf.compat.v1.get_variable(..., use_resource=True)`.
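A minimal sketch of the `get_variable` form (the name, shape, and initializer are arbitrary):

```python
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

# Resource variables have well-defined read/write semantics.
v = tf.compat.v1.get_variable(
    "v", shape=[2, 2],
    initializer=tf.compat.v1.zeros_initializer(),
    use_resource=True)
```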