tf.keras.Variable

Represents a backend-agnostic variable in Keras.

A Variable acts as a container for state. It holds a tensor value and can be updated. With the JAX backend, variables are used to implement "functionalization", the pattern of lifting stateful operations out of a piece of computation to turn it into a stateless function.
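The functionalization pattern can be sketched in plain Python, independent of Keras. This is a conceptual illustration only; `Counter` and `stateless_increment` are hypothetical names, not Keras APIs:

```python
# A stateful operation: calling it mutates hidden state,
# which is analogous to a Variable's stored value.
class Counter:
    def __init__(self):
        self.count = 0  # hidden state

    def increment(self):
        self.count += 1
        return self.count

# The same computation "functionalized": the state is lifted out
# into an explicit argument and returned alongside the output,
# so the function itself is pure (stateless).
def stateless_increment(count):
    new_count = count + 1
    return new_count, new_count  # (output, new state)

out, state = stateless_increment(0)        # out == 1, state == 1
out2, state2 = stateless_increment(state)  # caller threads the state through
```

With the JAX backend, Keras performs this kind of lifting for you: variable values go in as inputs and updated values come out as outputs, so the wrapped computation stays pure.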

Args:

initializer: Initial value or callable for initialization. If a callable is used, it should take the arguments shape and dtype.
shape: Optional. Tuple for the variable's shape. Required if initializer is a callable.
dtype: Optional. Data type of the variable. Defaults to the global float dtype ("float32" if never configured).
trainable: Optional. Boolean indicating if the variable is trainable. Defaults to True.
name: Optional. A unique name for the variable. Automatically generated if not set.

Examples:

Initializing a Variable with a NumPy array:

import numpy as np
import keras

# The shape is inferred from the initial array.
initial_array = np.ones((3, 3))
variable_from_array = keras.Variable(initializer=initial_array)

Using a Keras initializer to create a Variable:

from keras.initializers import Ones

# With a callable initializer, the shape must be given explicitly.
variable_from_initializer = keras.Variable(
    initializer=Ones(), shape=(3, 3), dtype="float32"
)

Updating the value of a Variable:

# assign() replaces the stored value; the new value must match
# the variable's shape.
new_value = np.zeros((3, 3), dtype="float32")
variable_from_array.assign(new_value)

Marking a Variable as non-trainable:

# Non-trainable variables are excluded from gradient-based updates.
non_trainable_variable = keras.Variable(
    initializer=np.ones((3, 3), dtype="float32"), trainable=False
)

Attributes:

name: The name of the variable (string).
path: The path of the variable within the Keras model or layer (string).
dtype: The data type of the variable (string).
shape: The shape of the variable (tuple of integers).
ndim: The number of dimensions of the variable (integer).
trainable: Whether the variable is trainable (boolean).
value: The current value of the variable (NumPy array or tensor).
aggregation

constraint

handle

overwrite_with_gradient Whether this variable should be overwritten by the gradient.

This property is designed for a special case where we want to overwrite the variable directly with its computed gradient. For example, in float8 training, new scale and amax_history values are computed as gradients, and we want to write them to the variable verbatim instead of applying the usual update procedure (gradient descent with a learning rate, gradient clipping, weight decay).
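A rough sketch of how an optimizer update might branch on this flag. This is conceptual only; `apply_update` is a hypothetical helper, not Keras's actual optimizer code:

```python
def apply_update(value, gradient, learning_rate, overwrite_with_gradient):
    # When the flag is set, the "gradient" is really the new value
    # (e.g. a freshly computed float8 scale), so it is taken verbatim.
    if overwrite_with_gradient:
        return gradient
    # Otherwise apply the usual gradient-descent step.
    return value - learning_rate * gradient

# Regular variable: standard SGD step.
w = apply_update(1.0, 0.5, learning_rate=0.1, overwrite_with_gradient=False)
# Scale variable in float8 training: overwritten directly.
s = apply_update(1.0, 0.25, learning_rate=0.1, overwrite_with_gradient=True)
```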

regularizer

Methods

assign

assign_add

assign_sub

numpy

Operator overloads

__abs__, __add__, __and__, __array__, __bool__, __eq__, __floordiv__, __ge__, __getitem__, __gt__, __invert__, __le__, __lt__, __matmul__, __mod__, __mul__, __ne__, __neg__, __or__, __pos__, __pow__, __radd__, __rand__, __rfloordiv__, __rmatmul__, __rmod__, __rmul__, __ror__, __rpow__, __rsub__, __rtruediv__, __rxor__, __sub__, __truediv__, __xor__

The comparison operators follow standard Python semantics: __eq__ returns self == value, __ne__ returns self != value, __lt__ returns self < value, __le__ returns self <= value, __gt__ returns self > value, and __ge__ returns self >= value.
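As a rough illustration of how such operator overloads can delegate to an underlying value, here is a simplified stand-in, not Keras's actual implementation (`Box` is a hypothetical class for this sketch):

```python
import numpy as np

class Box:
    """Minimal container that forwards arithmetic to its stored array."""

    def __init__(self, value):
        self.value = np.asarray(value)

    def __add__(self, other):
        return self.value + other   # Box + x

    def __radd__(self, other):
        return other + self.value   # x + Box

    def __matmul__(self, other):
        return self.value @ other   # Box @ x

b = Box(np.ones((2, 2)))
result = b + 1            # elementwise add, returns a plain array
product = b @ np.eye(2)   # matrix multiply via __matmul__
```

A Keras Variable works analogously: arithmetic on the variable operates on its current value and returns an ordinary backend tensor, not a new Variable.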