A Layer that calculates unitary matrices of circuits.
tfq.layers.Unitary(
**kwargs
)
The Unitary layer can function in several different ways. The first: given an input circuit and a set of parameter values, calculate the unitary matrix for each parameter setting and output the results to the TensorFlow graph.
```python
import cirq
import numpy as np
import sympy
import tensorflow_quantum as tfq

a_symbol = sympy.Symbol('alpha')
qubit = cirq.GridQubit(0, 0)
my_circuit = cirq.Circuit(cirq.H(qubit) ** a_symbol)
some_values = np.array([[0.5], [3.2]])
unitary = tfq.layers.Unitary()
unitary(my_circuit, symbol_names=[a_symbol], symbol_values=some_values)
<tf.RaggedTensor [[[(0.85355+0.14645j), (0.35355-0.35355j)],
                   [(0.35355-0.35355j), (0.14644+0.85355j)]],
                  [[(0.73507-0.08607j), (0.63958+0.20781j)],
                   [(0.63958+0.20781j), (-0.54409-0.50171j)]]]>
```
The second use case doesn't leverage batch computation or input tensors, but is very useful for testing and quick debugging:
```python
quick_verify = cirq.Circuit(cirq.X(cirq.GridQubit(0, 0)))
tfq.layers.Unitary()(quick_verify)
<tf.RaggedTensor [[[0, 1], [1, 0]]]>
```
The last and most complex supported use case is one that handles batches of circuits in addition to batches of parameter values. The only constraint is that values be supplied for all symbols in all circuits:
```python
a_symbol = sympy.Symbol('beta')
q = cirq.GridQubit(0, 0)
first_circuit = cirq.Circuit(cirq.X(q) ** a_symbol)
second_circuit = cirq.Circuit(cirq.Y(q) ** a_symbol)
some_values = np.array([[1.0], [0.5]])
unitary = tfq.layers.Unitary()
# Calculates the unitaries for X**1.0 and Y**0.5.
unitary([first_circuit, second_circuit],
        symbol_names=[a_symbol], symbol_values=some_values)
<tf.RaggedTensor [[[0, 1], [1, 0]],
                  [[(0.5+0.5j), (-0.5-0.5j)], [(0.5+0.5j), (0.5+0.5j)]]]>
```
Methods
add_loss
add_loss(
loss
)
Can be called inside the call() method to add a scalar loss.
Example:
```python
from keras import ops
from keras.layers import Layer

class MyLayer(Layer):
    ...
    def call(self, x):
        self.add_loss(ops.sum(x))
        return x
```
add_metric
add_metric(
*args, **kwargs
)
add_variable
add_variable(
shape,
initializer,
dtype=None,
trainable=True,
autocast=True,
regularizer=None,
constraint=None,
name=None
)
Add a weight variable to the layer.
Alias of add_weight().
add_weight
add_weight(
shape=None,
initializer=None,
dtype=None,
trainable=True,
autocast=True,
regularizer=None,
constraint=None,
aggregation='none',
overwrite_with_gradient=False,
name=None
)
Add a weight variable to the layer.
| Args | |
|---|---|
| shape | Shape tuple for the variable. Must be fully-defined (no None entries). Defaults to () (scalar) if unspecified. |
| initializer | Initializer object to use to populate the initial variable value, or string name of a built-in initializer (e.g. "random_normal"). If unspecified, defaults to "glorot_uniform" for floating-point variables and to "zeros" for all other types (e.g. int, bool). |
| dtype | Dtype of the variable to create, e.g. "float32". If unspecified, defaults to the layer's variable dtype (which itself defaults to "float32" if unspecified). |
| trainable | Boolean, whether the variable should be trainable via backprop or whether its updates are managed manually. Defaults to True. |
| autocast | Boolean, whether to autocast the layer's variables when accessing them. Defaults to True. |
| regularizer | Regularizer object to call to apply a penalty on the weight. These penalties are summed into the loss function during optimization. Defaults to None. |
| constraint | Constraint object to call on the variable after any optimizer update, or string name of a built-in constraint. Defaults to None. |
| aggregation | Optional string, one of None, "none", "mean", "sum" or "only_first_replica". Annotates the variable with the type of multi-replica aggregation to be used for this variable when writing custom data parallel training loops. Defaults to "none". |
| overwrite_with_gradient | Boolean, whether to overwrite the variable with the computed gradient. This is useful for float8 training. Defaults to False. |
| name | String name of the variable. Useful for debugging purposes. |
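For illustration, a minimal sketch of add_weight inside a custom layer's build() method; the ScaledSum layer and its single scalar weight are hypothetical, not part of this API:

```python
import keras

class ScaledSum(keras.layers.Layer):
    """Hypothetical layer: sums inputs and scales by a learned scalar."""

    def build(self, input_shape):
        # One trainable scalar weight, initialized to 1.0.
        self.scale = self.add_weight(
            shape=(),
            initializer="ones",
            trainable=True,
            name="scale",
        )

    def call(self, x):
        return self.scale * keras.ops.sum(x, axis=-1)
```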
build
build(
input_shape
)
build_from_config
build_from_config(
config
)
Builds the layer's states with the supplied config dict.
By default, this method calls the build(config["input_shape"]) method,
which creates weights based on the layer's input shape in the supplied
config. If your config contains other information needed to load the
layer's state, you should override this method.
| Args | |
|---|---|
| config | Dict containing the input shape associated with this layer. |
call
call(
inputs, *, symbol_names=None, symbol_values=None
)
Keras call function.
| Input options | |
|---|---|
| inputs, symbol_names, symbol_values | See input_checks.expand_circuits. |

| Output shape | |
|---|---|
| tf.RaggedTensor with shape: [batch size of symbol_values, <unitary size>, <unitary size>]. |
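As a quick sanity check of this shape (a sketch; the two-qubit circuit and symbol name here are illustrative), the unitary of an n-qubit circuit is 2**n x 2**n:

```python
import cirq
import numpy as np
import sympy
import tensorflow_quantum as tfq

theta = sympy.Symbol('theta')
q0, q1 = cirq.GridQubit.rect(1, 2)
circuit = cirq.Circuit(
    cirq.H(q0),
    cirq.CNOT(q0, q1) ** theta,
)
values = np.array([[0.25], [0.5], [0.75]])
out = tfq.layers.Unitary()(
    circuit, symbol_names=[theta], symbol_values=values)
# Three parameter settings on a two-qubit circuit -> [3, 4, 4].
print(out.to_tensor().shape)  # (3, 4, 4)
```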
compute_mask
compute_mask(
inputs, previous_mask
)
compute_output_shape
compute_output_shape(
*args, **kwargs
)
compute_output_spec
compute_output_spec(
*args, **kwargs
)
count_params
count_params()
Count the total number of scalars composing the weights.
| Returns | |
|---|---|
| An integer count. |
from_config
@classmethod
from_config(
    config
)
Creates an operation from its config.
This method is the reverse of get_config, capable of instantiating the
same operation from the config dictionary.
if "dtype" in config and isinstance(config["dtype"], dict):
policy = dtype_policies.deserialize(config["dtype"])
| Args | |
|---|---|
| config | A Python dictionary, typically the output of get_config. |
| Returns | |
|---|---|
| An operation instance. |
get_build_config
get_build_config()
Returns a dictionary with the layer's input shape.
This method returns a config dict that can be used by
build_from_config(config) to create all states (e.g. Variables and
Lookup tables) needed by the layer.
By default, the config only contains the input shape that the layer was built with. If you're writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.
| Returns | |
|---|---|
| A dict containing the input shape associated with the layer. |
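For a layer whose state cannot be recreated from the input shape alone, get_build_config and build_from_config are typically overridden as a pair. A minimal sketch; the LookupLayer and its vocab_size field are hypothetical:

```python
import keras

class LookupLayer(keras.layers.Layer):
    """Hypothetical layer whose state depends on a vocabulary size."""

    def __init__(self, vocab_size, **kwargs):
        super().__init__(**kwargs)
        self.vocab_size = vocab_size

    def build(self, input_shape):
        self.table = self.add_weight(
            shape=(self.vocab_size,), initializer="zeros", name="table")

    def get_build_config(self):
        # Record everything build_from_config needs to rebuild the state.
        return {"input_shape": None, "vocab_size": self.vocab_size}

    def build_from_config(self, config):
        self.vocab_size = config["vocab_size"]
        self.build(config["input_shape"])
```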
get_config
get_config()
Returns the config of the object.
An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.
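Together with from_config above, this gives a serialization round trip. A minimal sketch, assuming Unitary forwards its keyword arguments to the base Layer unchanged:

```python
import tensorflow_quantum as tfq

layer = tfq.layers.Unitary(name='my_unitary')
config = layer.get_config()                      # serializable dict
clone = tfq.layers.Unitary.from_config(config)   # fresh, equivalent layer
assert clone.name == 'my_unitary'
```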
get_weights
get_weights()
Return the values of layer.weights as a list of NumPy arrays.
load_own_variables
load_own_variables(
store
)
Loads the state of the layer.
You can override this method to take full control of how the state of
the layer is loaded upon calling keras.models.load_model().
| Args | |
|---|---|
| store | Dict from which the state of the model will be loaded. |
quantize
quantize(
mode, type_check=True
)
quantized_build
quantized_build(
input_shape, mode
)
quantized_call
quantized_call(
*args, **kwargs
)
rematerialized_call
rematerialized_call(
layer_call, *args, **kwargs
)
Enable rematerialization dynamically for layer's call method.
| Args | |
|---|---|
| layer_call | The original call method of a layer. |

| Returns | |
|---|---|
| Rematerialized layer's call method. |
save_own_variables
save_own_variables(
store
)
Saves the state of the layer.
You can override this method to take full control of how the state of
the layer is saved upon calling model.save().
| Args | |
|---|---|
| store | Dict where the state of the model will be saved. |
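A sketch of overriding save_own_variables and load_own_variables as a pair; the layer and the "kernel" store key are illustrative, not defaults:

```python
import keras

class MyLayer(keras.layers.Layer):
    def build(self, input_shape):
        self.kernel = self.add_weight(
            shape=(input_shape[-1], 4), initializer="zeros", name="kernel")

    def save_own_variables(self, store):
        # Use an explicit key rather than the default positional indices.
        store["kernel"] = self.kernel.numpy()

    def load_own_variables(self, store):
        self.kernel.assign(store["kernel"])
```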
set_weights
set_weights(
weights
)
Sets the values of layer.weights from a list of NumPy arrays.
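get_weights and set_weights round-trip; a minimal sketch (the Dense layer is only an illustration):

```python
import keras
import numpy as np

layer = keras.layers.Dense(2)
layer.build((None, 3))                    # create kernel and bias
weights = layer.get_weights()             # list of NumPy arrays
weights = [np.zeros_like(w) for w in weights]
layer.set_weights(weights)                # write modified values back
```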
stateless_call
stateless_call(
trainable_variables,
non_trainable_variables,
*args,
return_losses=False,
**kwargs
)
Call the layer without any side effects.
| Args | |
|---|---|
| trainable_variables | List of trainable variables of the model. |
| non_trainable_variables | List of non-trainable variables of the model. |
| *args | Positional arguments to be passed to call(). |
| return_losses | If True, stateless_call() will return the list of losses created during call() as part of its return values. |
| **kwargs | Keyword arguments to be passed to call(). |

| Returns | |
|---|---|
| A tuple. By default, returns (outputs, non_trainable_variables). If return_losses = True, then returns (outputs, non_trainable_variables, losses). |
Example:
```python
model = ...
data = ...
trainable_variables = model.trainable_variables
non_trainable_variables = model.non_trainable_variables
# Call the model with zero side effects
outputs, non_trainable_variables = model.stateless_call(
    trainable_variables,
    non_trainable_variables,
    data,
)
# Attach the updated state to the model
# (until you do this, the model is still in its pre-call state).
for ref_var, value in zip(
    model.non_trainable_variables, non_trainable_variables
):
    ref_var.assign(value)
```
symbolic_call
symbolic_call(
*args, **kwargs
)
__call__
__call__(
*args, **kwargs
)
Call self as a function.