Noisy Parametrized Quantum Circuit (PQC) Layer.
tfq.layers.NoisyPQC(
model_circuit,
operators,
*,
repetitions=None,
sample_based=None,
differentiator=None,
initializer=tf.keras.initializers.RandomUniform(0, 2 * np.pi),
regularizer=None,
constraint=None,
**kwargs
)
This layer is for training noisy parameterized quantum models. Given a parameterized circuit, this layer initializes the parameters and manages them in a Keras native way.
We start by defining a simple quantum circuit on one qubit. This circuit parameterizes an arbitrary rotation on the Bloch sphere in terms of the three angles a, b, and c, along with some noise:
q = cirq.GridQubit(0, 0)
(a, b, c) = sympy.symbols("a b c")
circuit = cirq.Circuit(
    cirq.rz(a)(q),
    cirq.rx(b)(q),
    cirq.rz(c)(q),
    cirq.rx(-b)(q),
    cirq.rz(-a)(q),
    cirq.depolarize(0.01)(q))
In order to extract information from our circuit, we must apply measurement
operators. For now we choose to make a Z measurement. In order to observe
an output, we must also feed our model quantum data (NOTE: quantum data
means quantum circuits with no free parameters). Though the output values
will depend on the default random initialization of the angles in our model,
one will be the negative of the other since cirq.X(q) causes a bit flip:
outputs = tfq.layers.NoisyPQC(
    circuit,
    cirq.Z(q),
    repetitions=1000,
    sample_based=False)
quantum_data = tfq.convert_to_tensor([
    cirq.Circuit(),
    cirq.Circuit(cirq.X(q))
])
res = outputs(quantum_data)
res
<tf.Tensor: id=577, shape=(2, 1), dtype=float32, numpy=
array([[ 0.8722095],
       [-0.8722095]], dtype=float32)>
In the above example we estimate the expectation value using Monte Carlo
trajectory simulations combined with analytic expectation calculation.
To emulate the process used when sampling from a truly noisy device, we
set sample_based=True to estimate the expectation value via noisy
bitstring sampling.
measurement = [cirq.X(q), cirq.Y(q), cirq.Z(q)]
outputs = tfq.layers.NoisyPQC(
    circuit,
    measurement,
    repetitions=5000,
    sample_based=True)
quantum_data = tfq.convert_to_tensor([
    cirq.Circuit(),
    cirq.Circuit(cirq.X(q))
])
res = outputs(quantum_data)
res
<tf.Tensor: id=808, shape=(2, 3), dtype=float32, numpy=
array([[-0.38,  0.9 ,  0.14],
       [ 0.19, -0.95, -0.35]], dtype=float32)>
Unlike tfq.layers.PQC, no value for backend can be supplied in the
layer constructor. If you want to use a custom backend, please use
tfq.layers.PQC instead. A value for differentiator can also be
supplied in the constructor to indicate the differentiation scheme this
NoisyPQC layer should use. Here's how you would take the gradients of
the above example using a tfq.differentiators.ParameterShift differentiator.
measurement = [cirq.X(q), cirq.Y(q), cirq.Z(q)]
outputs = tfq.layers.NoisyPQC(
    circuit,
    measurement,
    repetitions=5000,
    sample_based=True,
    differentiator=tfq.differentiators.ParameterShift())
quantum_data = tfq.convert_to_tensor([
    cirq.Circuit(),
    cirq.Circuit(cirq.X(q))
])
res = outputs(quantum_data)
res
<tf.Tensor: id=891, shape=(2, 3), dtype=float32, numpy=
array([[-0.5956, -0.2152,  0.7756],
       [ 0.5728,  0.1944, -0.7848]], dtype=float32)>
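To actually pull gradients through the layer, here is a minimal sketch using tf.GradientTape (reusing the outputs layer and quantum_data tensor defined just above; the standard import tensorflow as tf is assumed):
with tf.GradientTape() as tape:
    res = outputs(quantum_data)
# Gradients of the expectation values with respect to the layer's
# managed circuit parameters, computed via the ParameterShift rule.
grads = tape.gradient(res, outputs.trainable_variables)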
Lastly, like all layers in TensorFlow, the NoisyPQC layer can be called on
any tf.Tensor as long as it is the right shape. This means you could
replace quantum_data with values fed in from a tf.keras.Input.
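For instance, here is a minimal sketch of driving the layer from a tf.keras.Input (reusing circuit, q, and quantum_data from the examples above, with import tensorflow as tf assumed):
quantum_input = tf.keras.Input(shape=(), dtype=tf.string)
expectations = tfq.layers.NoisyPQC(
    circuit,
    cirq.Z(q),
    repetitions=1000,
    sample_based=False)(quantum_input)
model = tf.keras.Model(inputs=quantum_input, outputs=expectations)
# The model can now be called (or compiled and fit) on tensors of
# serialized circuits such as quantum_data.
res = model(quantum_data)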
Methods
add_loss
add_loss(
loss
)
Can be called inside of the call() method to add a scalar loss.
Example:
class MyLayer(Layer):
    ...
    def call(self, x):
        self.add_loss(ops.sum(x))
        return x
add_metric
add_metric(
*args, **kwargs
)
add_variable
add_variable(
shape,
initializer,
dtype=None,
trainable=True,
autocast=True,
regularizer=None,
constraint=None,
name=None
)
Add a weight variable to the layer.
Alias of add_weight().
add_weight
add_weight(
shape=None,
initializer=None,
dtype=None,
trainable=True,
autocast=True,
regularizer=None,
constraint=None,
aggregation='none',
overwrite_with_gradient=False,
name=None
)
Add a weight variable to the layer.
| Args | |
|---|---|
| shape | Shape tuple for the variable. Must be fully-defined (no None entries). Defaults to () (scalar) if unspecified. |
| initializer | Initializer object to use to populate the initial variable value, or string name of a built-in initializer (e.g. "random_normal"). If unspecified, defaults to "glorot_uniform" for floating-point variables and to "zeros" for all other types (e.g. int, bool). |
| dtype | Dtype of the variable to create, e.g. "float32". If unspecified, defaults to the layer's variable dtype (which itself defaults to "float32" if unspecified). |
| trainable | Boolean, whether the variable should be trainable via backprop or whether its updates are managed manually. Defaults to True. |
| autocast | Boolean, whether to autocast the layer's variables when accessing them. Defaults to True. |
| regularizer | Regularizer object to call to apply a penalty on the weight. These penalties are summed into the loss function during optimization. Defaults to None. |
| constraint | Constraint object to call on the variable after any optimizer update, or string name of a built-in constraint. Defaults to None. |
| aggregation | Optional string, one of None, "none", "mean", "sum" or "only_first_replica". Annotates the variable with the type of multi-replica aggregation to be used for this variable when writing custom data parallel training loops. Defaults to "none". |
| overwrite_with_gradient | Boolean, whether to overwrite the variable with the computed gradient. This is useful for float8 training. Defaults to False. |
| name | String name of the variable. Useful for debugging purposes. |
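As an illustration of a typical add_weight call inside build() (a generic custom layer sketch, not part of NoisyPQC):
import tensorflow as tf

class MyDense(tf.keras.layers.Layer):
    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        # One trainable kernel variable mapping input features to units.
        self.kernel = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="glorot_uniform",
            trainable=True,
            name="kernel")

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel)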
build
build(
input_shape
)
Keras build function.
build_from_config
build_from_config(
config
)
Builds the layer's states with the supplied config dict.
By default, this method calls the build(config["input_shape"]) method,
which creates weights based on the layer's input shape in the supplied
config. If your config contains other information needed to load the
layer's state, you should override this method.
| Args | |
|---|---|
| config | Dict containing the input shape associated with this layer. |
call
call(
inputs
)
Keras call function.
compute_mask
compute_mask(
inputs, previous_mask
)
compute_output_shape
compute_output_shape(
*args, **kwargs
)
compute_output_spec
compute_output_spec(
*args, **kwargs
)
count_params
count_params()
Count the total number of scalars composing the weights.
| Returns | |
|---|---|
| An integer count. |
from_config
@classmethod
from_config(
    config
)
Creates an operation from its config.
This method is the reverse of get_config, capable of instantiating the
same operation from the config dictionary.
if "dtype" in config and isinstance(config["dtype"], dict):
policy = dtype_policies.deserialize(config["dtype"])
| Args | |
|---|---|
| config | A Python dictionary, typically the output of get_config. |
| Returns | |
|---|---|
| An operation instance. |
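A sketch of the typical get_config / from_config round trip, shown with a stock Keras layer since the circuit argument of NoisyPQC may need custom serialization handling:
layer = tf.keras.layers.Dense(4, activation="relu")
config = layer.get_config()
# Rebuild an equivalent (freshly initialized) layer from its config.
restored = tf.keras.layers.Dense.from_config(config)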
get_build_config
get_build_config()
Returns a dictionary with the layer's input shape.
This method returns a config dict that can be used by
build_from_config(config) to create all states (e.g. Variables and
Lookup tables) needed by the layer.
By default, the config only contains the input shape that the layer was built with. If you're writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.
| Returns | |
|---|---|
| A dict containing the input shape associated with the layer. |
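As an illustrative sketch of overriding this pair of methods, a custom layer that records extra state alongside the default input shape (the vocab_size attribute here is hypothetical):
class MyLayer(tf.keras.layers.Layer):
    def get_build_config(self):
        config = super().get_build_config()
        # Record extra information needed to rebuild this layer's state.
        config["vocab_size"] = self.vocab_size
        return config

    def build_from_config(self, config):
        self.vocab_size = config["vocab_size"]
        # Fall back to the default build-from-input-shape behavior.
        self.build(config["input_shape"])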
get_config
get_config()
Returns the config of the object.
An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.
get_weights
get_weights()
Return the values of layer.weights as a list of NumPy arrays.
load_own_variables
load_own_variables(
store
)
Loads the state of the layer.
You can override this method to take full control of how the state of
the layer is loaded upon calling keras.models.load_model().
| Args | |
|---|---|
| store | Dict from which the state of the model will be loaded. |
quantize
quantize(
mode, type_check=True
)
quantized_build
quantized_build(
input_shape, mode
)
quantized_call
quantized_call(
*args, **kwargs
)
rematerialized_call
rematerialized_call(
layer_call, *args, **kwargs
)
Enable rematerialization dynamically for the layer's call method.
| Args | |
|---|---|
| layer_call | The original call method of a layer. |
| Returns | |
|---|---|
| Rematerialized layer's call method. |
save_own_variables
save_own_variables(
store
)
Saves the state of the layer.
You can override this method to take full control of how the state of
the layer is saved upon calling model.save().
| Args | |
|---|---|
| store | Dict where the state of the model will be saved. |
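A sketch mirroring the default behavior, where each weight is stored under a string index key (override both methods together so the keys match):
class MyLayer(tf.keras.layers.Layer):
    def save_own_variables(self, store):
        # Write each weight under a string index key.
        for i, v in enumerate(self.weights):
            store[str(i)] = v.numpy()

    def load_own_variables(self, store):
        # Read the weights back in the same order they were saved.
        for i, v in enumerate(self.weights):
            v.assign(store[str(i)])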
set_weights
set_weights(
weights
)
Sets the values of layer.weights from a list of NumPy arrays.
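For example, a sketch copying parameter values between two NoisyPQC layers built on the same circuit (reusing circuit, q, and quantum_data from the examples above; each layer must be called once so its weights exist):
src = tfq.layers.NoisyPQC(circuit, cirq.Z(q), repetitions=1000, sample_based=False)
dst = tfq.layers.NoisyPQC(circuit, cirq.Z(q), repetitions=1000, sample_based=False)
_ = src(quantum_data)  # build both layers so their weights exist
_ = dst(quantum_data)
dst.set_weights(src.get_weights())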
stateless_call
stateless_call(
trainable_variables,
non_trainable_variables,
*args,
return_losses=False,
**kwargs
)
Call the layer without any side effects.
| Args | |
|---|---|
| trainable_variables | List of trainable variables of the model. |
| non_trainable_variables | List of non-trainable variables of the model. |
| *args | Positional arguments to be passed to call(). |
| return_losses | If True, stateless_call() will return the list of losses created during call() as part of its return values. |
| **kwargs | Keyword arguments to be passed to call(). |
| Returns | |
|---|---|
| A tuple. By default, returns (outputs, non_trainable_variables). If return_losses = True, then returns (outputs, non_trainable_variables, losses). |
Example:
model = ...
data = ...
trainable_variables = model.trainable_variables
non_trainable_variables = model.non_trainable_variables
# Call the model with zero side effects
outputs, non_trainable_variables = model.stateless_call(
    trainable_variables,
    non_trainable_variables,
    data,
)
# Attach the updated state to the model
# (until you do this, the model is still in its pre-call state).
for ref_var, value in zip(
    model.non_trainable_variables, non_trainable_variables
):
    ref_var.assign(value)
symbol_values
symbol_values()
Returns a Python dict containing symbol name, value pairs.
| Returns | |
|---|---|
| Python dict with str keys and float values representing the current symbol values. |
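For example, with the outputs layer from the examples above:
# Returns a dict like {'a': ..., 'b': ..., 'c': ...} holding the
# layer's current parameter values.
values = outputs.symbol_values()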
symbolic_call
symbolic_call(
*args, **kwargs
)
__call__
__call__(
*args, **kwargs
)
Call self as a function.