Differentiates a circuit using Central Differencing.

Inherits From: LinearCombination, Differentiator
tfq.differentiators.CentralDifference(
    error_order=2, grid_spacing=0.001
)
Central differencing computes a derivative at point x using an equal number of points before and after x. A closed form for the coefficients of this derivative, valid for an arbitrary positive error order, is used here; it is described in the following article: https://www.sciencedirect.com/science/article/pii/S0377042799000886
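For intuition, here is a plain-NumPy sketch (illustrative only, independent of TFQ) of the simplest case, the two-point, error-order-2 central difference; its truncation error shrinks as the square of the grid spacing:

import numpy as np

def central_diff(f, x, h):
    # Two-point central difference: f'(x) ~ (f(x + h) - f(x - h)) / (2 * h).
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.5
for h in (0.1, 0.01, 0.001):
    # The error against the exact derivative cos(x) shrinks roughly as h**2.
    print(h, abs(central_diff(np.sin, x, h) - np.cos(x)))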
import cirq
import numpy as np
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

my_op = tfq.get_expectation_op()
linear_differentiator = tfq.differentiators.CentralDifference(2, 0.01)
# Get an expectation op, with this differentiator attached.
op = linear_differentiator.generate_differentiable_op(
    analytic_op=my_op
)
qubit = cirq.GridQubit(0, 0)
circuit = tfq.convert_to_tensor([
    cirq.Circuit(cirq.X(qubit) ** sympy.Symbol('alpha'))
])
psums = tfq.convert_to_tensor([[cirq.Z(qubit)]])
symbol_values_array = np.array([[0.123]], dtype=np.float32)
# Calculate tfq gradient.
symbol_values_tensor = tf.convert_to_tensor(symbol_values_array)
with tf.GradientTape() as g:
    g.watch(symbol_values_tensor)
    expectations = op(circuit, ['alpha'], symbol_values_tensor, psums)
# Gradient is the two-point central difference:
# 50 * f(x + 0.01) - 50 * f(x - 0.01)
grads = g.gradient(expectations, symbol_values_tensor)
grads
tf.Tensor([[-1.1837807]], shape=(1, 1), dtype=float32)
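As a sanity check (an addition, not part of the original example): the circuit `X**alpha` has Z-expectation cos(pi * alpha), so the exact gradient is -pi * sin(pi * alpha), which at alpha = 0.123 is about -1.1840, in close agreement with the finite-difference value above:

import numpy as np

# Exact derivative of cos(pi * alpha) at alpha = 0.123.
print(-np.pi * np.sin(np.pi * 0.123))  # ~ -1.1840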
Args | |
---|---|
`error_order` | A positive, even int specifying the error order of this differentiator. This corresponds to the smallest power of `grid_spacing` remaining in the series that was truncated to generate this finite-differencing expression.
`grid_spacing` | A positive float specifying how large a grid spacing to use in calculating this finite difference.
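As a quick illustration of these arguments (a sketch, not from the original docs): truncation error scales like `grid_spacing ** error_order`, while a very small `grid_spacing` amplifies any noise in the evaluated expectations, so the two trade off against each other.

import tensorflow_quantum as tfq

# Defaults: error_order=2, grid_spacing=0.001.
coarse = tfq.differentiators.CentralDifference()
# Fourth-order accurate on a wider grid; error_order must be a positive, even int.
fine = tfq.differentiators.CentralDifference(error_order=4, grid_spacing=0.01)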
Methods
differentiate_analytic

@tf.function
differentiate_analytic(
    programs, symbol_names, symbol_values, pauli_sums, forward_pass_vals, grad
)

differentiate_sampled

@tf.function
differentiate_sampled(
    programs, symbol_names, symbol_values, pauli_sums, num_samples,
    forward_pass_vals, grad
)
generate_differentiable_op
generate_differentiable_op(
    *, sampled_op=None, analytic_op=None
)
Generate a differentiable op by attaching self to an op.

This function returns a `tf.function` that passes values through to `forward_op` during the forward pass, and that uses this differentiator (`self`) to backpropagate through the op during the backward pass. If `sampled_op` is provided, the differentiator's `differentiate_sampled` method will be invoked (which requires `sampled_op` to be a sample-based expectation op with a `num_samples` input tensor). If `analytic_op` is provided, the differentiator's `differentiate_analytic` method will be invoked (which requires `analytic_op` to be an analytic expectation op that does NOT take `num_samples` as an input). If both `sampled_op` and `analytic_op` are provided, an exception will be raised.

This `generate_differentiable_op()` can be called only ONCE, because of the one-differentiator-per-op policy. You need to call `refresh()` to reuse this differentiator with another op.
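For comparison with the analytic example above, a minimal sketch of the sampled path, reusing `circuit`, `psums`, and `symbol_values_tensor` from the earlier example (assuming the standard `tfq.get_sampled_expectation_op`; a sampled op additionally takes a `num_samples` tensor):

sampled_op = tfq.get_sampled_expectation_op()
sampled_diff = tfq.differentiators.CentralDifference(2, 0.01)
op = sampled_diff.generate_differentiable_op(sampled_op=sampled_op)
num_samples = tf.convert_to_tensor([[1000]])  # one entry per (circuit, pauli_sum)
with tf.GradientTape() as g:
    g.watch(symbol_values_tensor)
    expectations = op(circuit, ['alpha'], symbol_values_tensor, psums, num_samples)
grads = g.gradient(expectations, symbol_values_tensor)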
Args | |
---|---|
`sampled_op` | A callable op that you want to make differentiable using this differentiator's `differentiate_sampled` method.
`analytic_op` | A callable op that you want to make differentiable using this differentiator's `differentiate_analytic` method.
Returns | |
---|---|
A callable op whose gradients are now registered to be a call to this differentiator's `differentiate_*` function.
refresh
refresh()
Refresh this differentiator in order to use it with other ops.
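A minimal sketch of the refresh pattern (names are illustrative):

diff = tfq.differentiators.CentralDifference()
op_a = diff.generate_differentiable_op(analytic_op=tfq.get_expectation_op())
# One differentiator per op: refresh before attaching to a second op.
diff.refresh()
op_b = diff.generate_differentiable_op(analytic_op=tfq.get_expectation_op())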