Differentiates a circuit using Central Differencing.
tfq.differentiators.CentralDifference(error_order=2, grid_spacing=0.001)
Central differencing computes a derivative at point x using an equal number of points before and after x. A closed form for the coefficients of this derivative for an arbitrary positive error order is used here, which is described in the following article: https://www.sciencedirect.com/science/article/pii/S0377042799000886.
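For intuition, here is a plain-NumPy sketch (illustration only, not TFQ code) of the simplest case: with error_order=2 the closed form reduces to the familiar two-point rule (f(x + h) - f(x - h)) / (2 * h), where h is the grid spacing. The test function and point below are arbitrary choices.

import numpy as np

def central_diff(f, x, h=0.001):
    # Two-point central difference; truncation error shrinks as O(h**2).
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 0.5
print(central_diff(np.sin, x))  # ~0.877582, vs. the exact np.cos(0.5)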
my_op = tfq.get_expectation_op()
linear_differentiator = tfq.differentiators.CentralDifference(2, 0.01)
# Get an expectation op, with this differentiator attached.
op = linear_differentiator.generate_differentiable_op(
    analytic_op=my_op
)
qubit = cirq.GridQubit(0, 0)
circuit = tfq.convert_to_tensor([
    cirq.Circuit(cirq.X(qubit) ** sympy.Symbol('alpha'))
])
psums = tfq.convert_to_tensor([[cirq.Z(qubit)]])
symbol_values_array = np.array([[0.123]], dtype=np.float32)
# Calculate tfq gradient.
symbol_values_tensor = tf.convert_to_tensor(symbol_values_array)
with tf.GradientTape() as g:
    g.watch(symbol_values_tensor)
    expectations = op(circuit, ['alpha'], symbol_values_tensor, psums)
# Gradient would be: 50 * f(x + 0.01) - 50 * f(x - 0.01)
grads = g.gradient(expectations, symbol_values_tensor)
print(grads)
# tf.Tensor([[-1.1837807]], shape=(1, 1), dtype=float32)
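As a quick sanity check (illustration only): for X**alpha measured against Z, the analytic expectation is cos(pi * alpha), so the exact gradient at alpha = 0.123 is -pi * sin(pi * 0.123), and the finite-difference value above matches it to roughly grid_spacing**2.

exact_grad = -np.pi * np.sin(np.pi * 0.123)
print(exact_grad)  # ~ -1.18396; the CentralDifference estimate agrees to ~2e-4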
Args
error_order: A positive, even int specifying the error order of this differentiator. This corresponds to the smallest power of grid_spacing remaining in the series that was truncated to generate this finite differencing expression.
grid_spacing: A positive float specifying how large of a grid to use in calculating this finite difference.
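To make error_order concrete, here is a hedged NumPy sketch using the standard textbook central-difference weights for orders 2 and 4 (not pulled from the TFQ implementation): a fourth-order rule also evaluates points at +/- 2h, and its truncation error falls as O(h**4) rather than O(h**2).

import numpy as np

def o2(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

def o4(f, x, h):
    # Standard 4th-order stencil: (-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)) / (12h)
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

x, h = 0.5, 0.01
print(abs(o2(np.sin, x, h) - np.cos(x)))  # error ~ 1e-5 (O(h**2))
print(abs(o4(np.sin, x, h) - np.cos(x)))  # error ~ 1e-10 (O(h**4))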
differentiate_analytic(programs, symbol_names, symbol_values, pauli_sums, forward_pass_vals, grad)
differentiate_sampled(programs, symbol_names, symbol_values, pauli_sums, num_samples, forward_pass_vals, grad)
generate_differentiable_op(sampled_op=None, analytic_op=None)
Generate a differentiable op by attaching self to an op.
This function returns a tf.function that passes values through to forward_op during the forward pass and this differentiator (self) to backpropagate through the op during the backward pass. If sampled_op is provided, the differentiator's differentiate_sampled method will be invoked (which requires sampled_op to be a sample-based expectation op with a num_samples input tensor). If analytic_op is provided, the differentiate_analytic method will be invoked (which requires analytic_op to be an analytic expectation op that does NOT have num_samples as an input). If both sampled_op and analytic_op are provided, an exception will be raised.
generate_differentiable_op() can be called only ONCE because of the one-differentiator-per-op policy. You need to call refresh() to reuse this differentiator with another op.
Args
sampled_op: A callable op that you want to make differentiable using this differentiator's differentiate_sampled method.
analytic_op: A callable op that you want to make differentiable using this differentiator's differentiate_analytic method.
Returns
A callable op whose gradients are now registered to be a call to this differentiator's differentiate_* function.
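For the sampled path, a minimal sketch (assuming the standard tfq.get_sampled_expectation_op(), which takes an additional num_samples tensor, and reusing circuit, psums, and symbol_values_tensor from the example at the top of this page):

sampled_differentiator = tfq.differentiators.CentralDifference(2, 0.01)
sampled_op = sampled_differentiator.generate_differentiable_op(
    sampled_op=tfq.get_sampled_expectation_op()
)
num_samples = tf.convert_to_tensor([[1000]])
with tf.GradientTape() as g:
    g.watch(symbol_values_tensor)
    expectations = sampled_op(
        circuit, ['alpha'], symbol_values_tensor, psums, num_samples)
# Finite differences of sampled expectations are stochastic, so this
# gradient estimate will be noisy at finite num_samples.
grads = g.gradient(expectations, symbol_values_tensor)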
refresh()
Refresh this differentiator in order to use it with other ops.
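A short sketch of the reuse pattern described above:

diff = tfq.differentiators.CentralDifference()
op_a = diff.generate_differentiable_op(analytic_op=tfq.get_expectation_op())
# Attaching the same differentiator to a second op would raise an error,
# so detach it first:
diff.refresh()
op_b = diff.generate_differentiable_op(analytic_op=tfq.get_expectation_op())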