
tfq.differentiators.LinearCombination

View source on GitHub

Differentiate a circuit with respect to its inputs by linearly combining values obtained by evaluating the op using parameter values perturbed about their forward-pass values.

Inherits From: Differentiator

tfq.differentiators.LinearCombination(
    weights, perturbations
)

import cirq
import numpy as np
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

my_op = tfq.get_expectation_op()
weights = [5, 6, 7]
perturbations = [0, 0.5, 0.25]
linear_differentiator = tfq.differentiators.LinearCombination(
    weights, perturbations
)
# Get an expectation op, with this differentiator attached.
op = linear_differentiator.generate_differentiable_op(
    analytic_op=my_op
)
qubit = cirq.GridQubit(0, 0)
circuit = tfq.convert_to_tensor([
    cirq.Circuit(cirq.X(qubit) ** sympy.Symbol('alpha'))
])
psums = tfq.convert_to_tensor([[cirq.Z(qubit)]])
symbol_values_array = np.array([[0.123]], dtype=np.float32)
# Calculate tfq gradient.
symbol_values_tensor = tf.convert_to_tensor(symbol_values_array)
with tf.GradientTape() as g:
    g.watch(symbol_values_tensor)
    expectations = op(circuit, ['alpha'], symbol_values_tensor, psums)
# Gradient would be: 5 * f(x+0) + 6 * f(x+0.5) + 7 * f(x+0.25)
grads = g.gradient(expectations, symbol_values_tensor)
# Note: this gradient isn't correct in value, but it showcases
# the principle of how gradients can be defined in a very flexible
# fashion.
print(grads)
# tf.Tensor([[5.089467]], shape=(1, 1), dtype=float32)
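
For intuition, a standard central finite difference is one familiar special case of this scheme: the gradient estimate (f(x + eps) - f(x - eps)) / (2 * eps) is a linear combination with weights [1/(2*eps), -1/(2*eps)] applied to perturbations [eps, -eps]. A minimal sketch, continuing from the imports above (the eps value is illustrative, not part of the API):

eps = 0.01
# Central difference as a LinearCombination: combine evaluations at
# x + eps and x - eps with weights +1/(2*eps) and -1/(2*eps).
central_difference = tfq.differentiators.LinearCombination(
    weights=[1 / (2 * eps), -1 / (2 * eps)],
    perturbations=[eps, -eps]
)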

Args:

  • weights: Python list of real numbers representing linear combination coefficients for each perturbed function evaluation.
  • perturbations: Python list of real numbers representing perturbation values.

Methods

differentiate_analytic

View source

differentiate_analytic(
    programs, symbol_names, symbol_values, pauli_sums, forward_pass_vals, grad
)

differentiate_sampled

View source

differentiate_sampled(
    programs, symbol_names, symbol_values, pauli_sums, num_samples,
    forward_pass_vals, grad
)

generate_differentiable_op

View source

generate_differentiable_op(
    sampled_op=None, analytic_op=None
)

Generate a differentiable op by attaching self to an op.

This function returns a tf.function that passes values through to forward_op during the forward pass, and uses this differentiator (self) to backpropagate through the op during the backward pass. If sampled_op is provided, the differentiator's differentiate_sampled method will be invoked (which requires sampled_op to be a sample-based expectation op with a num_samples input tensor). If analytic_op is provided, the differentiator's differentiate_analytic method will be invoked (which requires analytic_op to be an analytic expectation op that does NOT have num_samples as an input). If both sampled_op and analytic_op are provided, an exception will be raised.
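
For example, here is a minimal sketch of attaching a LinearCombination differentiator to a sample-based expectation op instead of the analytic op used above (the weights and perturbations are illustrative):

sampled_op = tfq.get_sampled_expectation_op()
diff = tfq.differentiators.LinearCombination(
    weights=[1.0, -1.0], perturbations=[0.5, -0.5]
)
# The resulting op expects an additional num_samples input tensor.
differentiable_sampled_op = diff.generate_differentiable_op(
    sampled_op=sampled_op
)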

CAUTION

generate_differentiable_op() can only be called ONCE per differentiator, because of the one-differentiator-per-op policy. Call refresh() to reuse this differentiator with another op.

Args:

  • sampled_op: A callable op that you want to make differentiable using this differentiator's differentiate_sampled method.
  • analytic_op: A callable op that you want to make differentiable using this differentiator's differentiate_analytic method.

Returns:

A callable op whose gradients are now registered to be a call to this differentiator's differentiate_* function.

refresh

View source

refresh()

Refresh this differentiator in order to use it with other ops.
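
For example, a minimal sketch of reusing one differentiator instance with two different ops (the variable names are illustrative):

diff = tfq.differentiators.LinearCombination(
    weights=[1.0, -1.0], perturbations=[0.5, -0.5]
)
op_a = diff.generate_differentiable_op(analytic_op=tfq.get_expectation_op())
# Attaching diff to a second op without refreshing is disallowed by the
# one-differentiator-per-op policy, so release it first.
diff.refresh()
op_b = diff.generate_differentiable_op(analytic_op=tfq.get_expectation_op())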