Copyright 2020 The TensorFlow Authors.
Calculate gradients
This tutorial explores gradient calculation algorithms for the expectation values of quantum circuits.
Calculating the gradient of the expectation value of an observable in a quantum circuit is an involved process. Unlike traditional machine learning transformations such as matrix multiplication or vector addition, expectation values of observables do not always have analytic gradient formulas that are easy to write down. As a result, different quantum gradient calculation methods come in handy in different scenarios. This tutorial compares and contrasts two differentiation schemes.
Setup
Install TensorFlow Quantum:
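The original install cell is not shown here; a minimal sketch, assuming a pip-based notebook environment (the version pins are illustrative, not prescriptive):

```python
# Run in a notebook cell; pin to whatever release matches your environment.
!pip install tensorflow==2.7.0 tensorflow-quantum==0.7.2
```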
Now import TensorFlow and the module dependencies:
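A sketch of the imports the rest of this tutorial relies on:

```python
import tensorflow as tf
import tensorflow_quantum as tfq

import cirq
import sympy
import numpy as np
import matplotlib.pyplot as plt
```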
1. Preliminary
Let's make the notion of gradient calculation for quantum circuits a little more concrete. Suppose you have a parameterized circuit like this one:
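For example, a single qubit rotated by a symbolic power of the Y gate (the names `qubit` and `my_circuit` are illustrative):

```python
qubit = cirq.GridQubit(0, 0)
my_circuit = cirq.Circuit(cirq.Y(qubit) ** sympy.Symbol('alpha'))
print(my_circuit)
```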
Along with an observable:
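Here, Pauli-X on the same qubit (the variable name `pauli_x` is illustrative):

```python
pauli_x = cirq.X(qubit)
```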
Looking at this operator, you know that

$$\langle Y(\alpha) | X | Y(\alpha) \rangle = \sin(\pi \alpha)$$

and if you define $f_{1}(\alpha) = \langle Y(\alpha) | X | Y(\alpha) \rangle$ then $f_{1}^{'}(\alpha) = \pi \cos(\pi \alpha)$. Let's check this:
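A quick numerical check, sketched with an exact state-vector simulation (the helper `my_expectation` and the value of `my_alpha` are illustrative):

```python
def my_expectation(op, alpha):
    """Compute <Y(alpha)|op|Y(alpha)> exactly via state-vector simulation."""
    params = {'alpha': alpha}
    sim = cirq.Simulator()
    final_state_vector = sim.simulate(my_circuit, params).final_state_vector
    return op.expectation_from_state_vector(final_state_vector, {qubit: 0}).real

my_alpha = 0.3
print('Expectation =', my_expectation(pauli_x, my_alpha))
print('sin(pi * alpha) =', np.sin(np.pi * my_alpha))
```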
2. The need for a differentiator
With larger circuits, you won't always be so lucky to have a formula that precisely calculates the gradients of a given quantum circuit. In the event that a simple formula isn't enough to calculate the gradient, the `tfq.differentiators.Differentiator` class allows you to define algorithms for computing the gradients of your circuits. For instance, you can recreate the above example in TensorFlow Quantum (TFQ) with:
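A sketch using a finite-difference differentiator attached to an analytic expectation layer (the grid spacing is chosen for illustration):

```python
expectation_calculation = tfq.layers.Expectation(
    differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))

expectation_calculation(my_circuit,
                        operators=pauli_x,
                        symbol_names=['alpha'],
                        symbol_values=[[my_alpha]])
```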
However, if you switch to estimating expectation values based on sampling (what would happen on a true device), the values can change a little bit, meaning you now have an imperfect estimate:
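The same calculation with a sampling-based layer, using a modest number of repetitions for illustration:

```python
sampled_expectation_calculation = tfq.layers.SampledExpectation(
    differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))

sampled_expectation_calculation(my_circuit,
                                operators=pauli_x,
                                repetitions=500,
                                symbol_names=['alpha'],
                                symbol_values=[[my_alpha]])
```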
This can quickly compound into a serious accuracy problem when it comes to gradients:
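A sketch comparing the analytic and sampled finite-difference gradients against the true value $\pi \cos(\pi \alpha)$:

```python
values_tensor = tf.convert_to_tensor([[my_alpha]])

with tf.GradientTape() as g:
    g.watch(values_tensor)
    exact_output = expectation_calculation(my_circuit,
                                           operators=pauli_x,
                                           symbol_names=['alpha'],
                                           symbol_values=values_tensor)
analytic_grad = g.gradient(exact_output, values_tensor)

with tf.GradientTape() as g:
    g.watch(values_tensor)
    noisy_output = sampled_expectation_calculation(my_circuit,
                                                   operators=pauli_x,
                                                   repetitions=500,
                                                   symbol_names=['alpha'],
                                                   symbol_values=values_tensor)
sampled_grad = g.gradient(noisy_output, values_tensor)

print('Analytic gradient:', analytic_grad.numpy())
print('Sampled gradient: ', sampled_grad.numpy())
print('True gradient:    ', np.pi * np.cos(np.pi * my_alpha))
```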
Here you can see that although the finite-difference formula is fast at computing the gradients in the analytical case, it is far too noisy for the sampling-based methods. More careful techniques must be used to ensure a good gradient can be calculated. Next you will look at a much slower technique that isn't as well suited for analytical expectation gradient calculations, but performs much better in the real-world, sample-based case:
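For example, the parameter-shift differentiator paired with the same sampling-based layer (a sketch, reusing `values_tensor` from above):

```python
gradient_safe_sampled_expectation = tfq.layers.SampledExpectation(
    differentiator=tfq.differentiators.ParameterShift())

with tf.GradientTape() as g:
    g.watch(values_tensor)
    noisy_output = gradient_safe_sampled_expectation(
        my_circuit,
        operators=pauli_x,
        repetitions=500,
        symbol_names=['alpha'],
        symbol_values=values_tensor)
print(g.gradient(noisy_output, values_tensor).numpy())
```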
From the above you can see that certain differentiators are best suited to particular research scenarios. In general, the slower sample-based methods that are robust to device noise make great differentiators when testing or implementing algorithms in a more "real world" setting. Faster methods like finite difference are great for analytical calculations where you want higher throughput, but aren't yet concerned with the device viability of your algorithm.
3. Multiple observables
Let's introduce a second observable and see how TensorFlow Quantum supports multiple observables for a single circuit.
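Here, Pauli-Z on the same qubit:

```python
pauli_z = cirq.Z(qubit)
```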
If this observable is used with the same circuit as before, then you have $f_{2}(\alpha) = \langle Y(\alpha) | Z | Y(\alpha) \rangle = \cos(\pi \alpha)$ and $f_{2}^{'}(\alpha) = -\pi \sin(\pi \alpha)$. Perform a quick check:
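Reusing the illustrative `my_expectation` helper from above:

```python
print('Expectation =', my_expectation(pauli_z, my_alpha))
print('cos(pi * alpha) =', np.cos(np.pi * my_alpha))
```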
It's a match (close enough).
Now if you define $g(\alpha) = f_{1}(\alpha) + f_{2}(\alpha)$ then $g^{'}(\alpha) = \pi \cos(\pi \alpha) - \pi \sin(\pi \alpha)$. Defining more than one observable in TensorFlow Quantum to use along with a circuit is equivalent to adding more terms to $g$.
This means that the gradient of a particular symbol in a circuit is equal to the sum of the gradients with respect to each observable for that symbol applied to that circuit. This is compatible with TensorFlow's gradient taking and backpropagation (where you give the sum of the gradients over all observables as the gradient for a particular symbol).
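For example, passing both observables to a single expectation layer; the layer returns one expectation value per operator (a sketch with illustrative names):

```python
sum_of_outputs = tfq.layers.Expectation(
    differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))

sum_of_outputs(my_circuit,
               operators=[pauli_x, pauli_z],
               symbol_names=['alpha'],
               symbol_values=[[my_alpha]])
```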
Here you see the first entry is the expectation w.r.t. Pauli X, and the second is the expectation w.r.t. Pauli Z. Now when you take the gradient:
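A sketch of taking that gradient with `tf.GradientTape`; since the output has two entries, TensorFlow sums their gradients with respect to `alpha`:

```python
values_tensor = tf.convert_to_tensor([[my_alpha]])

with tf.GradientTape() as g:
    g.watch(values_tensor)
    outputs = sum_of_outputs(my_circuit,
                             operators=[pauli_x, pauli_z],
                             symbol_names=['alpha'],
                             symbol_values=values_tensor)

sum_of_gradients = g.gradient(outputs, values_tensor)
print('Sum of gradients:', sum_of_gradients.numpy())
print("g'(alpha):       ",
      np.pi * np.cos(np.pi * my_alpha) - np.pi * np.sin(np.pi * my_alpha))
```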
Here you have verified that the sum of the gradients for each observable is indeed the gradient of $g$. This behavior is supported by all TensorFlow Quantum differentiators and plays a crucial role in the compatibility with the rest of TensorFlow.
4. Advanced usage
All differentiators that exist inside of TensorFlow Quantum subclass `tfq.differentiators.Differentiator`. To implement a differentiator, a user must implement one of two interfaces. The standard is to implement `get_gradient_circuits`, which tells the base class which circuits to measure to obtain an estimate of the gradient. Alternatively, you can overload `differentiate_analytic` and `differentiate_sampled`; the class `tfq.differentiators.Adjoint` takes this route.
The following uses TensorFlow Quantum to implement the gradient of a circuit. You will use a small example of parameter shifting.
Recall the circuit you defined above, $|\alpha\rangle = Y^{\alpha}|0\rangle$. As before, you can define a function as the expectation value of this circuit against the $X$ observable, $f(\alpha) = \langle \alpha | X | \alpha \rangle$. Using parameter shift rules, for this circuit you can find that the derivative is

$$\frac{\partial}{\partial \alpha} f(\alpha) = \frac{\pi}{2} f\left(\alpha + \frac{1}{2}\right) - \frac{\pi}{2} f\left(\alpha - \frac{1}{2}\right).$$

The `get_gradient_circuits` function returns the components of this derivative.
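A sketch of such a differentiator, specialized to this one-parameter circuit; a general-purpose implementation would need to handle arbitrary circuits (see the differentiators in the TFQ source for full examples):

```python
class MyDifferentiator(tfq.differentiators.Differentiator):
    """A toy differentiator for <Y^alpha | X | Y^alpha>."""

    def __init__(self):
        pass

    def get_gradient_circuits(self, programs, symbol_names, symbol_values):
        """Return circuits and weights that assemble into the gradient."""
        # The two terms in the derivative use the same circuit...
        batch_programs = tf.stack([programs, programs], axis=1)

        # ...evaluated at parameter values shifted by +/- 1/2.
        shift = tf.constant(1 / 2)
        forward = symbol_values + shift
        backward = symbol_values - shift
        batch_symbol_values = tf.stack([forward, backward], axis=1)

        # Weights are the coefficients of the terms in the derivative.
        num_program_copies = tf.shape(programs)[0]
        batch_weights = tf.tile(tf.constant([[[np.pi / 2, -np.pi / 2]]]),
                                [num_program_copies, 1, 1])

        # The mapper says which weighted expectations sum into which
        # gradient entry.
        batch_mapper = tf.tile(tf.constant([[[0, 1]]]),
                               [num_program_copies, 1, 1])

        return (batch_programs, symbol_names, batch_symbol_values,
                batch_weights, batch_mapper)
```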
The `Differentiator` base class uses the components returned from `get_gradient_circuits` to calculate the derivative, as in the parameter shift formula above. This new differentiator can now be used with existing `tfq.layers` objects:
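For example, continuing with the illustrative names from above:

```python
custom_dif = MyDifferentiator()
custom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif)

with tf.GradientTape() as g:
    g.watch(values_tensor)
    my_outputs = custom_grad_expectation(my_circuit,
                                         operators=[pauli_x],
                                         symbol_names=['alpha'],
                                         symbol_values=values_tensor)

print(g.gradient(my_outputs, values_tensor).numpy())
print('True gradient:', np.pi * np.cos(np.pi * my_alpha))
```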
This new differentiator can now be used to generate differentiable ops.
Key Point: A differentiator that has been previously attached to an op must be refreshed before attaching to a new op, because a differentiator may only be attached to one op at a time.
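A sketch of attaching the differentiator to a sampling-based op (the depolarizing noise model is illustrative):

```python
# Create a noisy, sample-based expectation op.
expectation_sampled = tfq.get_sampled_expectation_op(
    cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01)))

# Refresh the differentiator before attaching it to the new op.
custom_dif.refresh()
custom_sampled_op = custom_dif.generate_differentiable_op(
    sampled_op=expectation_sampled)
```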
Success: Now you can use all the differentiators that TensorFlow Quantum has to offer—and define your own.