QEC 101
Lab 2 - Stabilizers, the Shor code, and the Steane code
This lab introduces the stabilizer formalism, a powerful tool for working with more sophisticated quantum error correction (QEC) codes. After a brief introduction to the theory, the lab will walk through the Shor and Steane codes with interactive coding exercises.
This lab was motivated by content from "Quantum Error Correction: an Introductory Guide" and "Quantum Error Correction for Dummies", both excellent resources we refer readers to for additional detail. For a more technical introduction, see chapter 10 of "Quantum Computation and Quantum Information" or the PhD thesis where the concept of stabilizer codes was introduced.
This is the second lab in the QEC series. If you are not familiar with the basics of classical or quantum error correction (EC), please complete the first lab in this series.
The list below outlines what you'll be doing in each section of this lab:
2.1 Define stabilizers and why they are important
2.2 Interactively learn and code the Steane code in CUDA-Q
2.3 Perform Steane code capacity analysis with CUDA-QX
2.4 Interactively learn and code the Shor code in CUDA-Q
Lab 2 Learning Objectives:
Understand what a stabilizer is, how it works, and why it is important
Understand the approach of the Shor and Steane codes
Understand logical operators
Code the Shor and Steane codes in CUDA-Q
Execute the cells below to load all the necessary packages for this lab.
2.1 Stabilizers and Logical Operators
An important subclass of QEC codes, known as stabilizer codes, uses special operations called stabilizers to clean up errors in encoded quantum information, and thus "stabilize" the state.
An operation $S$ acting on a state $|\psi\rangle$ is said to be a stabilizer of the state if the state is a +1 eigenstate of the operation: $S|\psi\rangle = +1|\psi\rangle$. The high-level intuition here is that if small errors have accumulated in a logically encoded state, the action of measuring this stabilizer is to project the state back to a perfectly error-free state, and we measure $+1$. Sometimes larger errors occur, and we do not measure $+1$, which informs us that something has gone wrong.
In lab 1, the codespace was defined by the set of basis codewords, such as $|000\rangle$ and $|111\rangle$ for the 3-qubit quantum repetition code. In that lab the codewords were provided to you for each code, but in a stabilizer code we can equivalently define the codespace by providing the stabilizers which stabilize each basis codeword. In practice, this process of defining a code by its stabilizers is much more efficient and scalable as the codes grow larger.
The codespace can be defined as the set of all $|\psi\rangle$ such that $S_i|\psi\rangle = |\psi\rangle$ for each $S_i$, where the $S_i$ are stabilizers which form a group $\mathcal{S}$ (note: in some texts this group, rather than its elements, is called the stabilizer). That is, the codespace is the joint +1 eigenspace fixed by the stabilizers.
Again, in lab 1 we were given the codespace and error space for the 3-qubit quantum repetition code up front. However, let's think about working backwards from the computational basis states of the 3-qubit Hilbert space. These can be sorted based on the eigenvalues returned when operated on by the elements of the stabilizer group generated by $Z_0Z_1$ and $Z_1Z_2$:
| Basis state | Eigenvalue for $Z_0Z_1$ | Eigenvalue for $Z_1Z_2$ |
| ----------- | ----------- | ---------- |
| $\vert 000\rangle$ | 1 | 1 |
| $\vert 001\rangle$ | 1 | -1 |
| $\vert 010\rangle$ | -1 | -1 |
| $\vert 011\rangle$ | -1 | 1 |
| $\vert 100\rangle$ | -1 | 1 |
| $\vert 101\rangle$ | -1 | -1 |
| $\vert 110\rangle$ | 1 | -1 |
| $\vert 111\rangle$ | 1 | 1 |
The basis states that have an eigenvalue of 1 for both $Z_0Z_1$ and $Z_1Z_2$ make up the codespace: $|0\rangle_L = |000\rangle$ and $|1\rangle_L = |111\rangle$. There are other valid stabilizers in this code, such as $Z_0Z_2$, but any stabilizer group can be boiled down to a minimal set of generators which can be multiplied together to produce all of the others.
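If you would like to verify the table above numerically, the short numpy sketch below (not one of the lab's provided cells) builds $Z_0Z_1$ and $Z_1Z_2$ as matrices and prints the eigenvalue of every computational basis state.

```python
import numpy as np

# Single-qubit operators
I = np.eye(2)
Z = np.array([[1, 0], [0, -1]])

# Stabilizer generators of the 3-qubit repetition code
Z0Z1 = np.kron(np.kron(Z, Z), I)
Z1Z2 = np.kron(I, np.kron(Z, Z))

# Every computational basis state is an eigenstate of both operators,
# so the expectation value equals the eigenvalue.
for idx in range(8):
    state = np.zeros(8)
    state[idx] = 1.0
    ev1 = state @ Z0Z1 @ state
    ev2 = state @ Z1Z2 @ state
    print(f"|{idx:03b}>  Z0Z1: {ev1:+.0f}  Z1Z2: {ev2:+.0f}")
```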
This is a really powerful approach, because it eliminates the need to derive and document the basis of the codespace in advance. Instead, one can simply define an appropriate set of stabilizers to establish the codespace.
Stabilizer codes are usually characterized as $[[n,k,d]]$ (double brackets for quantum codes), where $n$ is the number of physical qubits encoding $k$ logical qubits with distance $d$. It is always the case that these codes require $n-k$ stabilizer generators. The reason for this is that each stabilizer generator splits the Hilbert space in two (its +1 and -1 eigenspaces), so after imposing all $n-k$ of them, $k$ qubits' worth of degrees of freedom remain to define the logical qubits.
The trick then becomes finding good sets of stabilizers that correspond to QEC codes with favorable properties.
Stabilizer Properties
Three key properties for stabilizers:
Here we consider only Pauli product stabilizers; that is, each $S_i$ needs to be an element of the $n$-qubit Pauli group $\mathcal{P}_n$. The $n$-qubit Pauli group is a special group constructed from the Pauli matrices:
$$I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
The group $\mathcal{P}_n$ consists of $4^{n+1}$ elements and is built by first forming all possible length-$n$ Pauli words. For example, $\mathcal{P}_2$ has terms like $XX$, $XZ$, $IY$, etc. The group is then closed by including the possible scalar coefficients $\{\pm 1, \pm i\}$ which arise from multiplying these terms. So $\mathcal{P}_1$ would be $\{\pm I, \pm iI, \pm X, \pm iX, \pm Y, \pm iY, \pm Z, \pm iZ\}$.
Each $S_i$ must be able to operate on every logical state $|\psi\rangle_L$. Furthermore, this action should leave each $|\psi\rangle_L$ fixed (i.e., the eigenvalue of $S_i$ should be +1 for each logical state).
Stabilizers need to be measurable in any order. This means each stabilizer needs to commute with every other stabilizer (that is, $[S_i, S_j] = 0$, or equivalently $S_iS_j = S_jS_i$). A quick way to check this for Pauli words is sketched below.
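Two Pauli words commute exactly when they act with different non-identity Paulis on an even number of positions. The small helper below (not part of the lab's provided cells) uses that fact; the string representation is just an illustrative convention, and the same function is handy for checking the logical operator properties discussed next.

```python
# Helper: two Pauli words commute iff they differ (with both non-identity)
# on an even number of positions.
def paulis_commute(p1: str, p2: str) -> bool:
    """p1 and p2 are equal-length strings over 'I', 'X', 'Y', 'Z'."""
    anticommuting_sites = sum(
        1 for a, b in zip(p1, p2) if a != "I" and b != "I" and a != b
    )
    return anticommuting_sites % 2 == 0

# The repetition-code stabilizer generators commute with each other ...
print(paulis_commute("ZZI", "IZZ"))  # True
# ... while a logical X / logical Z pair for that code anticommutes.
print(paulis_commute("XXX", "ZII"))  # False
```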
Logical Operators
In addition to stabilizers, each code has logical operators ($\bar{X}$ and $\bar{Z}$) which perform $X$ and $Z$ operations on the logical states. For example: $\bar{X}|0\rangle_L = |1\rangle_L$ and $\bar{Z}|1\rangle_L = -|1\rangle_L$.
These logical operators must satisfy two properties.
$\bar{X}$ and $\bar{Z}$ commute with all stabilizers.
$\bar{X}$ and $\bar{Z}$ must anticommute with one another if acting on the same logical qubit (i.e., $\bar{X}_i\bar{Z}_i = -\bar{Z}_i\bar{X}_i$ for all $i$).
The next few sections will make use of stabilizers to solidify these concepts and enable coding of QEC codes far more interesting than the quantum repetition code.
2.2 The Steane Code
The Steane code is a famous QEC code that is the quantum analogue of the [7,4,3] Hamming code introduced in the first QEC lab. One immediate difference is that the Steane code encodes a single logical qubit, making it a [[7,1,3]] code.
Remember that the Hamming code adds additional parity bits that help "triangulate" where an error occurred. In the lab 1 exercises you constructed the generator matrix $G$ and used it to produce the logical codewords of the classical Hamming code; a 4-bit message was encoded into a 7-bit codeword by multiplying it by $G$.
Any logically encoded state, $\mathbf{c}$, could then be multiplied by the parity check matrix ($H$) to determine whether or not any syndromes were triggered.
What was not discussed in the Hamming code section was the fact that the parity check matrix ($H$) can be used to define the codespace. A valid logical codeword is any bitstring $\mathbf{c}$ that satisfies the relationship $H\mathbf{c} = \mathbf{0} \pmod 2$. As there are 7 data bits, there are $2^7 = 128$ possible encoded states between the codespace and error space. It turns out that 16 of these fall within the codespace and 112 are in the error space. Of the 16 in the codespace, 8 have even parity (an even number of 1's) while the other half have odd parity.
| Even Bitstrings in Codespace | Odd Bitstrings in Codespace |
| ----------- | ----------- |
| 0000000 | 1111111 |
| 0001111 | 1110000 |
| 0110110 | 1001001 |
| 0111001 | 1000110 |
| 1010101 | 0101010 |
| 1011010 | 0100101 |
| 1100011 | 0011100 |
| 1101100 | 0010011 |
This provides us a way to define the logical codewords $|0\rangle_L$ and $|1\rangle_L$. The logical codewords for the Steane code are equal superpositions over the states corresponding to the classical even and odd codewords, respectively: $|0\rangle_L = \frac{1}{\sqrt{8}}\sum_{\text{even } c}|c\rangle$ and $|1\rangle_L = \frac{1}{\sqrt{8}}\sum_{\text{odd } c}|c\rangle$.
You might notice by inspection that $X_0X_1X_2X_3X_4X_5X_6|0\rangle_L = |1\rangle_L$. That is to say, flipping all the bits swaps the logical states, so $\bar{X} = X_0X_1X_2X_3X_4X_5X_6$. Similarly, $\bar{Z} = Z_0Z_1Z_2Z_3Z_4Z_5Z_6$ will flip the phase, transforming $|1\rangle_L$ to $-|1\rangle_L$.
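You can confirm both statements numerically with the sketch below (not one of the lab's provided cells); it builds $|0\rangle_L$ and $|1\rangle_L$ directly from the bitstrings in the table above.

```python
import numpy as np
from functools import reduce

# Logical codewords built from the table of classical codewords above
even = ["0000000", "0001111", "0110110", "0111001",
        "1010101", "1011010", "1100011", "1101100"]
odd = ["1111111", "1110000", "1001001", "1000110",
       "0101010", "0100101", "0011100", "0010011"]

def ket(bitstrings):
    v = np.zeros(2**7)
    for b in bitstrings:
        v[int(b, 2)] = 1.0
    return v / np.sqrt(len(bitstrings))

zero_L, one_L = ket(even), ket(odd)

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
X_all = reduce(np.kron, [X] * 7)  # X on all seven qubits
Z_all = reduce(np.kron, [Z] * 7)  # Z on all seven qubits

print(np.allclose(X_all @ zero_L, one_L))   # True: logical bit flip
print(np.allclose(Z_all @ one_L, -one_L))   # True: logical phase flip
```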
The encoding circuit to produce the logical codewords is shown below and is based on the constraints imposed by the parity check matrix.

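As a point of reference, here is one possible CUDA-Q kernel for this preparation. It is a sketch built directly from three generator rows of the even codewords in the table above (1010101, 0110110, and 0001111), so it may not match the figure's circuit gate for gate, and it assumes qubit 0 appears as the leftmost bit of each sampled bitstring.

```python
import cudaq

@cudaq.kernel
def steane_logical_zero():
    q = cudaq.qvector(7)
    # Three "pivot" qubits in superposition, one per generator row
    h(q[0])
    h(q[1])
    h(q[3])
    # Generator 1010101 (pivot q0)
    x.ctrl(q[0], q[2])
    x.ctrl(q[0], q[4])
    x.ctrl(q[0], q[6])
    # Generator 0110110 (pivot q1)
    x.ctrl(q[1], q[2])
    x.ctrl(q[1], q[4])
    x.ctrl(q[1], q[5])
    # Generator 0001111 (pivot q3)
    x.ctrl(q[3], q[4])
    x.ctrl(q[3], q[5])
    x.ctrl(q[3], q[6])
    mz(q)

# Each sampled bitstring should be one of the eight even codewords
print(cudaq.sample(steane_logical_zero, shots_count=1000))
```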
Exercise 1 - The Steane Code:
In the cell below, build a CUDA-Q kernel to encode the logical 0 state using the Steane code. Sample the circuit to prove that you have indeed created the appropriate superposition. In the cells that follow, complete the entire Steane code by adding stabilizer checks and code to measure the logical state. Complete the numbered tasks as well to confirm your code works as expected.
The Steane code is a member of an important family of stabilizer codes known as Calderbank-Shor-Steane (CSS) codes. A CSS code is characterized by the property that $Z$ and $X$ errors can be detected and corrected independently. A benefit of this is that fewer ancilla qubits are required to produce the syndromes, since they can be reset and reused for each error type. However, this procedure is also slower.
The stabilizers that detect $X$-type errors are $Z$-type parity checks, one for each row of $H$ (a product of $Z$ operators on the qubits where that row has a 1), while the stabilizers that detect $Z$-type errors are of similar form, with $X$ operators in place of $Z$.
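Below is a minimal sketch (not the full exercise solution) of how a single parity check of each type can be measured with one ancilla. The qubit indices used are illustrative assumptions; use the ones that match your parity check matrix.

```python
import cudaq

@cudaq.kernel
def one_parity_check_demo():
    data = cudaq.qvector(7)
    anc = cudaq.qubit()

    # ... your Steane encoding (and any injected error) would go here ...

    # Z-type check on data qubits 3, 4, 5, 6 (an assumed row of H):
    # CNOTs accumulate the Z-parity of those qubits onto the ancilla.
    x.ctrl(data[3], anc)
    x.ctrl(data[4], anc)
    x.ctrl(data[5], anc)
    x.ctrl(data[6], anc)
    mz(anc)

    # An X-type check on the same qubits would instead sandwich the ancilla
    # with Hadamards and reverse the control direction:
    #   h(anc); x.ctrl(anc, data[3]); ...; h(anc); mz(anc)

print(cudaq.sample(one_parity_check_demo))
```

To reuse the same ancilla for the next check, reset it after the measurement, which is the reuse referred to above.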
Just as with the classical Hamming code, there is a nice way to visualize the syndrome results. The diagram below places each data qubit on a vertex. The qubits are arranged such that each of the three sections, or plaquettes, corresponds to one of the stabilizers.
The syndromes can be visually interpreted by putting a colored X on the plaquettes whose syndromes are flagged. Each coloring of this graph uniquely corresponds to an error on a specific qubit, which is why the Steane code is often referred to as a color code.

You are now ready to code the rest of the Steane code. After encoding, introduce an $X$ error and a $Z$ error on the qubits of your choice. Try performing the $X$ and $Z$ syndrome measurements using the same three ancilla qubits, resetting them in between. Structure your code such that you can measure the data qubits and confirm the state of the logical qubit.
Now, test your code! It is sufficient to measure in the $Z$ basis, as the same procedure could be performed in the $X$ basis.
Try adding single errors, guess which stabilizers should flag, and confirm they do.
Add two errors. Confirm the code cannot correct the errors and a logical bitflip occurs.
It turns out that, like the Shor code, the Steane code admits alternate choices for $\bar{X}$ and $\bar{Z}$. Modify your counting code above and test whether lower-weight alternatives (for example, $X$ operators on only three of the seven qubits) are valid choices for $\bar{X}$.
2.3 Steane Code Capacity Analysis with CUDA-Q QEC
CUDA-QX is a set of libraries that enables easy acceleration of quantum application development. One of the libraries, CUDA-Q QEC, is focused on error correction and can help expedite much of the work done above. This section will demonstrate how to run a code capacity memory experiment with the Steane code.
A memory experiment is a procedure to test how well a protocol can preserve quantum information. Such an experiment can help assess the quality of a QEC code but is often limited by assumptions that deviate from a realistic noise model. One such example is a code capacity experiment. A code capacity procedure determines the logical error rate of a QEC code under strict assumptions such as perfect gates and measurements. Code capacity experiments can help put an upper bound on a procedure's threshold and are therefore a good starting place to compare new codes.
The process is outlined in the diagram below. Assume the 0000000 bitstring is the baseline (no error). Bit flips are then randomly introduced into the data vector to produce results like 0100010. If this were a real test on a physical quantum device, the data vector would not be known and a user could only proceed through the bottom path in the figure - performing syndrome extraction and then decoding the result to see if a logical flip occurred. In a code capacity experiment, the data vector with errors is known, so it can be used to directly compute whether a logical state flip occurred. Dividing the number of times the actual (top path) and predicted (bottom path) results disagree by the total number of rounds provides an estimate of the logical error rate for the code being tested.

Exercise 2 - CUDA-Q QEC Code Capacity Experiment:
CUDA-Q QEC allows researchers to streamline experiments like this with just a few lines of code. Try running the cells below to compute the logical error rate of the Steane code under code capacity assumptions, given an error probability $p$.
Next, load the Steane code, which is already implemented in CUDA-Q QEC.
The parity check matrices and observables can also be extracted from the Steane code.
A decoder can then be specified which takes the parity check matrix as an input.
Then, sample_code_capacity can be called and provided with p, the probability of any bit flipping, and the number of shots for the analysis.
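For reference, the sketch below strings these steps together end to end. The call names (get_code, get_parity_z, get_observables_z, get_decoder, sample_code_capacity, and the shape of the decoder result) follow the CUDA-Q QEC quickstart as best as we can recall and should be treated as assumptions; the values of $p$ and the shot count are illustrative. Double-check the current library documentation before relying on it.

```python
import numpy as np
import cudaq_qec as qec

# 1. Load the Steane code and pull out its Z parity check matrix and Z logical
#    observable (this sketch only tracks bit flip errors).
steane = qec.get_code("steane")
Hz = steane.get_parity_z()
Lz = np.array(steane.get_observables_z())

# 2. Build a decoder that takes the parity check matrix as its input.
decoder = qec.get_decoder("single_error_lut", Hz)

# 3. Sample noisy data vectors and their syndromes under code capacity assumptions.
p = 0.01          # illustrative physical error probability
nShots = 10000    # illustrative number of rounds
syndromes, data = qec.sample_code_capacity(Hz, nShots, p)
syndromes, data = np.array(syndromes), np.array(data)

# 4. Decode every shot and compare the predicted logical flip (bottom path)
#    against the actual logical flip computed from the known data (top path).
nLogicalErrors = 0
for shot in range(nShots):
    result = decoder.decode(syndromes[shot])
    correction = np.array(result.result).round().astype(np.uint8)
    actual = (Lz @ data[shot]) % 2        # did a logical flip really occur?
    predicted = (Lz @ correction) % 2     # does the decoder think one occurred?
    if not np.array_equal(actual, predicted):
        nLogicalErrors += 1

print(f"logical error rate ~ {nLogicalErrors / nShots}")
```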
Notice how the Steane code is already defined within CUDA-Q QEC, along with a selection of decoders, and the sample_code_capacity API to automatically run the procedure. Otherwise, you would need to code the entire process from scratch like you did in section 2.2 for each QEC code you want to test!
If the experiment is repeated many times with different $p$ values, a plot can be generated like the one shown below. The purple line is the line $p_L = p$ and corresponds to the case where the logical error rate is identical to the physical error rate. Anywhere the green line is below the purple line indicates that the Steane code was able to produce a logical error rate that is less than the physical error rate of the data qubits. When the green line is above the purple line, the Steane code produced a worse logical error rate, indicating that it would have been better to just use the data qubits and avoid the QEC procedure. The crossover point is an estimate of the code's threshold. Refining this estimate would require more sophisticated circuit-level noise models that more accurately represent the performance of the Steane code under realistic conditions.

Though the code capacity model has much room to improve, it is a great example of the utility of CUDA-Q QEC and how simple procedures can be streamlined so users can focus on testing codes rather than coding up the details of each test.
2.4 The Shor Code
The first QEC code, now known as the Shor code, was proposed by Peter Shor in 1995. The Shor code is a [[9,1,3]] code which uses 9 physical qubits to encode a single logical qubit and can correct any single $X$- or $Z$-type error.
The motivation for the code is that the 3-qubit repetition code can correct bit flip errors but not phase flip errors. We can consider why this is by examining the encoded state, which looks like the following: $\alpha|000\rangle + \beta|111\rangle$.
If a $Z_0$, $Z_1$, or $Z_2$ error occurs, the state is transformed to $\alpha|000\rangle - \beta|111\rangle$, which still lies in the codespace. This means there is no way to tell whether a phase flip error occurred or not. One could instead construct the repetition code in the $|+\rangle$/$|-\rangle$ basis to correct $Z$ errors, but then the same problem would persist for $X$ errors.
The ingenuity behind the Shor code is to concatenate two 3-qubit repetition codes into a 9-qubit code that can detect both types of errors. The encoding process begins with the 3-qubit encoding of the $|+\rangle$ state: $|+\rangle \rightarrow \frac{1}{\sqrt{2}}(|000\rangle + |111\rangle)$.
Then, $|0\rangle_L$ is encoded by taking a tensor product of three such states: $|0\rangle_L = \frac{1}{2\sqrt{2}}(|000\rangle + |111\rangle)^{\otimes 3}$.
The same process is completed for the $|1\rangle_L$ state, this time using $|-\rangle$ as the starting point: $|1\rangle_L = \frac{1}{2\sqrt{2}}(|000\rangle - |111\rangle)^{\otimes 3}$.
This encoding can be implemented with the following quantum circuit:

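A minimal CUDA-Q sketch of this encoding is given below (qubit 0 carries the state to be encoded and the blocks are qubits 0-2, 3-5, and 6-8); it is one standard construction and may differ cosmetically from the circuit in the figure.

```python
import cudaq

@cudaq.kernel
def shor_encode():
    q = cudaq.qvector(9)
    # (optionally prepare q[0] in the state to be encoded, e.g. x(q[0]) for |1>)

    # Outer (phase-flip) repetition code across the three blocks
    x.ctrl(q[0], q[3])
    x.ctrl(q[0], q[6])
    h(q[0])
    h(q[3])
    h(q[6])
    # Inner (bit-flip) repetition code within each block
    x.ctrl(q[0], q[1])
    x.ctrl(q[0], q[2])
    x.ctrl(q[3], q[4])
    x.ctrl(q[3], q[5])
    x.ctrl(q[6], q[7])
    x.ctrl(q[6], q[8])
    mz(q)

print(cudaq.sample(shor_encode, shots_count=1000))
```

Sampling the kernel as written (with $|0\rangle$ on qubit 0) should only return bitstrings in which each block of three is 000 or 111.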
The next consideration is to define the logical operators so that they behave as we expect, namely that $\bar{X}$ flips the logical state and $\bar{Z}$ flips the logical phase. In other words, we need the following equations to hold: $\bar{X}|0\rangle_L = |1\rangle_L$, $\bar{X}|1\rangle_L = |0\rangle_L$, $\bar{Z}|0\rangle_L = |0\rangle_L$, and $\bar{Z}|1\rangle_L = -|1\rangle_L$.
Can you see what the logical operators need to be?
For a logical bit flip to occur ($\bar{X}|0\rangle_L = |1\rangle_L$), the relative phase of each block needs to change. This is accomplished by performing a $Z$ operation on one of the qubits in each block, thus $\bar{X} = Z_0Z_3Z_6$ is a valid choice, though not the only choice, as others like $Z_1Z_4Z_7$ or even $Z_0Z_1\cdots Z_8$ also work. Similarly, for $\bar{Z}$ to take $|1\rangle_L$ to $-|1\rangle_L$ (and $|0\rangle_L$ to itself), all of the bits need to flip, thus $\bar{Z} = X_0X_1\cdots X_8$. The curious reader can confirm that anticommutativity holds between these logical operators and that they commute with each stabilizer discussed below.
Now, consider what happens when $X$- and $Z$-type errors corrupt a state encoded with the Shor code. If a bit flip error occurs on qubit 8 of the $|0\rangle_L$ state, the state becomes $\frac{1}{2\sqrt{2}}(|000\rangle + |111\rangle)(|000\rangle + |111\rangle)(|001\rangle + |110\rangle)$.
The error only corrupts the third block of the code, which houses qubit 8. So, this means stabilizers that perform parity checks on that block only ($Z_6Z_7$ and $Z_7Z_8$) are sufficient to determine which position experienced an error. Extending this, all bit flip errors can be corrected with the following six stabilizers, two for each block: $Z_0Z_1$, $Z_1Z_2$, $Z_3Z_4$, $Z_4Z_5$, $Z_6Z_7$, and $Z_7Z_8$. Because each block is an independent repetition code, the Shor code can handle up to three bit flip errors, as long as they occur in distinct blocks.
Now consider the impact of a phase flip error, for example a $Z$ error acting on qubit 5 in the second block.
The phase of the second block is changed, which means the entire state can be rewritten as $\frac{1}{2\sqrt{2}}(|000\rangle + |111\rangle)(|000\rangle - |111\rangle)(|000\rangle + |111\rangle)$. This "zoomed out" view makes it clear how the repetition code is leveraged again. This time a stabilizer is needed which can compare the phase of block 1 with block 2 and of block 2 with block 3.
The stabilizer $X_0X_1X_2X_3X_4X_5$ can be used for this; it will return +1 if the first two blocks have the same phase and -1 if they differ. Work this out by hand to convince yourself if it is not obvious why this is the case. Similarly, $X_3X_4X_5X_6X_7X_8$ can test the relative phase of the second and third blocks, completing the stabilizers necessary to detect phase flip errors.
All 8 stabilizers together can correct any single-qubit $X$ or $Z$ error, as summarized in the table below. Note that the Shor code is a degenerate code, meaning that certain syndromes correspond to multiple errors. At first this may seem problematic, but each of those errors is fixed by the same correction, so knowing the specific source of the error is not always necessary.
| Error Type | Syndrome ($Z_0Z_1\;Z_1Z_2\;Z_3Z_4\;Z_4Z_5\;Z_6Z_7\;Z_7Z_8\;X_0{\cdots}X_5\;X_3{\cdots}X_8$) |
| ----------- | ----------- |
| No Error | 0 0 0 0 0 0 0 0 |
| $X_0$ | 1 0 0 0 0 0 0 0 |
| $X_1$ | 1 1 0 0 0 0 0 0 |
| $X_2$ | 0 1 0 0 0 0 0 0 |
| $X_3$ | 0 0 1 0 0 0 0 0 |
| $X_4$ | 0 0 1 1 0 0 0 0 |
| $X_5$ | 0 0 0 1 0 0 0 0 |
| $X_6$ | 0 0 0 0 1 0 0 0 |
| $X_7$ | 0 0 0 0 1 1 0 0 |
| $X_8$ | 0 0 0 0 0 1 0 0 |
| $Z_0$ | 0 0 0 0 0 0 1 0 |
| $Z_1$ | 0 0 0 0 0 0 1 0 |
| $Z_2$ | 0 0 0 0 0 0 1 0 |
| $Z_3$ | 0 0 0 0 0 0 1 1 |
| $Z_4$ | 0 0 0 0 0 0 1 1 |
| $Z_5$ | 0 0 0 0 0 0 1 1 |
| $Z_6$ | 0 0 0 0 0 0 0 1 |
| $Z_7$ | 0 0 0 0 0 0 0 1 |
| $Z_8$ | 0 0 0 0 0 0 0 1 |
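If you would like to automate the table lookup in your own code, here is one compact (illustrative, not required) way to turn a syndrome into corrections, using the stabilizer ordering of the table above.

```python
# Syndrome bits are ordered as in the table:
# (Z0Z1, Z1Z2, Z3Z4, Z4Z5, Z6Z7, Z7Z8, X0...X5, X3...X8)
def corrections_from_syndrome(s):
    """s: list of 8 ints (0 or 1). Returns (x_correction_qubit, z_correction_block)."""
    x_qubit = None
    # Each block's pair of Z-checks localizes a bit flip within that block
    for block, pair in enumerate([(s[0], s[1]), (s[2], s[3]), (s[4], s[5])]):
        if pair == (1, 0):
            x_qubit = 3 * block        # first qubit of the block
        elif pair == (1, 1):
            x_qubit = 3 * block + 1    # middle qubit of the block
        elif pair == (0, 1):
            x_qubit = 3 * block + 2    # last qubit of the block
    # The two X-checks localize a phase flip to a block
    # (a Z on any one qubit of that block undoes it)
    z_block = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s[6], s[7])]
    return x_qubit, z_block

# Example from the table: syndrome 0 0 1 1 0 0 0 0 points at an X error on qubit 4
print(corrections_from_syndrome([0, 0, 1, 1, 0, 0, 0, 0]))  # (4, None)
```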
Exercise 3 - The Shor Code:
Now you have all of the background necessary to code the Shor code in CUDA-Q. Fill in the sections below to build up a kernel that performs Shor code encoding and syndrome checks. The kernel should be constructed such that you can apply errors and select measurement in the $Z$ or $X$ basis. Complete the tasks listed below to ensure your code works.
The final layer of Hadamards is necessary because the code is concatenated and the outer layer of the encoding lives in the $X$ basis. Specifying the measurement basis allows you to confirm whether or not the errors were fixed.
You will also need to post-process the results. In the case of a logical $Z$-basis measurement (where you see the impact of logical $X$ errors), you need to compute the parity of the logical operator $\bar{Z} = X_0X_1\cdots X_8$ by measuring all the qubits in the $X$ basis and computing the parity (sum them and then mod 2) of the results.
The same can be done for a logical $X$-basis measurement (where you see the impact of logical $Z$ errors). In this case you need to compute the parity of the logical operator $\bar{X}$ (which, up to multiplication by stabilizers, can be taken as $Z_0Z_1\cdots Z_8$) by measuring all of the qubits in the $Z$ basis and computing their parity.
Write a postprocessing function below which takes the results, computes the parity of each measurement, and prints the number of 1's and 0's along with the results.
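A sketch of one such function is shown below; it assumes the results have been converted to a plain dictionary of bitstring-to-count pairs and that the bitstrings contain only the nine data-qubit outcomes (the example input is hypothetical).

```python
def postprocess(counts):
    """counts: dict of {bitstring: count}, bitstrings over the 9 data qubits."""
    zeros, ones = 0, 0
    for bits, count in counts.items():
        parity = sum(int(b) for b in bits) % 2  # parity of the measured bits
        if parity == 0:
            zeros += count
        else:
            ones += count
    print(f"parity 0: {zeros}  parity 1: {ones}")
    print(counts)

# Hypothetical example input
postprocess({"000000000": 480, "111000000": 520})
```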
Now, run your code through the following tests and confirm it is working well.
Prepare the $|0\rangle$ state. Sample your kernel with no errors in the $Z$ and $X$ bases. Do you see a 100/0 and a 50/50 distribution, respectively? What do you notice about the bitstrings when you measure in each basis?
Now, comment out the part of your code that fixes errors. Add a single $Z$ error and measure in the $Z$ and $X$ bases. Do you observe a bit flip in the $Z$-basis results? Note that because the Shor code has an extra layer of Hadamard gates, a $Z$ error impacts the $Z$-basis observable, which is not the usual case.
Now prepare the $|+\rangle$ state. Comment out the part of your code that fixes errors, add a single $X$ error, and measure in the $Z$ and $X$ bases. Do you observe a bit flip in the $X$-basis results now?
Uncomment the part of your code that fixes the errors and run the same samples as in tasks 2 and 3. Did the correct syndromes flag, and was the impact of the error ameliorated?
Prepare the $|0\rangle$ state again. With your full code, add multiple errors and note that the stabilizer checks cannot properly fix them, as the code is only distance 3.