
QEC 101 Lab 7: qLDPC Codes

One of the most promising classes of QEC codes is the so-called quantum low density parity check (qLDPC) codes. This class is quite general and includes well-known codes like the surface code. This lab will walk through the basics of classical LDPC codes, the challenges that arise when moving to qLDPC codes, and how to construct valid qLDPC codes with favorable properties. You will eventually implement techniques from "Lift-Connected Surface Codes", connecting what you have learned to state-of-the-art research.

Prerequisites: This lab assumes you have a moderate knowledge of QEC and have completed the core QEC 101 courses (labs 1-4), especially the labs covering stabilizers and decoders.

The list below outlines what you'll be doing in each section of this lab:

  • 7.1 Learn the basics of classical LDPC codes and how to analyze their properties.

  • 7.2 Learn why quantum LDPC codes are challenging to construct and how to build hypergraph product (HGP) codes.

  • 7.3 Extend the HGP procedure to produce larger qLDPC codes with improved properties.

  • 7.4 Compare the quality of the codes you created using the NVIDIA BP+OSD decoder.

Terminology you will use:

  • low density parity check, encoding rate, degree

  • hypergraph product

  • lifted product

  • circulants

qLDPC codes have a number of favorable properties that make them promising for deployment within near-term fault-tolerant workflows.

First run the cell below to prepare the necessary libraries.

import sys
try:
    import time
    import cudaq_qec as qec
    import galois
    import cudaq_qec
    import ipywidgets as widgets
    import numpy as np
    from IPython.display import display
    from itertools import combinations
    from scipy.sparse import csr_matrix
except ImportError:
    print("Tools not found, installing. Please restart your kernel after this is done.")
    !{sys.executable} -m pip install --upgrade pip
    !{sys.executable} -m pip install galois
    !{sys.executable} -m pip install cudaq-qec
    !{sys.executable} -m pip install ipywidgets
    print("\nNew libraries have been installed. Please restart your kernel!")

7.1 Classical LDPC Codes

Robert Gallager first conceived of low density parity check (LDPC) codes in his 1960 MIT dissertation, but they were underappreciated at the time; interest resurged in the 1990s as other error correction codes rose in prominence. LDPC codes are now used widely in telecommunications and computer memory applications.

An LDPC code is a classical parity check error correction code with a sparse parity check matrix $H$. The parity check matrix is often represented by a Tanner graph, which was introduced in lab 4 on decoders. A Tanner graph is drawn with check nodes on the top row and variable nodes on the bottom. The Tanner graph for the Steane code is shown below.

[Figure: Tanner graph of the Steane code, with check nodes on the top row and variable nodes on the bottom.]

A sparse $H$ means that each variable and check node connects to only a limited number of other nodes. The variable node degree characterizes the maximum number of checks any (q)bit is involved in, while the check node degree characterizes the maximum number of (q)bits involved in any given check. Ideally, both values are as small as possible to maintain low density.
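Both degrees can be read directly off of $H$: row sums give the check node degrees and column sums give the variable node degrees. Below is a minimal sketch using the classical Hamming [7,4] check matrix that underlies the Steane code (a hint for Exercise 1, not the only approach).

import numpy as np

# Classical Hamming [7,4] parity check matrix (the one underlying the Steane code)
H = np.array([[0,0,0,1,1,1,1],
              [0,1,1,0,0,1,1],
              [1,0,1,0,1,0,1]], dtype=int)

check_degrees = H.sum(axis=1)     # row weights: bits touched by each check
variable_degrees = H.sum(axis=0)  # column weights: checks each bit sits in
print("check degrees:   ", check_degrees)     # [4 4 4]
print("variable degrees:", variable_degrees)  # [1 1 2 1 2 2 3]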

A second important property is the encoding rate ($r$):

$$r = \frac{k}{n+c}$$

where $k$ is the number of encoded logical bits, $n$ is the number of data bits, and $c$ is the number of check bits. A high encoding rate is good and means that many logical bits can be encoded with a lower overhead. However, this competes with other properties like the code distance, i.e., the ability to correctly capture errors.

The value of $k$ depends on the number of linearly independent constraints. To determine this, perform Gaussian elimination over GF(2). GF(2) comes from the world of abstract algebra and corresponds to a field of two elements; essentially, this just means integer math governed by mod 2 arithmetic. The Gaussian elimination result can be used to determine $\mathrm{rank}(H)$, which is related to $k$ by

$$k = n - \mathrm{rank}(H)$$
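As a quick check, galois makes the mod 2 rank computation one line; numpy's ordinary rank over the reals can differ from the GF(2) rank, so the field matters. A sketch, reusing the Hamming matrix from above:

import numpy as np
import galois

GF2 = galois.GF(2)
H = GF2([[0,0,0,1,1,1,1],
         [0,1,1,0,0,1,1],
         [1,0,1,0,1,0,1]])

# galois arrays dispatch np.linalg.matrix_rank to GF(2) arithmetic
rank = np.linalg.matrix_rank(H)
k = H.shape[1] - rank
print(f"rank(H) = {rank}, k = {k}")  # rank(H) = 3, so k = 7 - 3 = 4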

A final characteristic of a desirable LDPC code is how suited it is for decoding. Common decoders like belief propagation (BP) can struggle when the Tanner graph has 4-cycles. These form local loops (see image below) which can make it hard for the decoder to converge to a solution.

[Figure: a 4-cycle in a Tanner graph, forming a local loop between two variable nodes and two check nodes.]
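One concrete way to hunt for 4-cycles: two variable nodes lie on a 4-cycle exactly when they share two or more checks, so the integer overlap matrix $H^T H$ (computed over the integers, not mod 2) has an off-diagonal entry of at least 2. A sketch of the idea, one possible starting point for Exercise 1:

import numpy as np
from itertools import combinations

H = np.array([[1,1,0],
              [1,1,1]], dtype=int)  # variables 0 and 1 share checks 0 and 1

overlap = H.T @ H  # entry (i, j) counts the checks shared by variables i and j
for i, j in combinations(range(H.shape[1]), 2):
    if overlap[i, j] >= 2:
        print(f"variables {i} and {j} share {overlap[i, j]} checks: a 4-cycle")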

In most cases, it turns out that LDPC codes are very easy to generate: random generation of $H$ usually produces a good LDPC code. This also provides flexibility, as new codes can be generated as needed depending on the problem at hand. Randomly generated codes also perform well and produce results close to the Shannon limit, that is, the theoretical maximum rate at which information can pass through a noisy channel.
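To give a flavor of how simple random generation can be, the naive sketch below draws a random $H$ with a fixed variable node degree of 3 (the sizes are illustrative; practical constructions also control check degrees and avoid short cycles):

import numpy as np

rng = np.random.default_rng(seed=7)  # seed chosen arbitrarily for reproducibility
m, n, var_degree = 6, 12, 3          # 6 checks, 12 bits, each bit in 3 checks

H = np.zeros((m, n), dtype=int)
for col in range(n):
    checks = rng.choice(m, size=var_degree, replace=False)
    H[checks, col] = 1
print(H)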

Exercise 1:

Given the three parity check matrices below, write a function to analyze them and determine the check and variable node degrees, the encoding rate, the indices of any 4-cycles, and whether any nodes are unchecked.

H1 = np.array([
    [1,1,0,0,1,0,0,0],
    [1,1,0,0,0,1,0,0],
    [0,0,1,1,0,0,1,0],
    [0,0,1,0,1,0,0,1],
    [0,0,0,1,0,1,1,0]], dtype=int)

H2 = np.array([
    [1,0,1,0,0,1,0],
    [0,1,1,1,0,0,0],
    [1,1,0,0,1,0,0],
    [0,0,0,1,1,1,0]], dtype=int)

H3 = np.array([
    [1,0,0,1,0,1,0,0,0,1,0,0,1,0,0,0],
    [0,1,0,0,1,0,1,0,0,0,1,0,0,0,0,0],
    [0,0,1,0,0,0,0,1,0,0,0,1,0,1,1,0],
    [1,0,0,0,1,0,0,0,1,0,0,0,0,1,0,0],
    [0,0,0,0,0,0,1,0,1,0,0,1,1,0,0,0],
    [0,0,1,0,0,1,0,0,0,0,1,0,0,0,0,1]], dtype=int)


def degrees(H):
    """Compute the degrees of a parity check matrix.

    Args:
        H (np.array): parity check matrix
    Returns:
        (np.array): degree of each variable bit
        (np.array): degree of each check bit
    """
    #TODO - Complete the function


def unchecked_vars(H):
    """Identify any unchecked variable bits.

    Args:
        H (np.array): parity check matrix
    Returns:
        (np.array): indices of unchecked variable bits
    """
    #TODO - complete the function


def four_cycles(H):
    """Identify any 4-cycles in a parity check matrix.

    Args:
        H (np.array): parity check matrix
    Returns:
        (list): tuples of nodes involved in a 4-cycle
    """
    #TODO - complete the function


def encoding_rate(H):
    """Compute the encoding rate based on the rank of H.

    Note: must use galois for the GF(2) field definition to ensure
    the rank computation is performed mod 2.

    Args:
        H (np.array): parity check matrix
    Returns:
        (float): encoding rate
        (int): number of logical bits k
    """
    GF2 = galois.GF(2)
    Hgf2 = GF2(H)
    n = Hgf2.shape[1]
    rank = np.linalg.matrix_rank(Hgf2)
    k = n - rank
    return k / n, k


def analyze(H, name):
    """Organize and print the results from the previous functions.

    Args:
        H (np.array): parity check matrix
        name (str): name of the parity check matrix
    """
    vdeg, cdeg = degrees(H)
    R, k = encoding_rate(H)
    cycles = four_cycles(H)
    unchk = unchecked_vars(H)

    print(f'\n{name}:')
    print('  variable degrees:', vdeg.tolist())
    print('  check degrees:', cdeg.tolist())
    print(f'  rate = {R:.3f} (k = {k})')
    if cycles:
        print('  4-cycles:')
        for i, j, p, q in cycles:
            print(f'    vars ({i},{j}) rows ({p},{q})')
    else:
        print('  no 4-cycles')
    if unchk.size:
        print('  unchecked variables:', unchk.tolist())
    else:
        print('  all variables are checked')


for H, name in [(H1, 'H1'), (H2, 'H2'), (H3, 'H3')]:
    analyze(H, name)

7.2 Quantum LDPC

qLDPC codes have many similarities to their classical counterparts, particularly with respect to terms like encoding rate and degree. Unfortunately, a major difference is that valid qLDPC codes with favorable properties cannot be produced by randomly generating parity check matrices. This is because the $Z$ and $X$ parity check matrices ($H_Z$ and $H_X$) must commute ($H_Z H_X^T = 0$) for a valid CSS code that can correct both types of errors.

The probability of randomly producing parity check matrices that commute, let alone exhibit favorable properties, is vanishingly small. Cutting-edge qLDPC research is focused on finding clever ways to produce quality parity check matrices that meet these constraints.

One particularly insightful approach uses so-called hypergraph product codes. The idea is to take two "good" classical parity check matrices (here "good" is a technical term meaning the code's distance scales as $n$) $H_1$ ($m_1 \times n_1$) and $H_2$ ($m_2 \times n_2$) and combine them in such a way that $H_Z$ and $H_X$ commute (i.e., $H_Z H_X^T = 0$) and the resulting code has a constant encoding rate and a distance that scales proportionally to the square root of the number of data qubits.

The procedure works by defining the final parity check matrix $H$ as a block encoding of $H_Z$ and $H_X$:

$$H = \begin{pmatrix} 0 & H_Z \\ H_X & 0 \end{pmatrix}$$

Each quantum parity check matrix is then defined in terms of the two classical base matrices:

$$H_X = \begin{pmatrix} \mathbf{1}_{n_1} \otimes H_2 & H_1^T \otimes \mathbf{1}_{m_2} \end{pmatrix} \qquad H_Z = \begin{pmatrix} H_1 \otimes \mathbf{1}_{n_2} & \mathbf{1}_{m_1} \otimes H_2^T \end{pmatrix}$$

This construction ensures that $H_Z$ and $H_X$ commute. The proof is as follows:

$$\begin{aligned} H_Z H_X^T &= \begin{pmatrix} H_1 \otimes \mathbf{1}_{n_2} & \mathbf{1}_{m_1} \otimes H_2^T \end{pmatrix} \begin{pmatrix} (\mathbf{1}_{n_1} \otimes H_2)^T \\ (H_1^T \otimes \mathbf{1}_{m_2})^T \end{pmatrix} \\ &= (H_1 \otimes \mathbf{1}_{n_2})(\mathbf{1}_{n_1} \otimes H_2^T) + (\mathbf{1}_{m_1} \otimes H_2^T)(H_1 \otimes \mathbf{1}_{m_2}) \\ &= H_1 \otimes H_2^T + H_1 \otimes H_2^T \\ &= 2(H_1 \otimes H_2^T) = 0 \end{aligned}$$

It may not be clear at first why the final term equals zero. Recall that all operations with parity check matrices occur mod 2, so multiplying any binary matrix by 2 makes every entry either 0 or 2, and $2 \equiv 0 \pmod 2$.
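You can also confirm the proof numerically. The sketch below builds $H_X$ and $H_Z$ from two small, arbitrarily chosen base matrices using the Kronecker products defined above and checks that the product vanishes over GF(2). Exercise 2 asks you to do this for specific matrices, so treat this as a template:

import numpy as np
import galois

GF2 = galois.GF(2)
H1 = GF2([[1,1,0],[0,1,1]])  # m1 x n1, arbitrary small example
H2 = GF2([[1,0,1],[1,1,0]])  # m2 x n2, arbitrary small example

m1, n1 = H1.shape
m2, n2 = H2.shape
I = lambda d: GF2(np.eye(d, dtype=int))

Hx = np.hstack([np.kron(I(n1), H2), np.kron(H1.T, I(m2))])
Hz = np.hstack([np.kron(H1, I(n2)), np.kron(I(m1), H2.T)])

print("Hz @ Hx.T is all zero:", not np.any(Hz @ Hx.T))  # True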

Exercise 2:

Construct a hypergraph product code using a pair of three-bit repetition code base matrices. That is, begin with:

$$H_1 = H_2 = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}$$

Build the parity check matrices for $H_Z$ and $H_X$ and confirm they commute. Note: the galois package is used to define the matrices over a Galois field (GF(2)), which ensures modular arithmetic is baked into your computations. All operations can be performed just as you would with numpy.
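If you have not used galois before, this quick demo shows what "baked in" means: addition becomes XOR and all products are reduced mod 2.

import galois

GF2 = galois.GF(2)
a = GF2([1, 1, 0])
b = GF2([1, 0, 1])
print(a + b)  # [0 1 1], addition is XOR, i.e. mod 2
print(a @ b)  # 1, inner product reduced mod 2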

H = np.array([[1,1,0],
              [0,1,1]])  # Using H as H1 = H2

def HGP(H):
    """Perform the hypergraph product construction on a classical base parity check matrix.

    Args:
        H (np.array): base parity check matrix
    Returns:
        Hz (np.array): Hz matrix from the HGP construction
        Hx (np.array): Hx matrix from the HGP construction
    """
    #TODO build the I matrices

    # Construct a Galois field and convert your matrices to GF(2).
    GF2 = galois.GF(2)
    H = GF2(H)
    I_rows = GF2(I_rows)
    I_cols = GF2(I_cols)

    print("First term in Hx")
    Hx_a = #TODO
    print(Hx_a)

    print("\n Second term in Hx")
    Hx_b = #TODO
    print(Hx_b)

    print("\n Full Hx")
    Hx = #TODO
    print(Hx)

    print("\nFirst term in Hz")
    Hz_a = #TODO
    print(Hz_a)

    print("\n Second term in Hz")
    Hz_b = #TODO
    print(Hz_b)

    print("\n Full Hz")
    Hz = #TODO
    print(Hz)

    print("\n Hz times HxT")
    print(Hz @ Hx.T)

    return Hz, Hx

HGP(H)

It turns out there is a nice visual interpretation of the hypergraph product code you just generated: the Tanner graphs form a multiplication table of sorts. Each node of the product Tanner graph that is the product of a check node with a check node, or a data node with a data node, produces a data qubit. If the top Tanner graph contributes a circle (data qubit) and the left Tanner graph a square (check qubit), the result is an $X$ stabilizer check. Likewise, if the top Tanner graph contributes a square and the left Tanner graph a circle, the result is a $Z$ parity check, producing the Tanner graph below. Does it look familiar?

[Figure: Tanner graph of the resulting hypergraph product code.]

Remarkably, it turns out the product of two size-$l$ classical repetition codes is a $[[(l+1)^2 + l^2,\ 1,\ l+1]]$ surface code! For example, the $l = 2$ base matrices used in Exercise 2 give $(2+1)^2 + 2^2 = 13$ data qubits, one logical qubit, and distance 3: the [[13,1,3]] surface code. This is a great example demonstrating how two very simple classical codes can construct a more sophisticated quantum code which obeys the required commutativity constraints.

7.3 Generalizing HGP - Lifted Product Codes

It is possible to build upon the HGP method in a more general manner where products are taken between two parity check matrices whose entries are no longer single bits. Such an approach is called a lifted product (LP), as the elements of the parity check matrix are "lifted" to higher order elements. LP codes can often retain the check degrees while providing higher distance codes with a smaller qubit overhead.

An LP construction still needs to ensure that $H_Z H_X^T = 0$ holds as the parity check matrix elements are generalized. One way to ensure this is to replace parity check matrix elements with elements of a commutative matrix ring, that is, a set of matrices for which multiplication of any two elements commutes; this keeps $H_Z H_X^T = 0$ true (see the second term in the original proof of commutativity a few cells above). One example is $L \times L$ circulant matrices, which are defined as:

$$C = \sum_{i=0}^{L-1} c_i P^{(i)}$$

where $P^{(i)}$ is the cyclic permutation matrix that shifts the columns of the identity matrix by $i$ places to the right, and each $c_i$ can be either 0 or 1. The notation $B_L(P^{(i)})$ indicates the binary matrix representation of size $L$.
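You may find the following idiom useful in Exercise 3 (a sketch; the helper P below is ours, not a library function): np.roll builds the cyclic permutation matrices, and you can use it to verify that circulants commute with one another, which is exactly the property the LP construction relies on.

import numpy as np

def P(L, i):
    """Cyclic permutation matrix: identity with columns shifted i places right."""
    return np.roll(np.eye(L, dtype=int), i, axis=1)

# A circulant is a 0/1 combination of the P matrices, e.g. C = P^(1) + P^(2):
C1 = (P(5, 1) + P(5, 2)) % 2
C2 = (P(5, 3) + P(5, 4)) % 2
print(np.array_equal((C1 @ C2) % 2, (C2 @ C1) % 2))  # True: circulants commute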

Exercise 3:

Build the following binary matrix representations:

  • $B_4(P^{(2)})$

  • $B_5(P^{(1)} + P^{(2)})$

  • $B_3\begin{pmatrix} I & 0 \\ 0 & P^{(2)} \end{pmatrix}$

arr1 = #TODO
arr2 = #TODO
arr3 = #TODO

An LP code is built by taking an HGP of two parity check matrices where each 1 is, in this case, a circulant matrix $C$. An LP code of size $L$ will increase the number of qubits and checks by a factor of $L$. This is because each entry in the parity check matrix is "lifted" and replaced by a set of $L$ checks and qubits.

The same HGP equations from the previous section hold, but the parity check matrices are now denoted with a tilde to indicate that each 1 entry is replaced by a circulant. That is to say, $H = LP(\tilde{H}_1, \tilde{H}_2) = B_L(\tilde{H})$.
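Concretely, "lifting" by $B_L(C)$ replaces every 1 in the matrix with the $L \times L$ block $C$ and every 0 with an $L \times L$ zero block, which is exactly a Kronecker product. A small sketch (this is also the key idea behind the TODOs in Exercise 4):

import numpy as np

H = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=int)
B = np.roll(np.eye(4, dtype=int), 1, axis=1)  # B_4(P^(1))

H_lifted = np.kron(H, B)  # each 1 becomes a copy of B, each 0 a 4x4 zero block
print(H_lifted.shape)     # (8, 12): checks and qubits both scale by L = 4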

The figure below, based on Figure 2 of Lift-Connected Surface Codes, is helpful for understanding what the LP construction is doing. The overall procedure is similar: first, the base matrices are selected and used in the same HGP procedure you performed previously to form $\tilde{H}_Z$ and $\tilde{H}_X$. Then, each 1 in each parity check matrix is replaced with a circulant $C$. This simple swap takes select connections from the original Tanner graph and lifts each one, replacing it with a set of check and data qubits.

[Figure: the lifted product construction, based on Figure 2 of Lift-Connected Surface Codes.]

If $C$ is simply the identity matrix $I$, the resulting LP code is just the HGP code duplicated trivially $L$ times. Adding permutations to $C$, such as $I + P^{(1)}$, adds non-trivial checks between the HGP code copies. Notice that the figure below (adapted from Figure 3 of Lift-Connected Surface Codes) begins with the HGP that you performed earlier. The LP construction then creates four copies of the surface code and interconnects them with parity checks.

[Figure: LP surface code interconnecting four surface code copies, adapted from Figure 3 of Lift-Connected Surface Codes.]

It turns out that the LP surface code is a [[52,4,4]] code, whereas four independent copies of the surface code would form a [[52,4,3]] code (four copies of the [[13,1,3]] surface code give $n = 4 \times 13 = 52$ and $k = 4 \times 1 = 4$). This means the encoding rate is the same, but the code distance improves thanks to the LP procedure. These sorts of clever constructions are driving qLDPC research to continually improve code properties while maintaining the required commutativity.

Generally speaking, an LP surface code is parameterized by $l$ and $L$, where $l$ is the size of the base repetition code and $L$ is the number of copies produced by the lift procedure.

Exercise 4:

Build the [[52,4,4]] ($l = 2$ and $L = 4$) LP surface code by performing the following steps. First, use the base matrix below, which can be conveniently split into $H_{copy}$, which produces trivial copies of the surface code, and $H_{int}$, which interconnects these surface code copies.

$$\begin{pmatrix} I & I + P^{(1)} & 0 & \cdots & \cdots & 0 \\ 0 & I & I + P^{(1)} & \cdots & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots & \vdots \\ 0 & \cdots & 0 & I & I + P^{(1)} & 0 \\ 0 & \cdots & \cdots & 0 & I & I + P^{(1)} \end{pmatrix} = \begin{pmatrix} I & I & 0 & \cdots & \cdots & 0 \\ 0 & I & I & \cdots & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots & \vdots \\ 0 & \cdots & 0 & I & I & 0 \\ 0 & \cdots & \cdots & 0 & I & I \end{pmatrix} + \begin{pmatrix} 0 & P^{(1)} & 0 & \cdots & \cdots & 0 \\ 0 & 0 & P^{(1)} & \cdots & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots & \vdots \\ 0 & \cdots & 0 & 0 & P^{(1)} & 0 \\ 0 & \cdots & \cdots & 0 & 0 & P^{(1)} \end{pmatrix}$$

To produce the [[52,4,4]] code, we can perform the HGP procedure twice, using $H_{copy} = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}$ and $H_{int} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$. Lifting the first with $B_4(I)$ and the second with $B_4(P^{(1)})$ and summing the results will produce the parity check matrices for the [[52,4,4]] code.

Modify your HGP function to lift Hx_a and Hz_a with an arbitrary $B$, and Hx_b and Hz_b with the transpose of $B$.

Then build the [[52,4,4]] code by lifting H_copy and H_int (defined below) with $B_4(I)$ and $B_4(P^{(1)})$, respectively. Confirm that the Hz and Hx of the final result commute.

H_copy = np.array([[1,1,0],
                   [0,1,1]])

B_I_4 = np.array([[1,0,0,0],
                  [0,1,0,0],
                  [0,0,1,0],
                  [0,0,0,1]])

H_int = np.array([[0,1,0],
                  [0,0,1]])

B_P1_4 = np.array([[0,1,0,0],
                   [0,0,1,0],
                   [0,0,0,1],
                   [1,0,0,0]])


def LP(H, B):
    """Perform the lifted product construction of base matrix H with lift matrix B.

    Args:
        H (np.array): base parity check matrix
        B (np.array): binary representation of the lift matrix
    Returns:
        Hz (np.array): Hz matrix from the LP construction
        Hx (np.array): Hx matrix from the LP construction
    """
    rows, cols = H.shape
    I_rows = np.eye(rows, dtype=int)
    I_cols = np.eye(cols, dtype=int)

    GF2 = galois.GF(2)  # allows mod 2 math
    H = GF2(H)
    I_rows = GF2(I_rows, dtype=int)
    I_cols = GF2(I_cols, dtype=int)
    B = GF2(B)

    print("First term in Hx")
    Hx_a = np.kron(I_cols, H)
    print(Hx_a)
    print("First term in Hx lifted")
    Hx_a = # TODO
    print(Hx_a)

    print("\n Second term in Hx")
    Hx_b = np.kron(H.T, I_rows)
    print(Hx_b)
    print("Second term in Hx lifted")
    Hx_b = # TODO
    print(Hx_b)

    print("\n Full Lifted Hx")
    Hx = # TODO
    print(Hx)

    print("\nFirst term in Hz")
    Hz_a = np.kron(H, I_cols)
    print(Hz_a)
    print("First term in Hz lifted")
    Hz_a = # TODO
    print(Hz_a)

    print("\n Second term in Hz")
    Hz_b = np.kron(I_rows, H.T)
    print(Hz_b)
    print("Second term in Hz lifted")
    Hz_b = # TODO
    print(Hz_b)

    print("\n Full Hz")
    Hz = # TODO
    print(Hz)

    print("\n Hz times HxT")
    print(Hz @ Hx.T)

    return Hz, Hx


Hz_lifted_copy, Hx_lifted_copy = LP(H_copy, B_I_4)
Hz_lifted_int, Hx_lifted_int = LP(H_int, B_P1_4)

Hz_lifted_total = Hz_lifted_copy + Hz_lifted_int
Hx_lifted_total = Hx_lifted_copy + Hx_lifted_int

Now, analyze the Hz and Hx parity check matrices to 1) make sure they commute and 2) confirm the degrees are as expected. Each stabilizer should act on at most 6 qubits, and each qubit should be involved in no more than 6 checks (summing $Z$ and $X$ checks, as the full parity check matrix would be a concatenation of both $H_X$ and $H_Z$).

#Confirm Hz and Hx still commute
print("\n Hz times HxT")
# TODO

#Confirm [[52,4,4]] code was created
print("\n Number of logical qubits encoded (data qubits minus checks)")
# TODO

#Confirm degree of code is correct (should be 6 and 6)
# TODO

7.4 Decoding with CUDA-Q QEC

💻 Just a heads-up: This notebook is designed to be run on an environment with a GPU. If you don't have access to a GPU, feel free to read through the cells and explore the content without executing them. Enjoy learning! ⭐

qLDPC codes are well suited for decoding with CUDA-Q's accelerated BP+OSD decoder found in the CUDA-Q QEC library. If you want to learn more about BP+OSD decoding, complete lab 4 on decoding.

As we have not discussed logical observables for these codes, the code below will randomly generate an error vector with a 5% chance of an error on each qubit (assuming only bit-flip errors for now). We count a logical error whenever the decoder cannot identify all of the errors given the syndrome. Note, however, that each of these residual errors might not produce a logical flip in practice, so this is a worst-case count.

In the cell below, run the decoder using the $H_Z$ matrix you produced for the [[52,4,4]] LP surface code. Carefully read the code and see if you can spot where errors and syndromes are generated, where decoder options are specified, and how the decoder is called. Run the code and note what the logical error rate is.

# Define parameters
n = Hz_lifted_total.shape[1]      # number of physical qubits
k = n - Hz_lifted_total.shape[0]  # number of logical qubits
num_shots = 10000

# Define error rates (can be uniform or varied per qubit)
error_rate_vec = np.ones(n) * 0.05  # error rate for each qubit

# Create decoder
nv_dec_args = {
    "max_iterations": 50,
    "error_rate_vec": error_rate_vec,
    "use_sparsity": True,
    "use_osd": True,
    "osd_order": 0,
    "osd_method": 0,
    "bp_batch_size": 1000
}
decoder = qec.get_decoder("nv-qldpc-decoder", Hz_lifted_total, **nv_dec_args)

# Generate random errors and decode
decoding_time = 0
bp_converged_flags = []
num_logical_errors = 0

# Generate batch of errors
errors = np.random.binomial(1, error_rate_vec, (num_shots, n))
syndromes = (Hz_lifted_total.view(np.ndarray) @ errors.T % 2).T

# Batched decoding
t0 = time.time()
results = decoder.decode_batch(syndromes.tolist())
t1 = time.time()
decoding_time = t1 - t0

# Process results
for i, r in enumerate(results):
    bp_converged_flags.append(r.converged)
    decoded_error = np.array(r.result, dtype=np.uint8)

    # Check if error was corrected
    if not np.array_equal(decoded_error, errors[i]):
        num_logical_errors += 1

# Print statistics
print(f"{num_logical_errors} logical errors in {num_shots} shots")
print(f"Number of shots that converged with BP: {sum(bp_converged_flags)}")
print(f"Average decoding time: {1e3 * decoding_time / num_shots:.2f} ms per shot")

# Optional: Single shot example
single_syndrome = syndromes[0]
bp_converged, decoded_result, *_ = decoder.decode(single_syndrome.tolist())

Now, run the same code but use the Hz_lifted_copy parity check matrix. This code is simply four non-interacting copies of the surface code and hence a [[52,4,3]] code. How does the logical error rate compare? Can you see the benefit of the LP surface code? It adds one to the distance and significantly outperforms independent copies of the surface code.

# Define parameters
n = Hz_lifted_copy.shape[1]      # number of physical qubits
k = n - Hz_lifted_copy.shape[0]  # number of logical qubits
num_shots = 10000

# Define error rates (can be uniform or varied per qubit)
error_rate_vec = np.ones(n) * 0.05  # error rate for each qubit

# Create decoder
nv_dec_args = {
    "max_iterations": 50,
    "error_rate_vec": error_rate_vec,
    "use_sparsity": True,
    "use_osd": True,
    "osd_order": 0,
    "osd_method": 0,
    "bp_batch_size": 1000
}
decoder = qec.get_decoder("nv-qldpc-decoder", Hz_lifted_copy, **nv_dec_args)

# Generate random errors and decode
decoding_time = 0
bp_converged_flags = []
num_logical_errors = 0

# Generate batch of errors
errors = np.random.binomial(1, error_rate_vec, (num_shots, n))
syndromes = (Hz_lifted_copy.view(np.ndarray) @ errors.T % 2).T

# Batched decoding
t0 = time.time()
results = decoder.decode_batch(syndromes.tolist())
t1 = time.time()
decoding_time = t1 - t0

# Process results
for i, r in enumerate(results):
    bp_converged_flags.append(r.converged)
    decoded_error = np.array(r.result, dtype=np.uint8)

    # Check if error was corrected
    if not np.array_equal(decoded_error, errors[i]):
        num_logical_errors += 1

# Print statistics
print(f"{num_logical_errors} logical errors in {num_shots} shots")
print(f"Number of shots that converged with BP: {sum(bp_converged_flags)}")
print(f"Average decoding time: {1e3 * decoding_time / num_shots:.2f} ms per shot")

# Optional: Single shot example
single_syndrome = syndromes[0]
bp_converged, decoded_result, *_ = decoder.decode(single_syndrome.tolist())

Each choice of $l$ and $L$ produces a different LP surface code, and whether the LP approach beats copies of the surface code varies case by case. Examine the table below from Lift-Connected Surface Codes, where the code parameters were obtained via numerical simulation. Note how the entries highlighted in green are cases where the LP construction (top entry) results in a code with the same overhead but a higher code distance compared to surface code copies (bottom entry).

[Table: code parameter comparison from Lift-Connected Surface Codes; green-highlighted entries mark LP codes (top) that achieve higher distance than surface code copies (bottom) at the same overhead.]

Summary

You now have a foundational understanding of qLDPC codes and how they differ from their classical counterparts. qLDPC codes are quite promising and will remain an active field of research. The methods covered in this lab are just a sample of the different ways to construct qLDPC code parity check matrices, yet they lay the groundwork for you to understand other state-of-the-art techniques like bivariate bicycle codes.