duyuefeng0708
GitHub Repository: duyuefeng0708/Cryptography-From-First-Principle
Path: blob/main/frontier/08-lattices-post-quantum/sage/08d-learning-with-errors.ipynb
Kernel: SageMath 10.5

Learning With Errors

Module 08 | 08-lattices-post-quantum

LWE definition, noise, search vs decision

Objectives

By the end of this notebook you will be able to:

  1. Define the Learning With Errors (LWE) problem and explain why adding noise to a linear system transforms it from trivially solvable to conjecturally hard.

  2. Construct LWE instances in SageMath and experimentally verify that Gaussian elimination fails in the presence of noise.

  3. Distinguish between Search-LWE and Decision-LWE, and explain their relationship.

  4. Analyze how the parameters $(n, q, \sigma)$ control security, and connect LWE hardness to the lattice problems (SVP/CVP) studied in earlier notebooks.

  5. Foreshadow how LWE underpins Kyber (ML-KEM) and other post-quantum schemes.

Prerequisites

  • Completion of The LLL Algorithm.

  • Familiarity with lattices, SVP/CVP, and the fact that LLL provides approximate solutions but cannot solve worst-case SVP/CVP exactly.

  • Comfort with matrices and vectors over $\mathbb{Z}_q$ (modular arithmetic from Module 01).

Motivating Question

Solving a system of linear equations is easy --- Gaussian elimination does it in $O(n^3)$ time. What if I add a tiny bit of noise to every equation? Suddenly it becomes one of the hardest problems in mathematics.

This is the central miracle of the Learning With Errors problem. A system $\mathbf{b} = A\mathbf{s}$ is trivial. A system $\mathbf{b} = A\mathbf{s} + \mathbf{e}$ (where $\mathbf{e}$ is "small") appears to be almost the same thing --- yet no known algorithm can solve it efficiently when the parameters are chosen correctly.

Bridge from 08c: In the previous notebook, you saw that LLL can find short lattice vectors and approximately solve CVP. You might hope that LLL could strip away the noise in LWE. We will see that for properly chosen parameters, even LLL (and its stronger variants like BKZ) cannot recover the secret. This is precisely why LWE is the foundation of post-quantum cryptography.

1. Setup: Linear Systems Without Noise

Let us start with something familiar. We pick a secret vector $\mathbf{s} \in \mathbb{Z}_q^n$ and a random matrix $A \in \mathbb{Z}_q^{m \times n}$, and compute $\mathbf{b} = A\mathbf{s} \pmod{q}$.

This is just a system of $m$ linear equations in $n$ unknowns over a finite field (when $q$ is prime). Gaussian elimination solves it instantly.

# Parameters
n = 6    # dimension (number of unknowns)
m = 10   # number of equations (samples)
q = 101  # modulus (prime)
Zq = Zmod(q)

# Secret vector
set_random_seed(42)  # reproducibility
s = random_vector(Zq, n)
print(f'Secret s = {s}')
print(f'Parameters: n={n}, m={m}, q={q}')
# Generate a random matrix A and compute b = A*s (NO noise)
A = random_matrix(Zq, m, n)
b_clean = A * s
print('A (first 4 rows):')
print(A[:4])
print(f'\nb = A*s = {b_clean}')
# Solve using Gaussian elimination: A \ b gives s
# We use the first n rows (a square, invertible subsystem)
A_square = A[:n]
b_square = b_clean[:n]
s_recovered = A_square.solve_right(b_square)
print(f'Recovered s = {s_recovered}')
print(f'Original s  = {s}')
print(f'Match: {s_recovered == s}')

No surprise: a linear system over $\mathbb{Z}_q$ is easy. Gaussian elimination recovers $\mathbf{s}$ exactly.

Now let us see what happens when we add noise.

2. The LWE Problem: Adding Noise

Definition (LWE). Fix parameters $n$ (dimension), $q$ (modulus), and a noise distribution $\chi$ (typically a discrete Gaussian with standard deviation $\sigma$). The LWE problem is:

$$\text{Given } (A, \mathbf{b}) \text{ where } \mathbf{b} = A\mathbf{s} + \mathbf{e} \pmod{q},$$

with $A \xleftarrow{\$} \mathbb{Z}_q^{m \times n}$, $\mathbf{s} \xleftarrow{\$} \mathbb{Z}_q^n$, and $\mathbf{e} \leftarrow \chi^m$ (each entry is small), find $\mathbf{s}$.

The key point: $\mathbf{e}$ is small relative to $q$. Each entry of $\mathbf{e}$ lies within a few standard deviations of zero (roughly in $[-3\sigma, +3\sigma]$ with high probability), while $q$ can be much larger.
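To make "small relative to $q$" concrete, here is a plain-Python sketch (no Sage needed; rounding a continuous Gaussian is a crude stand-in for the discrete Gaussian sampler used in the cell below): essentially all of the noise mass sits in a narrow band around $0$ inside $\mathbb{Z}_q$.

```python
import random

def to_signed(x, q):
    """Map x in {0, ..., q-1} to the centered range (-q/2, q/2]."""
    x %= q
    return x if x <= q // 2 else x - q

def rounded_gaussian(sigma):
    """Crude stand-in for a discrete Gaussian: round a continuous sample."""
    return round(random.gauss(0, sigma))

random.seed(0)
q, sigma = 101, 3.0
samples = [rounded_gaussian(sigma) % q for _ in range(10_000)]
signed = [to_signed(x, q) for x in samples]

# Nearly all noise mass lies within a few standard deviations of 0,
# i.e. in a tiny slice of Z_q.
within = sum(abs(e) <= 3 * sigma for e in signed) / len(signed)
print(f'fraction within 3*sigma: {within:.3f}')  # close to 1
print(f'max |e| observed: {max(abs(e) for e in signed)}')
```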

# Generate noise: small errors from a discrete Gaussian distribution
# We use Sage's built-in discrete Gaussian sampler over the integers
from sage.stats.distributions.discrete_gaussian_integer import DiscreteGaussianDistributionIntegerSampler

sigma = 3.0  # noise standard deviation
D = DiscreteGaussianDistributionIntegerSampler(sigma=sigma)

# Sample error vector
e = vector(Zq, [D() for _ in range(m)])
print(f'Error vector e = {e}')
print(f'(Interpreting as signed: {[int(ei) if int(ei) < q/2 else int(ei)-q for ei in e]})')

# Compute noisy b
b_noisy = A * s + e
print(f'\nb_clean = A*s     = {b_clean}')
print(f'b_noisy = A*s + e = {b_noisy}')

Checkpoint

Predict before running the next cell: If we apply Gaussian elimination to the noisy system $(A, \mathbf{b}_{\text{noisy}})$, will we recover $\mathbf{s}$?

Think about it: the system is $A\mathbf{s} + \mathbf{e} = \mathbf{b}$, but Gaussian elimination "thinks" it is solving $A\mathbf{x} = \mathbf{b}$. It will find some solution, but will it be $\mathbf{s}$?

# Try to solve the NOISY system with Gaussian elimination
A_square = A[:n]
b_noisy_square = b_noisy[:n]
s_attempt = A_square.solve_right(b_noisy_square)
print(f'Gaussian elim gives: {s_attempt}')
print(f'Actual secret s:     {s}')
print(f'Match: {s_attempt == s}')
print(f'\nGaussian elimination FAILS! The noise has corrupted the solution.')
print(f'The "solution" is a meaningless vector in Z_{q}^{n}.')

This is the core insight:

| System | Difficulty |
|---|---|
| $\mathbf{b} = A\mathbf{s}$ | Trivial (Gaussian elimination) |
| $\mathbf{b} = A\mathbf{s} + \mathbf{e}$ | Conjectured hard (even for quantum computers) |

A "tiny" perturbation transforms the problem from $O(n^3)$ to (conjectured) exponential.

Misconception alert: "LWE is just solving noisy equations, so just round to remove the noise." Rounding works in very low dimensions (try it for $n=2$!), but in high dimensions the errors accumulate through the matrix operations. Gaussian elimination amplifies the noise catastrophically --- by the time you finish back-substitution, the errors have grown to fill all of $\mathbb{Z}_q$, leaving you with a random vector.
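To see the amplification in the smallest possible case, here is a plain-Python sketch (the 2x2 matrix and the error vector are hypothetical, chosen for illustration): on a square invertible subsystem the noisy "solution" is exactly $\mathbf{s} + A^{-1}\mathbf{e} \bmod q$, so the question is how large $A^{-1}\mathbf{e}$ is.

```python
# Why elimination amplifies noise: solving the noisy square system gives
# s_hat = s + A^{-1} e (mod q), and the entries of A^{-1} mod q are
# typically "large" elements of Z_q, so even a +/-1 error is smeared
# across all of Z_q.
q = 101  # prime modulus

def inv(x, q):
    """Modular inverse via Fermat's little theorem (q prime)."""
    return pow(x, q - 2, q)

# A hypothetical 2x2 matrix A = [[3, 5], [7, 2]] and tiny error e = (1, -1)
a, b, c, d = 3, 5, 7, 2
det = (a * d - b * c) % q
det_inv = inv(det, q)
# A^{-1} = det^{-1} * [[d, -b], [-c, a]]  (mod q)
Ainv = [[(d * det_inv) % q, (-b * det_inv) % q],
        [(-c * det_inv) % q, (a * det_inv) % q]]
print('A^{-1} mod q =', Ainv)

e = (1, -1)
shift = [(Ainv[i][0] * e[0] + Ainv[i][1] * e[1]) % q for i in range(2)]
print('A^{-1} e mod q =', shift)  # the shift added to s: far from (0, 0)
```

Even though every entry of $\mathbf{e}$ has absolute value $1$, the shift $A^{-1}\mathbf{e}$ lands far from zero in $\mathbb{Z}_q$, which is why the recovered vector bears no resemblance to $\mathbf{s}$.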

3. Visualizing the Noise

Let us see what LWE "looks like." We will generate many LWE samples and plot the residuals $\mathbf{b} - A\mathbf{s} \pmod{q}$. Without noise, these are all zero. With noise, they form a cluster around zero.

# Generate many LWE samples and visualize the noise
M = 2000  # number of samples
A_big = random_matrix(Zq, M, n)
e_big = vector(Zq, [D() for _ in range(M)])
b_big = A_big * s + e_big

# Compute residuals: b - A*s (mod q)
# If we knew s, these would reveal the noise pattern
residuals = b_big - A_big * s

# Convert to signed representation centered around 0
def to_signed(x, q):
    x = ZZ(x) % q
    return x if x <= q//2 else x - q

residuals_signed = [to_signed(r, q) for r in residuals]

# Histogram of residuals
p = histogram(residuals_signed, bins=range(-15, 16), color='steelblue',
              edgecolor='white', density=True)
p += text(f'Noise distribution (sigma={sigma})', (0, 0.15), fontsize=12, color='black')
p.axes_labels(['Error value', 'Density'])
show(p, figsize=(8, 4))
# Now compare: what does b look like vs A*s?
# Plot each sampled component b_i against the corresponding (A*s)_i
As_vals = [ZZ((A_big * s)[i]) for i in range(min(500, M))]
b_vals = [ZZ(b_big[i]) for i in range(min(500, M))]
p = scatter_plot(list(zip(As_vals, b_vals)), markersize=4, alpha=0.5,
                 facecolor='steelblue')

# Perfect line y = x for reference
p += plot(x, (x, 0, q), color='red', linestyle='--', legend_label='b = As (no noise)')
p.axes_labels(['A*s (mod q)', 'b = A*s + e (mod q)'])
show(p, figsize=(6, 6), title='LWE samples cluster around the line y = x')

The scatter plot shows that each noisy observation $b_i$ is close to the true value $(A\mathbf{s})_i$, but not exactly equal. The red dashed line is the noiseless case. The blue dots scatter around it --- that scatter IS the LWE noise.

4. Decision-LWE: Can You Tell Noise from Random?

There is a second, equally important formulation of LWE:

Decision-LWE. Given $(A, \mathbf{b})$, distinguish between:

  • $\mathbf{b} = A\mathbf{s} + \mathbf{e} \pmod{q}$ (LWE samples), and

  • $\mathbf{b} \xleftarrow{\$} \mathbb{Z}_q^m$ (uniformly random).

If the noise is large enough and $q$ is large enough, the LWE samples "look random" --- no efficient algorithm can tell them apart from uniform.

Let us build a distinguisher and see when it works and when it fails.

def generate_lwe_samples(n, m, q, sigma):
    """Generate an LWE instance: (A, b = A*s + e mod q)."""
    Zq = Zmod(q)
    D = DiscreteGaussianDistributionIntegerSampler(sigma=sigma)
    s = random_vector(Zq, n)
    A = random_matrix(Zq, m, n)
    e = vector(Zq, [D() for _ in range(m)])
    b = A * s + e
    return A, b, s, e

def generate_random_samples(n, m, q):
    """Generate uniform random (A, b) --- no LWE structure."""
    Zq = Zmod(q)
    A = random_matrix(Zq, m, n)
    b = random_vector(Zq, m)
    return A, b

print('Helper functions defined.')
# A naive distinguisher: try to solve A*x = b and check the residual
# If it's LWE, the residual b - A*x_hat should be "small"
# If it's random, the residual will be uniformly distributed

def naive_distinguisher(A, b, q, threshold):
    """
    Attempt to distinguish LWE from random.
    Solve A*x = b over Z_q, compute residual, check if residual is 'small'.
    Returns 'LWE' or 'Random'.
    """
    n = A.ncols()
    try:
        # Use first n rows to solve
        x_hat = A[:n].solve_right(b[:n])
        # Check residual on ALL rows
        residual = b - A * x_hat
        # Convert to signed and compute average absolute value
        res_signed = [abs(to_signed(ZZ(r), q)) for r in residual]
        avg_residual = sum(res_signed) / len(res_signed)
        return 'LWE' if avg_residual < threshold else 'Random'
    except Exception:
        return 'Unknown'

print('Distinguisher defined.')
# Test the distinguisher with VERY SMALL noise (sigma=1)
# It should work here because the noise is tiny relative to q
print('=== Small noise (sigma=1, q=101) ===')
print('The noise barely perturbs the system, so structure is detectable.\n')

correct = 0
trials = 20
for _ in range(trials):
    # LWE instance
    A_lwe, b_lwe, _, _ = generate_lwe_samples(n=6, m=20, q=101, sigma=1.0)
    guess_lwe = naive_distinguisher(A_lwe, b_lwe, 101, threshold=q/4)
    # Random instance
    A_rand, b_rand = generate_random_samples(n=6, m=20, q=101)
    guess_rand = naive_distinguisher(A_rand, b_rand, 101, threshold=q/4)
    if guess_lwe == 'LWE':
        correct += 1
    if guess_rand == 'Random':
        correct += 1

print(f'Distinguisher accuracy: {correct}/{2*trials} = {100*correct/(2*trials):.0f}%')
print('With tiny noise, the distinguisher works well!')
# Now try with PROPER noise (sigma=8, still q=101)
# Here sigma/q is large enough that the LWE structure is mostly masked
print('=== Proper noise (sigma=8, q=101) ===')
print('Noise fills a significant fraction of Z_q. Can we still distinguish?\n')

correct = 0
for _ in range(trials):
    A_lwe, b_lwe, _, _ = generate_lwe_samples(n=6, m=20, q=101, sigma=8.0)
    guess_lwe = naive_distinguisher(A_lwe, b_lwe, 101, threshold=q/4)
    A_rand, b_rand = generate_random_samples(n=6, m=20, q=101)
    guess_rand = naive_distinguisher(A_rand, b_rand, 101, threshold=q/4)
    if guess_lwe == 'LWE':
        correct += 1
    if guess_rand == 'Random':
        correct += 1

print(f'Distinguisher accuracy: {correct}/{2*trials} = {100*correct/(2*trials):.0f}%')
print('With larger noise, the distinguisher degrades toward random guessing (50%)!')

Takeaway: When $\sigma$ is small relative to $q$, the LWE distribution has detectable structure and a simple distinguisher works. When $\sigma$ is chosen appropriately (large enough to mask the structure, but small enough that decryption still works), LWE samples become indistinguishable from random.

This is the Decision-LWE assumption: for appropriate parameters, no polynomial-time algorithm can distinguish $(A, A\mathbf{s} + \mathbf{e})$ from $(A, \mathbf{u})$ with non-negligible advantage.
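The phrase "non-negligible advantage" can be made concrete. The following plain-Python sketch estimates the advantage $|\Pr[D = 1 \mid \text{noise}] - \Pr[D = 1 \mid \text{uniform}]|$ of a toy distinguisher acting on single $\mathbb{Z}_q$ elements. It distinguishes the raw noise distribution from uniform, which is what a full Decision-LWE distinguisher must implicitly do once the $A\mathbf{s}$ part is stripped away (the $q/4$ threshold mirrors the one used above):

```python
import random

def to_signed(x, q):
    """Centered representative of x mod q, in (-q/2, q/2]."""
    x %= q
    return x if x <= q // 2 else x - q

def looks_like_noise(x, q):
    """Toy distinguisher on a single Z_q element: guess 'noise'
    if it is within q/4 of zero in the centered representation."""
    return abs(to_signed(x, q)) <= q // 4

random.seed(1)
q, sigma, trials = 101, 3.0, 5_000

hits_noise = sum(looks_like_noise(round(random.gauss(0, sigma)) % q, q)
                 for _ in range(trials))
hits_unif = sum(looks_like_noise(random.randrange(q), q)
                for _ in range(trials))

# Advantage = |Pr[D=1 | noise] - Pr[D=1 | uniform]|; here it is large
# (roughly one half), so this noise distribution is NOT pseudorandom.
adv = abs(hits_noise - hits_unif) / trials
print(f'estimated advantage: {adv:.3f}')
```

With $\sigma = 3$ and $q = 101$ the advantage is far from negligible; the Decision-LWE assumption says that for properly scaled parameters and full LWE samples, every efficient distinguisher's advantage is vanishingly small.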

5. Search-LWE: Recovering the Secret

Search-LWE asks: given $(A, \mathbf{b} = A\mathbf{s} + \mathbf{e})$, find $\mathbf{s}$.

A classical result due to Regev (2005) shows that Search-LWE and Decision-LWE are polynomially equivalent when $q$ is polynomial in $n$. So if you can decide, you can search, and vice versa.

Let us try brute force search in a tiny instance to see the structure, then observe how quickly it becomes infeasible.

# Brute-force search for TINY parameters
n_tiny, q_tiny, sigma_tiny = 3, 17, 1.5
m_tiny = 8
A_t, b_t, s_t, e_t = generate_lwe_samples(n_tiny, m_tiny, q_tiny, sigma_tiny)
print(f'Tiny LWE: n={n_tiny}, q={q_tiny}, sigma={sigma_tiny}')
print(f'Secret: s = {s_t}')
# Note: use ** inside f-strings (the Sage preparser leaves string literals
# alone, so ^ there would be Python XOR)
print(f'Search space: q^n = {q_tiny}^{n_tiny} = {q_tiny**n_tiny} candidates\n')

# Try every possible s in Z_q^n
Zq_tiny = Zmod(q_tiny)
best_score = Infinity
best_candidate = None
for s_candidate in VectorSpace(Zq_tiny, n_tiny):
    residual = b_t - A_t * s_candidate
    # Score: sum of squared (signed) residuals
    score = sum(to_signed(ZZ(r), q_tiny)^2 for r in residual)
    if score < best_score:
        best_score = score
        best_candidate = s_candidate

print(f'Best candidate: {best_candidate} (score = {best_score})')
print(f'Actual secret:  {s_t}')
print(f'Match: {best_candidate == s_t}')
print(f'\nBrute force works for q^n = {q_tiny**n_tiny}, but real LWE uses n=512+, q~3329...')

6. Connection to Lattices: LWE as CVP

Why is LWE a "lattice" problem? Consider the lattice

$$\Lambda_q(A) = \{\mathbf{x} \in \mathbb{Z}^m : \mathbf{x} \equiv A\mathbf{s} \pmod{q} \text{ for some } \mathbf{s} \in \mathbb{Z}_q^n\}.$$

The vector $A\mathbf{s} \pmod{q}$ is a lattice point, and $\mathbf{b} = A\mathbf{s} + \mathbf{e}$ is a point near the lattice (displaced by the small error $\mathbf{e}$). So solving LWE is equivalent to solving a Closest Vector Problem (CVP) on this lattice: find the lattice point closest to $\mathbf{b}$.

We already know from 08b-08c that CVP is hard in general, and LLL only gives approximate solutions. LWE inherits this hardness.

Let us verify this connection concretely.

# Build the q-ary lattice from A
# Lattice Lambda_q(A) = { y in Z^m : y = A*s (mod q) for some s }
# Generators: the rows of A^T together with the rows of q*I span the lattice
n_lat, m_lat, q_lat = 4, 8, 31
sigma_lat = 2.0
Zq_lat = Zmod(q_lat)
A_lat, b_lat, s_lat, e_lat = generate_lwe_samples(n_lat, m_lat, q_lat, sigma_lat)

# Construct a generating matrix (its rows generate the lattice)
A_int = matrix(ZZ, A_lat)  # lift to integers

# q-ary lattice: stack A^T on top of q*I
basis_top = A_int.transpose()                   # n x m
basis_bot = q_lat * identity_matrix(ZZ, m_lat)  # m x m
L_basis = block_matrix([[basis_top], [basis_bot]])  # (n+m) x m

print(f'LWE instance: n={n_lat}, m={m_lat}, q={q_lat}')
print(f'Secret s = {s_lat}')
print(f'Error e = {e_lat} (signed: {[to_signed(ZZ(ei), q_lat) for ei in e_lat]})')
print(f'\nLattice generating matrix has {L_basis.nrows()} rows in Z^{L_basis.ncols()}')
print(f'\nThe target vector b is close to A*s in this lattice.')
print(f'Distance (error norm) = {sqrt(sum(to_signed(ZZ(ei), q_lat)^2 for ei in e_lat)).n(digits=4)}')
# Try using LLL to approximately solve CVP (Babai's nearest plane)
# For these small parameters, it might work!
b_int = vector(ZZ, b_lat)

# LLL-reduce; drop the zero rows LLL leaves for a rank-deficient input,
# so Gram-Schmidt below works on a genuine basis
L_reduced = matrix(ZZ, [row for row in L_basis.LLL() if not row.is_zero()])

# Babai's nearest plane algorithm
def babai_cvp(basis, target):
    """Babai's nearest plane algorithm for approximate CVP."""
    B = basis.change_ring(QQ)
    t = vector(QQ, target)
    G, _ = B.gram_schmidt()
    b = t
    for i in range(B.nrows() - 1, -1, -1):
        c = (b * G[i]) / (G[i] * G[i])
        b = b - round(c) * B[i]
    return target - b  # closest lattice vector

closest = babai_cvp(L_reduced, b_int)
error_found = b_int - closest

# The closest lattice vector should be A*s (mod q)
As_int = vector(ZZ, A_lat * s_lat)
print(f'Target b:              {b_int}')
print(f'Closest lattice point: {vector(ZZ(x) % q_lat for x in closest)}')
print(f'True A*s mod q:        {As_int}')
print(f'Recovered error:       {error_found}')
print(f'True error (signed):   {[to_signed(ZZ(ei), q_lat) for ei in e_lat]}')
print(f'\nFor small parameters, LLL+Babai can crack LWE!')
print(f'But as n grows and sigma/q is tuned, this approach fails.')

7. Parameter Space: When Is LWE Hard?

LWE security depends on three parameters:

| Parameter | Role | Effect on security |
|---|---|---|
| $n$ | Dimension | Larger $n$ = harder (exponential in $n$) |
| $q$ | Modulus | Must be large enough for correctness, but not too large |
| $\sigma$ | Noise width | Larger noise = harder to solve, but too large breaks decryption |

The critical ratio is $\sigma / q$: if $\sigma$ is too small relative to $q$, the noise is negligible and the system is easy. If $\sigma$ is too large, decryption errors become likely.
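This trade-off can be quantified without running any attack: compute the total variation distance between the discrete Gaussian reduced mod $q$ and the uniform distribution on $\mathbb{Z}_q$. A plain-Python sketch (the Gaussian pmf is truncated at `tail` standard deviations, which is an approximation):

```python
import math

def tv_distance_to_uniform(sigma, q, tail=10):
    """Total variation distance between (discrete Gaussian mod q) and the
    uniform distribution on Z_q, with the Gaussian pmf truncated to
    [-tail*sigma, tail*sigma] before folding mod q."""
    pmf = [0.0] * q
    lo, hi = -int(tail * sigma) - 1, int(tail * sigma) + 1
    norm = sum(math.exp(-k * k / (2 * sigma ** 2)) for k in range(lo, hi + 1))
    for k in range(lo, hi + 1):
        pmf[k % q] += math.exp(-k * k / (2 * sigma ** 2)) / norm
    return 0.5 * sum(abs(p - 1 / q) for p in pmf)

q = 101
for sigma in [1, 2, 4, 8, 16, 32]:
    d = tv_distance_to_uniform(sigma, q)
    print(f'sigma={sigma:>2}: TV distance to uniform = {d:.4f}')
```

As $\sigma$ grows toward $q$, the folded distribution flattens and the distance decays, which is exactly the regime where the naive distinguisher in Section 4 degraded to coin flipping.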

Regev's reduction (2005): Worst-case lattice problems (like GapSVP) reduce to average-case LWE when $\sigma \geq \sqrt{n}$. This is why LWE is so compelling: breaking LWE on average (i.e., for random instances) is at least as hard as solving worst-case lattice problems.

Let us experimentally see how the dimension $n$ affects the difficulty of a lattice attack.

# Experiment: try LLL-based attack for increasing n
# For each n, generate LWE and see if Babai's algorithm recovers s
print('n     q      sigma   LLL recovers s?   time (s)')
for n_exp in [4, 6, 8, 12, 16, 20]:
    q_exp = next_prime(n_exp^2 * 10)     # q ~ O(n^2)
    sigma_exp = max(2.0, sqrt(n_exp))    # sigma ~ sqrt(n)
    m_exp = 2 * n_exp
    Zq_exp = Zmod(q_exp)
    A_exp, b_exp, s_exp, e_exp = generate_lwe_samples(n_exp, m_exp, q_exp, float(sigma_exp))

    # Build lattice and try LLL + Babai
    A_exp_int = matrix(ZZ, A_exp)
    basis_exp = block_matrix([[A_exp_int.transpose()],
                              [q_exp * identity_matrix(ZZ, m_exp)]])
    t0 = walltime()
    try:
        # Drop zero rows so babai_cvp gets a genuine basis
        L_red = matrix(ZZ, [row for row in basis_exp.LLL() if not row.is_zero()])
        closest_exp = babai_cvp(L_red, vector(ZZ, b_exp))
        # Check: does closest mod q equal A*s mod q?
        closest_mod = vector(Zq_exp, [ZZ(x) % q_exp for x in closest_exp])
        As_mod = A_exp * s_exp
        success = (closest_mod == As_mod)
    except Exception:
        success = False
    elapsed = walltime() - t0
    print(f'{n_exp:<5} {q_exp:<6} {float(sigma_exp):>5.1f}   {str(success):<15}   {elapsed:>8.3f}')

As $n$ increases, the LLL-based attack starts to fail. For the parameters used in real-world schemes (like Kyber, where $n = 256$ per polynomial), the lattice dimension is in the hundreds or thousands, and no known algorithm --- classical or quantum --- solves LWE efficiently.

Crypto foreshadowing: Kyber (ML-KEM), the NIST-selected post-quantum key encapsulation mechanism, is built on Module-LWE --- a structured variant of LWE where the matrix $A$ has a special block structure using polynomial rings. The noise $\mathbf{e}$ IS the security: without it, Kyber would be trivially breakable by linear algebra. The next notebooks (08e, 08f) show how the ring structure makes this efficient enough for real-world use.
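For orientation, here are the headline parameters of the three ML-KEM variants standardized in FIPS 203 (values quoted from the standard; all three share the ring dimension $n = 256$ and modulus $q = 3329$, and notebooks 08e/08f develop how they are used):

```python
# ML-KEM (Kyber) parameter sets from FIPS 203.
# k is the module rank; eta1/eta2 are the widths of the centered binomial
# noise distributions. All three sets share n = 256 and q = 3329.
ML_KEM_PARAMS = {
    'ML-KEM-512':  dict(k=2, eta1=3, eta2=2),
    'ML-KEM-768':  dict(k=3, eta1=2, eta2=2),
    'ML-KEM-1024': dict(k=4, eta1=2, eta2=2),
}
n, q = 256, 3329

for name, p in ML_KEM_PARAMS.items():
    # The effective LWE dimension of the module instance is k * n.
    print(f"{name}: rank k={p['k']}, LWE dimension k*n = {p['k'] * n}, q={q}")
```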

Exercises

Exercise 1: Noise Threshold for Gaussian Elimination (Worked)

Goal: Experimentally find the noise level at which Gaussian elimination stops recovering the secret.

Setup: Fix $n = 5$, $q = 101$, $m = 10$. For increasing $\sigma$ values, generate LWE instances and try to solve with Gaussian elimination. Measure the fraction of correct recoveries.

# EXERCISE 1 - FULLY WORKED SOLUTION
n_ex, q_ex, m_ex = 5, 101, 10
Zq_ex = Zmod(q_ex)
num_trials = 50
sigma_values = [0.0, 0.5, 1.0, 2.0, 3.0, 5.0, 8.0, 12.0]
success_rates = []

for sig in sigma_values:
    successes = 0
    for _ in range(num_trials):
        s_ex = random_vector(Zq_ex, n_ex)
        A_ex = random_matrix(Zq_ex, m_ex, n_ex)
        if sig == 0:
            e_ex = zero_vector(Zq_ex, m_ex)
        else:
            D_ex = DiscreteGaussianDistributionIntegerSampler(sigma=float(sig))
            e_ex = vector(Zq_ex, [D_ex() for _ in range(m_ex)])
        b_ex = A_ex * s_ex + e_ex
        # Try Gaussian elimination on first n rows
        try:
            s_guess = A_ex[:n_ex].solve_right(b_ex[:n_ex])
            if s_guess == s_ex:
                successes += 1
        except Exception:
            pass  # singular matrix
    rate = successes / num_trials
    success_rates.append(rate)
    print(f'sigma = {sig:>5.1f}: success rate = {rate:.0%}')

# Plot
p = list_plot(list(zip(sigma_values, success_rates)), plotjoined=True,
              marker='o', color='steelblue', thickness=2)
p.axes_labels(['Noise sigma', 'Gaussian elim. success rate'])
show(p, figsize=(7, 4), title=f'n={n_ex}, q={q_ex}: noise kills Gaussian elimination')

Interpretation: At $\sigma = 0$, Gaussian elimination always succeeds. By $\sigma \geq 1$, the success rate drops sharply. Even tiny noise (relative to $q = 101$) is enough to destroy exact linear algebra. This is the fundamental principle behind LWE security.

Exercise 2: Build a Better Distinguisher (Guided)

The naive distinguisher from Section 4 uses average residual magnitude. A better approach uses the chi-squared statistic: if the residuals are LWE noise, they cluster near zero; if random, they are uniform over $\mathbb{Z}_q$.

Task: Complete the function below to implement a chi-squared distinguisher.

Hints:

  1. Compute residuals $\mathbf{b} - A\hat{\mathbf{s}} \pmod{q}$ where $\hat{\mathbf{s}}$ is the Gaussian elimination "solution."

  2. Bin the signed residuals into a histogram.

  3. Compare against the uniform distribution using chi-squared: $\chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i}$.

  4. A large $\chi^2$ means the distribution is NOT uniform, suggesting LWE structure.
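Before wiring the statistic into the distinguisher, it may help to sanity-check it in isolation. A plain-Python sketch of just the chi-squared computation from hint 3, run on synthetic data (completing the actual distinguisher remains your exercise):

```python
from collections import Counter
import random

def chi_squared_uniform(values, q):
    """Chi-squared statistic of observed values in {0, ..., q-1} against
    the uniform distribution (expected count len(values)/q per bin)."""
    expected = len(values) / q
    counts = Counter(values)
    # Bins with zero observations still contribute to the sum.
    return sum((counts.get(v, 0) - expected) ** 2 / expected for v in range(q))

# Sanity check: uniform values give a statistic near q - 1, while values
# clustered in a few bins (like LWE residuals) give a much larger one.
random.seed(0)
q, m = 101, 2000
uniform_vals = [random.randrange(q) for _ in range(m)]
clustered_vals = [round(random.gauss(0, 3)) % q for _ in range(m)]
print('chi2 (uniform):  ', round(chi_squared_uniform(uniform_vals, q)))
print('chi2 (clustered):', round(chi_squared_uniform(clustered_vals, q)))
```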

# EXERCISE 2 - GUIDED (fill in the marked sections)

def chi_squared_distinguisher(A, b, q, chi2_threshold):
    """
    Distinguisher based on chi-squared test of residuals.
    Returns 'LWE' if residuals show non-uniform structure, else 'Random'.
    """
    n = A.ncols()
    m = A.nrows()
    Zq = Zmod(q)

    # Step 1: Solve A[:n]*x = b[:n] to get candidate x_hat
    try:
        x_hat = A[:n].solve_right(b[:n])
    except Exception:
        return 'Unknown'

    # Step 2: Compute residuals on ALL m equations
    residual = b - A * x_hat
    res_signed = [to_signed(ZZ(r), q) for r in residual]

    # Step 3: Build histogram of residuals
    # ---- FILL IN ----
    # Count how many residuals fall in each bin from -(q//2) to +(q//2)
    # Then compute chi-squared against the uniform expectation (m/q per bin)
    #
    # counts = {}  # bin -> count
    # for r in res_signed:
    #     ...
    # expected = m / q  # uniform expectation per bin
    # chi2 = sum((count - expected)^2 / expected for count in counts.values())
    # ---- END FILL IN ----

    # Step 4: Large chi2 means non-uniform => LWE
    # return 'LWE' if chi2 > chi2_threshold else 'Random'
    pass  # Replace with your implementation

# Test your distinguisher:
# A_test, b_test, _, _ = generate_lwe_samples(6, 30, 101, 2.0)
# print(chi_squared_distinguisher(A_test, b_test, 101, chi2_threshold=150))

Exercise 3: LWE Parameter Exploration (Independent)

Task: Write a complete experiment that explores the "security frontier" of LWE.

  1. For $n \in \{4, 8, 12, 16\}$ and $\sigma \in \{1, 2, 4, 8\}$, with $q$ the smallest prime $\geq 4n\sigma$:

    • Generate 20 LWE instances.

    • For each, attempt to recover $\mathbf{s}$ using the LLL + Babai approach from Section 6.

    • Record the success rate.

  2. Produce a heatmap (or table) showing success rate as a function of $(n, \sigma)$.

  3. Answer these questions:

    • At what point does the LLL attack stop working?

    • How does this relate to the ratio $\sigma / q$?

    • What is the minimum $n$ for which the attack never succeeds (for any $\sigma$ tested)?

# EXERCISE 3 - INDEPENDENT
# Write your solution here.
#
# Skeleton:
# results = {}
# for n_val in [4, 8, 12, 16]:
#     for sigma_val in [1, 2, 4, 8]:
#         q_val = next_prime(4 * n_val * sigma_val)
#         ...
#         results[(n_val, sigma_val)] = success_rate
#
# Print or plot results as a heatmap.

Summary

| Concept | Key idea |
|---|---|
| Noise transforms difficulty | Without noise, $\mathbf{b} = A\mathbf{s}$ is trivially solvable. With noise, $\mathbf{b} = A\mathbf{s} + \mathbf{e}$ becomes conjectured hard, even for quantum computers. |
| Decision vs. Search LWE | Distinguishing LWE from random and finding $\mathbf{s}$ are polynomially equivalent. Both are as hard as worst-case lattice problems via Regev's reduction. |
| LWE as a lattice problem | Solving LWE corresponds to finding the closest lattice point (CVP) in a $q$-ary lattice. LLL-based attacks work for toy parameters but fail as $n$ grows. |
| Parameter balance | The triple $(n, q, \sigma)$ must be balanced. Too little noise means insecure; too much noise means decryption fails. The sweet spot gives both security and correctness. |
| Foundation for post-quantum | Schemes like Kyber (ML-KEM) are built on structured variants of LWE. The noise is not a bug; it IS the security. |

Next: Ring-LWE --- where we add algebraic structure to make LWE practical.