GitHub Repository: duyuefeng0708/Cryptography-From-First-Principle
Path: blob/main/frontier/11-homomorphic-encryption/sage/11e-ckks-approximate-arithmetic.ipynb
Kernel: SageMath 10.5

Notebook 11e: CKKS, Approximate Arithmetic

Module 11: Homomorphic Encryption


Motivating Question. BGV and BFV encrypt integers: decryption is either exact or completely wrong. But machine learning, statistics, and signal processing work with real numbers. Can we build an FHE scheme for approximate computation, where small errors in the output are acceptable? CKKS (Cheon-Kim-Kim-Song, 2017) does exactly this: it encodes real numbers, allows homomorphic operations on them, and treats the noise as... an acceptable rounding error.


Prerequisites. You should be comfortable with:

  • BFV encryption and the scaling factor $\Delta$ (Notebook 11d)

  • Ring-LWE and polynomial rings (Module 08)

  • Noise budgets and why multiplication is expensive (Notebooks 11a, 11c)

Learning objectives. By the end of this notebook you will be able to:

  1. Encode real numbers as scaled integer polynomials.

  2. Encrypt and decrypt with CKKS, accepting approximate results.

  3. Perform homomorphic addition and multiplication on encrypted reals.

  4. Apply rescaling to manage the scale after multiplication.

  5. Track precision loss through a chain of operations.

1. The CKKS Insight: Noise Is the Rounding Error

Bridge from Notebook 11d. In BFV, decryption is exact: you either recover the correct integer plaintext, or (if noise exceeds the budget) you get garbage. There's no middle ground. CKKS flips this: decryption always gives an approximate result, and the noise simply adds to the approximation error.

The key insight:

| Scheme | Plaintext type | Noise treatment | Decryption |
|--------|----------------|-----------------|------------|
| BGV | Exact integers ($\mathbb{Z}_t$) | Must stay below threshold | Exact or fail |
| BFV | Exact integers ($\mathbb{Z}_t$) | Must stay below threshold | Exact or fail |
| CKKS | Approximate reals ($\mathbb{R}$) | Part of the approximation | Always approximate |

When we encode a real number $z = 3.14159$ at scale $\Delta = 1000$, we store $\lfloor \Delta \cdot z \rceil = 3142$. The rounding error ($\approx 0.00041$) is already an approximation. CKKS says: if we're already tolerating rounding errors, we can absorb the encryption noise into that same error budget.

# The CKKS encoding idea: scale, round, compute, unscale
# Demonstrate with a scalar (no polynomials yet)
Delta_demo = 1000   # scaling factor
z1 = 3.14159        # a real number
z2 = 2.71828        # another real number

# Encode: scale and round
m1 = round(Delta_demo * z1)  # 3142
m2 = round(Delta_demo * z2)  # 2718

print("=== CKKS Encoding (Scalar Demo) ===")
print(f"z1 = {z1}, encoded as m1 = round({Delta_demo} × {z1}) = {m1}")
print(f"z2 = {z2}, encoded as m2 = round({Delta_demo} × {z2}) = {m2}")
print()

# Addition: just add the scaled values
m_sum = m1 + m2
z_sum = m_sum / Delta_demo
print(f"Addition: m1 + m2 = {m_sum}, decode: {m_sum}/{Delta_demo} = {z_sum}")
print(f"Exact: {z1} + {z2} = {z1+z2:.5f}, error = {abs(z_sum - (z1+z2)):.5f}")
print()

# Multiplication: scale DOUBLES!
m_prod = m1 * m2
z_prod_wrong = m_prod / Delta_demo                 # wrong! scale is now Δ²
z_prod_right = m_prod / (Delta_demo * Delta_demo)  # correct: divide by Δ²
print(f"Multiplication: m1 × m2 = {m_prod}")
print(f"If we decode at scale Δ: {m_prod}/{Delta_demo} = {z_prod_wrong} ← WRONG!")
print(f"Must decode at scale Δ²: {m_prod}/{Delta_demo}² = {z_prod_right:.5f} ← correct")
print(f"Exact: {z1} × {z2} = {z1*z2:.5f}, error = {abs(z_prod_right - z1*z2):.5f}")
# The problem: after multiplication the scale is Δ², after two multiplications Δ⁴, etc.
# CKKS rescaling: divide by Δ to bring the scale back down
print("=== The Rescaling Problem ===")
print()
print(f"After encoding at scale Δ = {Delta_demo}:")
print(f"  m1 = {m1} (represents {z1})")
print(f"  m2 = {m2} (represents {z2})")
print()
print(f"After multiplication:")
print(f"  m1 × m2 = {m_prod} (scale is now Δ² = {Delta_demo}²)")
print()
print(f"Rescale: divide by Δ and round:")
m_rescaled = round(m_prod / Delta_demo)
z_rescaled = m_rescaled / Delta_demo
print(f"  round({m_prod} / {Delta_demo}) = {m_rescaled} (scale is back to Δ)")
print(f"  Decode: {m_rescaled} / {Delta_demo} = {z_rescaled}")
print(f"  Exact: {z1*z2:.5f}, error = {abs(z_rescaled - z1*z2):.5f}")
print()
print("Rescaling introduces a small rounding error, but keeps the scale manageable.")
print("This is the key CKKS innovation!")

Checkpoint 1. Multiplication doubles the scale: $\Delta \to \Delta^2$. Rescaling divides by $\Delta$ to bring it back. Each rescaling introduces a small error, but this is acceptable in approximate computation. This is exactly analogous to floating-point arithmetic, where every operation has a small rounding error.
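The floating-point parallel can be seen directly in plain Python: doubles also round on every operation, and the tiny errors accumulate gradually instead of failing outright.

```python
# Summing 0.1 a million times with ordinary doubles: 0.1 is not exactly
# representable in binary, so each addition rounds and the error accumulates.
x = 0.0
for _ in range(10**6):
    x += 0.1

print(x)                  # close to, but not exactly, 100000.0
print(abs(x - 100000.0))  # a small accumulated rounding error
```

This is the same bargain CKKS makes: accept a small, bounded error on every operation in exchange for being able to compute at all.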

2. CKKS Setup: Polynomial Rings

Like BGV and BFV, CKKS works in the ring $R_q = \mathbb{Z}_q[x]/(x^n+1)$. But the plaintext space is different:

| Scheme | Plaintext ring |
|--------|----------------|
| BGV/BFV | $R_t = \mathbb{Z}_t[x]/(x^n+1)$ (integer coefficients mod $t$) |
| CKKS | $R = \mathbb{Z}[x]/(x^n+1)$ (integer coefficients, representing scaled reals) |

In CKKS, there is no plaintext modulus $t$. Instead, real numbers are encoded by scaling by $\Delta$ and rounding to integers.

# CKKS parameters (toy-sized)
n = 4            # polynomial degree
q = 2^40         # ciphertext modulus (large, power of 2 for easy rescaling)
Delta = 2^20     # initial scaling factor
noise_bound = 3  # small noise for keygen/encryption

# We'll work with integers mod q
Rq.<x> = PolynomialRing(Zmod(q))
Phi = x^n + 1
Sq = Rq.quotient(Phi, 'X')

print(f"=== CKKS Parameters ===")
print(f"Ring dimension: n = {n}")
print(f"Ciphertext modulus: q = 2^40 = {q}")
print(f"Scaling factor: Δ = 2^20 = {Delta}")
print(f"No plaintext modulus t, reals are scaled and rounded!")
print(f"")
print(f"Precision: ~{RR(log(Delta, 10)):.0f} decimal digits ({RR(log(Delta, 2)):.0f} bits)")

3. CKKS Encoding and Decoding

In our simplified CKKS, we encode a vector of real numbers as polynomial coefficients:

$$\text{Encode}(z_0, z_1, \ldots, z_{n-1}) = \lfloor \Delta \cdot z_0 \rceil + \lfloor \Delta \cdot z_1 \rceil \, x + \ldots$$

$$\text{Decode}(m_0 + m_1 x + \ldots) = (m_0/\Delta, \; m_1/\Delta, \; \ldots)$$

Misconception alert. Real CKKS uses the canonical embedding (related to the DFT), which maps $n/2$ complex numbers into the $n$ polynomial coefficients. Our coefficient packing is a simplification that captures the essential ideas.
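For the curious, the canonical embedding itself fits in a few lines. The sketch below is a simplified, unoptimized rendering in plain Python; the names `emb_encode`, `emb_decode`, `n_emb`, and `Delta_emb` are local to this illustration (deliberately distinct from the notebook's coefficient-packing functions). It evaluates polynomials at the primitive $2n$-th roots of unity and inverts that map with a conjugate transpose.

```python
import cmath

n_emb = 8          # ring dimension for this sketch (power of 2); slots = n/2
Delta_emb = 2**10  # scaling factor for this sketch

# The canonical embedding evaluates at the primitive 2n-th roots of unity
# zeta_j = exp(i*pi*(2j+1)/n); note zeta_{n-1-j} is the conjugate of zeta_j.
roots = [cmath.exp(1j * cmath.pi * (2 * j + 1) / n_emb) for j in range(n_emb)]

def emb_encode(z):
    """Embed n/2 complex slots into n real, scaled polynomial coefficients."""
    # Extend with conjugates so the inverse embedding yields real coefficients
    z_full = list(z) + [w.conjugate() for w in reversed(z)]
    coeffs = []
    for k in range(n_emb):
        # sigma^{-1} = (1/n) * conjugate transpose of the evaluation matrix
        c = sum(z_full[j] * roots[j].conjugate() ** k for j in range(n_emb)) / n_emb
        coeffs.append(round(Delta_emb * c.real))
    return coeffs

def emb_decode(coeffs):
    """Evaluate the polynomial at the roots, rescale, and keep n/2 slots."""
    vals = [sum(coeffs[k] * r ** k for k in range(n_emb)) / Delta_emb for r in roots]
    return vals[: n_emb // 2]

z_slots = [3 + 4j, 2 - 1j, 0.5 + 0j, 1 + 1j]
z_round_trip = emb_decode(emb_encode(z_slots))
print(max(abs(a - b) for a, b in zip(z_slots, z_round_trip)))  # at most ~n/(2Δ)
```

Unlike coefficient packing, this slot packing makes homomorphic multiplication act element-wise on the $n/2$ complex slots, which is why real CKKS uses it.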

def ckks_encode(reals, scale):
    """Encode a list of real numbers into a polynomial.

    Scale each real by Δ and round to the nearest integer.
    """
    coeffs = [round(scale * float(z)) for z in reals]
    # Pad to n coefficients
    while len(coeffs) < n:
        coeffs.append(0)
    return Sq(Rq(coeffs)), scale

def ckks_decode(poly, scale):
    """Decode a polynomial back to real numbers."""
    coeffs = centered_coeffs_ckks(poly.lift(), q)
    return [RR(c) / RR(scale) for c in coeffs]

def centered_coeffs_ckks(poly, mod):
    """Get centered coefficients of a polynomial."""
    coeffs = []
    for c in poly.list():
        c_int = ZZ(c)
        if c_int > mod // 2:
            c_int -= mod
        coeffs.append(c_int)
    while len(coeffs) < n:
        coeffs.append(0)
    return coeffs

# Encode some real numbers
z = [3.14159, 2.71828, 1.41421, 1.73205]
m, scale = ckks_encode(z, Delta)
m_coeffs = centered_coeffs_ckks(m.lift(), q)
print(f"Original reals: {z}")
print(f"Encoded polynomial coefficients: {m_coeffs}")
print(f"Scale: Δ = {scale}")
print()

# Decode back
z_decoded = ckks_decode(m, scale)
print(f"Decoded reals: {[f'{v:.5f}' for v in z_decoded]}")
errors = [abs(z[i] - z_decoded[i]) for i in range(len(z))]
print(f"Encoding errors: {[f'{e:.1e}' for e in errors]}")
print(f"\nAll errors ≈ 1/Δ = {RR(1/Delta):.1e} (rounding error from encoding)")

4. CKKS Encryption and Decryption

CKKS encryption is similar to BFV:

Key generation: same as BFV: $s$ small, $a$ random, $b = -(a \cdot s + e)$

Encrypt encoded polynomial mm:

  • $\text{ct} = (b \cdot u + e_1 + m, \; a \cdot u + e_2)$

Decrypt: $v = c_0 + c_1 \cdot s \approx m + (\text{small noise})$

Then decode: divide coefficients by $\Delta$.

The crucial difference: there's no "mod $t$" or "scale and round" step in decryption. We just get back the noisy encoded polynomial, and the noise becomes part of the approximation error.

def rand_poly_ckks(ring, bound):
    """Random polynomial with small coefficients."""
    coeffs = [randint(-bound, bound) for _ in range(n)]
    return ring(coeffs)

def ckks_keygen():
    """Generate CKKS keys."""
    s = rand_poly_ckks(Sq, 1)  # ternary secret
    a = Sq(Rq([randint(0, int(q)-1) for _ in range(n)]))
    e = rand_poly_ckks(Sq, noise_bound)
    b = -(a * s + e)
    return s, (b, a)

def ckks_encrypt(m_poly, pk):
    """Encrypt an encoded polynomial."""
    b, a = pk
    u = rand_poly_ckks(Sq, 1)
    e1 = rand_poly_ckks(Sq, noise_bound)
    e2 = rand_poly_ckks(Sq, noise_bound)
    c0 = b * u + e1 + m_poly
    c1 = a * u + e2
    return (c0, c1)

def ckks_decrypt(ct, s):
    """Decrypt a CKKS ciphertext (returns encoded polynomial)."""
    c0, c1 = ct
    return c0 + c1 * s

# Generate keys
sk, pk = ckks_keygen()
print("CKKS keys generated.")
print(f"Secret key: {centered_coeffs_ckks(sk.lift(), q)}")
# Full pipeline: encode → encrypt → decrypt → decode
z = [3.14159, 2.71828, 1.41421, 1.73205]

# Encode
m, scale = ckks_encode(z, Delta)

# Encrypt
ct = ckks_encrypt(m, pk)

# Decrypt
m_dec = ckks_decrypt(ct, sk)

# Decode
z_dec = ckks_decode(m_dec, scale)

print(f"Original:  {z}")
print(f"Decrypted: {[f'{v:.5f}' for v in z_dec]}")
errors = [abs(z[i] - z_dec[i]) for i in range(len(z))]
print(f"Errors: {[f'{e:.1e}' for e in errors]}")
print(f"\nMax error: {max(errors):.1e}")
print(f"Encoding precision: 1/Δ = {RR(1/Delta):.1e}")
print(f"\nThe errors are ~ 1/Δ (encoding) + noise/Δ (encryption).")
print(f"Since noise ≈ {noise_bound} and Δ = {Delta}, noise/Δ ≈ {RR(noise_bound/Delta):.1e}")
print(f"The encryption noise is negligible compared to the encoding precision.")

Checkpoint 2. In CKKS, the encryption noise ($\sim B/\Delta$) is absorbed into the encoding's approximation error ($\sim 1/\Delta$). Unlike BGV/BFV, where decryption either succeeds exactly or fails catastrophically, CKKS decryption always gives an approximate result. The quality degrades gradually as noise grows.
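A scalar caricature of this graceful degradation (not the full scheme; `D` here is just the notebook's $\Delta = 2^{20}$, and the injected `noise` values are arbitrary): adding noise to the encoded integer shifts the decoded value by roughly $\text{noise}/\Delta$, with no failure cliff.

```python
# Decode m + noise for growing noise: the error grows smoothly (≈ noise/Δ),
# unlike BGV/BFV where crossing a threshold destroys the plaintext entirely.
D = 2**20
z_s = 3.14159
m_s = round(D * z_s)

errors = []
for noise in [1, 10, 100, 1000, 10000]:
    decoded = (m_s + noise) / D
    errors.append(abs(decoded - z_s))
    print(f"noise = {noise:>5}: decoded = {decoded:.6f}, error = {errors[-1]:.2e}")
```

Even at noise 10000 the decoded value is still correct to about two decimal places; in BGV/BFV, noise past the threshold would yield an unrelated plaintext.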

5. Homomorphic Addition

Addition is straightforward: add the ciphertext components. The scale stays the same.

def ckks_add(ct1, ct2):
    """Homomorphic addition."""
    return (ct1[0] + ct2[0], ct1[1] + ct2[1])

# Add two encrypted real vectors
z1 = [3.14, 2.71, 1.41, 1.73]
z2 = [1.00, 0.50, 2.00, 0.25]
expected = [z1[i] + z2[i] for i in range(n)]

m1, s1 = ckks_encode(z1, Delta)
m2, s2 = ckks_encode(z2, Delta)
ct1 = ckks_encrypt(m1, pk)
ct2 = ckks_encrypt(m2, pk)

ct_sum = ckks_add(ct1, ct2)
z_sum = ckks_decode(ckks_decrypt(ct_sum, sk), Delta)

print(f"z1 = {z1}")
print(f"z2 = {z2}")
print(f"Expected:  {expected}")
print(f"Decrypted: {[f'{v:.5f}' for v in z_sum]}")
errors = [abs(expected[i] - z_sum[i]) for i in range(n)]
print(f"Errors: {[f'{e:.1e}' for e in errors]}")
print(f"\nScale after addition: still Δ = {Delta} (unchanged)")

6. Homomorphic Multiplication and the Scale Problem

This is where CKKS gets interesting. When we multiply two ciphertexts:

$$\text{Enc}(\Delta \cdot z_1) \times \text{Enc}(\Delta \cdot z_2) \approx \text{Enc}(\Delta^2 \cdot z_1 z_2)$$

The scale has doubled from $\Delta$ to $\Delta^2$! If we do another multiplication, it becomes $\Delta^4$, then $\Delta^8$, etc. The scale grows exponentially.

Rescaling is the solution: after multiplication, divide the ciphertext by $\Delta$ to bring the scale back down to $\Delta$.

def ckks_mul(ct1, ct2):
    """Homomorphic multiplication (produces a degree-2 ciphertext).

    After multiplication, the scale goes from Δ to Δ².
    """
    c0, c1 = ct1
    c0p, c1p = ct2
    d0 = c0 * c0p
    d1 = c0 * c1p + c1 * c0p
    d2 = c1 * c1p
    return (d0, d1, d2)

def ckks_decrypt_deg2(ct, s):
    """Decrypt a degree-2 CKKS ciphertext."""
    d0, d1, d2 = ct
    return d0 + d1 * s + d2 * s * s

# Multiply two encrypted values
z1 = [3.0, 2.0, 0.0, 0.0]
z2 = [4.0, 5.0, 0.0, 0.0]
m1, _ = ckks_encode(z1, Delta)
m2, _ = ckks_encode(z2, Delta)
ct1 = ckks_encrypt(m1, pk)
ct2 = ckks_encrypt(m2, pk)

ct_prod = ckks_mul(ct1, ct2)
m_prod = ckks_decrypt_deg2(ct_prod, sk)

# Decode at scale Δ² (since multiplication doubled the scale)
z_prod = ckks_decode(m_prod, Delta * Delta)

# Note: coefficient packing means this is a RING product, not element-wise:
# (3 + 2x)(4 + 5x) = 12 + 15x + 8x + 10x² = 12 + 23x + 10x²
# (degree < 4, so reduction mod x^4 + 1 changes nothing)
# Expect decoded coefficients [12, 23, 10, 0]
print(f"z1 = {z1}")
print(f"z2 = {z2}")
print(f"Decrypted product: {[f'{v:.4f}' for v in z_prod]}")
print(f"\nScale before: Δ = {Delta}")
print(f"Scale after multiplication: Δ² = {Delta * Delta}")
print(f"We HAD to decode at scale Δ² to get correct results.")
print(f"\nProblem: if we multiply again, scale becomes Δ⁴ = {Delta**4}")
print(f"This grows exponentially! We need rescaling.")

7. Rescaling: The CKKS Innovation

After multiplication, we rescale by dividing the ciphertext coefficients by $\Delta$ (and rounding):

$$\text{Rescale}(\text{ct}, \Delta) = \lfloor \text{ct} / \Delta \rceil$$

This:

  • Reduces the scale from $\Delta^2$ back to $\Delta$

  • Reduces the ciphertext modulus from $q$ to $q/\Delta$

  • Introduces a small rounding error

Just like BGV's modulus switching, but simpler: always divide by the same factor $\Delta$.

def ckks_rescale_poly(poly, scale, old_q, new_q):
    """Rescale a polynomial: divide coefficients by scale, working in new_q."""
    coeffs = centered_coeffs_ckks(poly.lift(), old_q)
    scaled_coeffs = [round(RR(c) / RR(scale)) for c in coeffs]
    # Now create the polynomial in the new (smaller) ring
    Rq_new.<x_new> = PolynomialRing(Zmod(new_q))
    Sq_new = Rq_new.quotient(x_new^n + 1, 'X_new')
    return Sq_new(Rq_new(scaled_coeffs)), Sq_new, new_q

# Demonstrate rescaling with a scalar example first
print("=== Rescaling (Scalar Walkthrough) ===")
print()
z_a, z_b = 3.5, 2.0
m_a = round(Delta * z_a)  # 3670016
m_b = round(Delta * z_b)  # 2097152
m_product = m_a * m_b     # scale is Δ²
print(f"z_a = {z_a}, encoded: {m_a} (scale Δ = {Delta})")
print(f"z_b = {z_b}, encoded: {m_b} (scale Δ = {Delta})")
print(f"Product: {m_a} × {m_b} = {m_product} (scale Δ² = {Delta**2})")
print()
m_rescaled = round(RR(m_product) / RR(Delta))
print(f"Rescale: round({m_product} / {Delta}) = {m_rescaled}")
print(f"Decoded: {m_rescaled} / {Delta} = {RR(m_rescaled) / RR(Delta):.4f}")
print(f"Expected: {z_a * z_b}")
print(f"Error: {abs(RR(m_rescaled) / RR(Delta) - z_a * z_b):.1e}")
print()
print(f"After rescaling:")
print(f"  Scale: Δ² → Δ (back to original)")
print(f"  Modulus: effectively reduced by factor Δ")
print(f"  Small rounding error introduced")
# Full CKKS rescaling on degree-1 ciphertexts
# For simplicity, we'll demonstrate with degree-1 cts (simulating relinearization)

def ckks_rescale_ct(ct, scale, old_q, new_q):
    """Rescale a degree-1 ciphertext."""
    c0, c1 = ct
    Rq_new.<x_new> = PolynomialRing(Zmod(new_q))
    Sq_new = Rq_new.quotient(x_new^n + 1, 'X_new')
    c0_coeffs = centered_coeffs_ckks(c0.lift(), old_q)
    c1_coeffs = centered_coeffs_ckks(c1.lift(), old_q)
    c0_new = Sq_new(Rq_new([round(RR(c) / RR(scale)) for c in c0_coeffs]))
    c1_new = Sq_new(Rq_new([round(RR(c) / RR(scale)) for c in c1_coeffs]))
    return (c0_new, c1_new), Sq_new, new_q

# Demonstrate the full chain: encode → encrypt → multiply → rescale → decrypt → decode
z1 = [3.5, 0.0, 0.0, 0.0]
z2 = [2.0, 0.0, 0.0, 0.0]
m1, _ = ckks_encode(z1, Delta)
m2, _ = ckks_encode(z2, Delta)
ct1 = ckks_encrypt(m1, pk)
ct2 = ckks_encrypt(m2, pk)
print(f"Encoded z1={z1[0]}: coeffs = {centered_coeffs_ckks(m1.lift(), q)[:2]}")
print(f"Encoded z2={z2[0]}: coeffs = {centered_coeffs_ckks(m2.lift(), q)[:2]}")
print()

# Before multiplication: scale = Δ, modulus = q
print(f"Before multiplication: scale = Δ = {Delta}, modulus = q = 2^40")

# Multiply (degree-2 ct)
ct_prod_d2 = ckks_mul(ct1, ct2)
print(f"After multiplication: scale = Δ² = {Delta**2}, modulus = q = 2^40")

# Decrypt at scale Δ² (before rescaling)
m_before_rescale = ckks_decrypt_deg2(ct_prod_d2, sk)
z_before = ckks_decode(m_before_rescale, Delta * Delta)
print(f"Decrypt at scale Δ²: {z_before[0]:.6f} (expected {z1[0]*z2[0]})")

Checkpoint 3. Rescaling is the heart of CKKS: after every multiplication, divide the ciphertext by $\Delta$ to restore the original scale. This consumes one level of the modulus (from $q$ to $q/\Delta$), just like BGV consumes one modulus in its chain. The number of multiplications is limited to $\log_\Delta(q) - 1$.
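That budget fits in a one-line helper (an illustration of the convention above, not a library function): each multiplication's rescale consumes $\log_2 \Delta$ bits of modulus, and one final level must remain to hold the result.

```python
# Multiplicative depth available for a given modulus/scale (in bits):
# one rescale of delta_bits per multiplication, one final level kept.
def ckks_depth(q_bits, delta_bits):
    return q_bits // delta_bits - 1

print(ckks_depth(40, 20))    # this notebook's toy parameters: depth 1
print(ckks_depth(440, 40))   # a larger budget: depth 10
```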

8. Precision Through a Chain of Operations

Let's track how precision degrades as we chain multiple operations. Each operation adds a little error.

# Track precision through a chain of additions
import random

print("=== Precision vs Number of Additions ===")
print()
values = [random.uniform(0.5, 5.0) for _ in range(50)]
cts_chain = []
for v in values:
    m_v, _ = ckks_encode([v, 0.0, 0.0, 0.0], Delta)
    cts_chain.append(ckks_encrypt(m_v, pk))

ct_running = cts_chain[0]
running_sum = values[0]
print(f"{'# adds':>7} | {'true sum':>10} | {'decrypted':>12} | {'error':>10}")
for i in range(1, len(values)):
    ct_running = ckks_add(ct_running, cts_chain[i])
    running_sum += values[i]
    if i in [1, 5, 10, 25, 49]:
        m_r = ckks_decrypt(ct_running, sk)
        z_r = ckks_decode(m_r, Delta)
        err = abs(z_r[0] - running_sum)
        print(f"{i:>7} | {running_sum:>10.4f} | {z_r[0]:>12.4f} | {err:>10.1e}")
print(f"\nPrecision stays excellent through additions: noise adds linearly.")
# Show what happens with multiplications (scale doubling)
print("=== Scale Growth Without Rescaling ===")
print()
print(f"Starting scale: Δ = {Delta} = 2^{int(log(Delta,2))}")
print(f"Modulus: q = {q} = 2^{int(log(q,2))}")
print()
bits_used = int(log(Delta, 2))
bits_total = int(log(q, 2))
for i in range(5):
    fits = "Yes" if bits_used <= bits_total else "OVERFLOW"
    print(f"After {i} multiplications: scale = 2^{bits_used}, fits in q? {fits}")
    bits_used *= 2
print(f"\nWithout rescaling, scale doubles with each multiplication.")
print(f"With rescaling, we consume {int(log(Delta,2))} bits of modulus per multiplication.")
print(f"Max multiplications with rescaling: q_bits/Δ_bits - 1 = "
      f"{bits_total}/{int(log(Delta,2))} - 1 = {bits_total // int(log(Delta,2)) - 1}")

9. The CKKS Modulus Chain

Like BGV, CKKS uses a modulus chain, but for a different reason. Each rescaling divides the modulus by Δ\Delta:

$$q_L \xrightarrow{\text{rescale}} q_{L-1} = q_L / \Delta \xrightarrow{\text{rescale}} q_{L-2} = q_L / \Delta^2 \xrightarrow{\text{rescale}} \ldots$$

With $q = 2^{40}$ and $\Delta = 2^{20}$, we get exactly 1 multiplication (one rescale from $2^{40}$ to $2^{20}$). For deeper computations, we need larger $q$.

# Visualize CKKS modulus chain for different depths
print("=== CKKS Modulus Chain ===")
print()
Delta_bits = 20  # bits per level
for depth in [1, 2, 4, 8]:
    q_bits_needed = Delta_bits * (depth + 1)  # +1 for the final level
    print(f"Depth {depth}: need q ≥ 2^{q_bits_needed} ({q_bits_needed} bits)")
    chain = []
    for i in range(depth + 1):
        level_bits = q_bits_needed - i * Delta_bits
        chain.append(level_bits)
    print(f"  Chain: {' → '.join(f'2^{b}' for b in chain)}")
    print()
print("Each arrow represents one rescaling (after a multiplication).")
print("Deeper computations need exponentially larger starting modulus q.")
print("This is the fundamental cost of homomorphic computation.")

10. The FHE Landscape: BGV, BFV, CKKS

With all three schemes under our belt, let's see the full picture.

print("="*75)
print("The Three Pillars of Modern FHE")
print("="*75)
print()
table = [
    ("Plaintext type",   "Exact integers",       "Exact integers",   "Approximate reals"),
    ("Message encoding", "m + t·e (LSBs)",        "Δ·m + e (MSBs)",   "Δ·z + e (scaled)"),
    ("Decryption",       "v mod t",               "round(t·v/q)",     "v / Δ (approx)"),
    ("Noise handling",   "Mod switching",         "Scale-invariant",  "Rescaling"),
    ("After multiply",   "Noise grows, switch q", "Rescale by t/q",   "Rescale by 1/Δ"),
    ("Best for",         "Integer circuits",      "Integer circuits", "ML, statistics"),
    ("Key library",      "HElib",                 "Microsoft SEAL",   "SEAL, OpenFHE, Lattigo"),
]
print(f"  {'':20s} {'BGV':^20s} {'BFV':^20s} {'CKKS':^20s}")
print(f"  {'':20s} {'─'*20} {'─'*20} {'─'*20}")
for label, bgv, bfv, ckks in table:
    print(f"  {label:.<20s} {bgv:^20s} {bfv:^20s} {ckks:^20s}")
print()
print("All three are based on Ring-LWE and support leveled FHE.")
print("The choice depends on your application's data types.")
# Real-world CKKS applications
print("=== CKKS in the Real World ===")
print()
apps = [
    ("ML inference",
     "Classify encrypted medical images without seeing them",
     "Approximate arithmetic on model weights"),
    ("Genomic analysis",
     "Compute on encrypted DNA sequences for disease risk",
     "Statistical correlations on encrypted data"),
    ("Financial analytics",
     "Aggregate encrypted portfolio data across institutions",
     "Sums, averages, variances on encrypted numbers"),
    ("Signal processing",
     "Filter encrypted sensor data in IoT applications",
     "FFT, convolution on encrypted signals"),
]
for name, use_case, why_ckks in apps:
    print(f"  {name}")
    print(f"    Use: {use_case}")
    print(f"    Why CKKS: {why_ckks}")
    print()

Crypto foreshadowing. CKKS is the backbone of privacy-preserving machine learning. Combined with techniques from Module 12 (MPC), it enables scenarios where multiple parties can jointly train models or run inference on sensitive data, without any party seeing the others' data.

11. Exercises

Exercise 1 (Worked): Precision vs Scale

Problem. Encode $\pi = 3.14159$ at three different scales: $\Delta = 10$, $\Delta = 1000$, $\Delta = 10^6$. What is the encoding error at each scale?

Solution:

# Exercise 1: Worked solution
z_pi = 3.14159
print(f"Encoding π = {z_pi} at different scales:")
print()
for scale_ex in [10, 1000, 10^6]:
    encoded = round(scale_ex * z_pi)
    decoded = RR(encoded) / RR(scale_ex)
    error = abs(decoded - z_pi)
    bits = RR(log(scale_ex, 2))
    print(f"  Δ = {scale_ex}: encoded = {encoded}, decoded = {float(decoded):.6f}, "
          f"error = {float(error):.1e} ({bits:.0f} bits of precision)")
print()
print("Larger Δ → more precision (smaller encoding error)")
print("But larger Δ → fewer multiplications before q runs out")
print("This is the precision-depth tradeoff in CKKS.")

Exercise 2 (Guided): Weighted Average on Encrypted Data

Problem. Compute the weighted average of three encrypted values: $\bar{z} = (w_1 z_1 + w_2 z_2 + w_3 z_3) / (w_1 + w_2 + w_3)$, where the weights are public (not encrypted).

Fill in the TODOs:

# Exercise 2: fill in the TODOs

# Values and weights
# z_vals = [10.5, 20.3, 15.7]
# weights = [0.2, 0.5, 0.3]

# TODO 1: Encode and encrypt each z_val
# cts_ex = []
# for z_v in z_vals:
#     m_v, _ = ckks_encode([z_v, 0.0, 0.0, 0.0], Delta)
#     cts_ex.append(ckks_encrypt(m_v, pk))

# TODO 2: Multiply each ciphertext by its weight (plaintext-ciphertext multiplication)
# Hint: encode the weight at scale Δ (a fractional weight rounds to 0 at scale 1!)
# and multiply both components: (w_poly*c0, w_poly*c1). The scale becomes Δ².
# def ckks_mul_plain(ct, w):
#     w_poly, _ = ckks_encode([w, 0.0, 0.0, 0.0], Delta)
#     return (ct[0] * w_poly, ct[1] * w_poly)

# TODO 3: Sum the weighted ciphertexts using ckks_add

# TODO 4: Decrypt, decode at scale Δ², and divide by the sum of weights
# expected = sum(w*z for w, z in zip(weights, z_vals)) / sum(weights)
# print(f"Expected weighted average: {expected:.4f}")

Exercise 3 (Independent): Precision-Depth Tradeoff

Problem.

  1. For a modulus $q = 2^{200}$ and scale $\Delta = 2^{40}$, how many multiplications (with rescaling) can you perform?

  2. If instead you use $\Delta = 2^{20}$, how does this change the number of multiplications? What happens to precision?

  3. A neural network inference requires 10 multiplication depths. What minimum $\log_2(q)$ do you need for $\Delta = 2^{30}$? How does this compare to BFV's parameter requirements for the same circuit?

# Exercise 3: write your solution here

Summary

| Concept | Key Fact |
|---------|----------|
| CKKS plaintext | Real/complex numbers, encoded by scaling by $\Delta$ and rounding |
| Noise treatment | Noise becomes part of the approximation error, not a failure mode |
| Encoding | $m = \lfloor \Delta \cdot z \rceil$; precision is $\sim 1/\Delta$ |
| Multiplication | Scale doubles: $\Delta \to \Delta^2$; must rescale |
| Rescaling | Divide ciphertext by $\Delta$, reducing scale and modulus |
| Modulus chain | $q = \Delta^{L+1}$; each rescaling consumes one level |
| Tradeoff | Larger $\Delta$ = more precision but fewer multiplications |

CKKS is the only major FHE scheme designed for approximate real-number arithmetic. By treating noise as rounding error rather than a catastrophic failure, it enables privacy-preserving machine learning, statistical analysis, and signal processing on encrypted data.


Module 11 complete! You've now seen the full FHE landscape: from partially homomorphic schemes (Paillier, ElGamal) through the integer FHE schemes (BGV, BFV) to approximate arithmetic (CKKS). Next: Module 12: Multi-Party Computation