Path: blob/main/reddit_content.json
{1"generated": "2025-12-18T04:56:27.998075",2"total_posts": 404,3"subreddit_counts": {4"CoCalc": 391,5"SageMath": 136},7"posts": [8{9"source": "radiative_transfer",10"content_type": "template",11"subreddit": "CoCalc",12"title": "Learn Radiative Transfer Beer-Lambert Law and Greenhouse Effect with Interactive Python",13"body": "## What You'll Learn\n\nAnalysis of radiative transfer in the atmosphere including absorption, scattering, and the greenhouse effect.\n\n\n**Key equations you'll work with:**\n- I(z) = I₀ e^-τ\n- \\τ\n- I/I₀\n\n\n## What You'll See\n\n**Visualization:** absorption spectra and transmission curves\n\n- See how different gases absorb at specific wavelengths\n- Observe the greenhouse effect in spectral data\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/atmospheric-science/radiative_transfer.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",14"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/atmospheric-science/radiative_transfer.tex",15"category": "templates",16"date": "2025-12-19",17"time": "09:00"18},19{20"source": "segmentation",21"content_type": "template",22"subreddit": "CoCalc",23"title": "Learn Image Segmentation: Thresholding, Clustering, and Graph-Based Methods with Interactive Python",24"body": "## What You'll Learn\n\na comprehensive analysis of image segmentation techniques including thresholding methods (Otsu's method, adaptive thresholding), region-based approaches (region growing, split-and-merge), clustering algorithms (K-means, mean-shift, SLIC superpixels), and graph-based methods (graph cuts, normalized cuts, watershed transform). 
We evaluate segmentation quality using Intersection over Union (IoU), Dice coefficient, and boundary F-score metrics. Computational experiments demonstrate the strengths and limitations of each approach on synthetic and realistic test images.\n\n\n**Key equations you'll work with:**\n- I: Ω → R^d\n- Ω Z²\n- d\n\n\n## What You'll See\n\n**Visualization:** segmented regions and boundary contours\n\n- See image partitioned into meaningful regions\n- Observe threshold and clustering effects\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/image-processing/segmentation.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",25"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/image-processing/segmentation.tex",26"category": "templates",27"date": "2025-12-19",28"time": "09:00"29},30{31"source": "text_analysis",32"content_type": "template",33"subreddit": "CoCalc",34"title": "Learn Text Analysis: TF-IDF Vectorization and Document Similarity with Interactive Python",35"body": "## What You'll Learn\n\nA hands-on exploration of Text Analysis: TF-IDF Vectorization and Document Similarity.\n\n\n**Key equations you'll work with:**\n- f_t,d\n- t\n- d\n\n\n## What You'll See\n\n**Visualization:** word frequency and document similarity matrices\n\n- See text patterns and clustering\n- Observe topic structures in documents\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/nlp/text_analysis.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",36"link": 
"https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/nlp/text_analysis.tex",37"category": "templates",38"date": "2025-12-20",39"time": "09:00"40},41{42"source": "vibration_analysis",43"content_type": "template",44"subreddit": "CoCalc",45"title": "Learn Vibration Analysis: SDOF Systems and Modal Analysis with Interactive Python",46"body": "## What You'll Learn\n\ncomputational analysis of mechanical vibrations including free and forced response of single degree of freedom (SDOF) systems, damping effects, frequency response, and modal analysis. Python-based computations provide quantitative analysis with dynamic visualization.\n\n\n**Key equations you'll work with:**\n- ωₙ\n- fₙ\n- ζ\n\n\n## What You'll See\n\n**Visualization:** frequency response and mode shapes\n\n- See resonance peaks and damping effects\n- Observe natural frequencies and modes\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/mechanical-engineering/vibration_analysis.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",47"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/mechanical-engineering/vibration_analysis.tex",48"category": "templates",49"date": "2025-12-20",50"time": "09:00"51},52{53"source": "risk_management",54"content_type": "template",55"subreddit": "CoCalc",56"title": "Learn Quantitative Risk Management: Value at Risk and Coherent Risk Measures with Interactive Python",57"body": "## What You'll Learn\n\na comprehensive analysis of modern risk measurement techniques for financial portfolios. 
We implement three approaches to Value at Risk (VaR) calculation—historical simulation, variance-covariance (parametric), and Monte Carlo methods—and examine the coherent risk measure Expected Shortfall (CVaR). We model time-varying volatility using GARCH(1,1) processes, analyze portfolio risk decomposition, and conduct stress testing under historical crisis scenarios. The analysis demonstrates that CVaR provides superior risk characterization for heavy-tailed distributions and that dynamic volatility models substantially improve risk forecasts during turbulent periods.\n\n\n**Key equations you'll work with:**\n- F\n- α (0,1)\n- F^-1\n\n\n## What You'll See\n\n**Visualization:** VaR distributions and tail risk measures\n\n- See loss distributions and extreme events\n- Observe fat-tailed risk behavior\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/financial-math/risk_management.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",58"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/financial-math/risk_management.tex",59"category": "templates",60"date": "2025-12-21",61"time": "09:00"62},63{64"source": "boundary_element_method_acoustics",65"content_type": "template",66"subreddit": "CoCalc",67"title": "TITLE: I built a Boundary Element Method solver for acoustic scattering in Python - here's what I learned",68"body": "BODY:\n\n**ELI5 Version:**\nImagine you drop a pebble in a pond and watch the ripples hit a rock. Some waves bounce back, some bend around - that's scattering! 
I built a computer simulation that does this for sound waves hitting a cylinder.\n\n**The Technical Stuff:**\n\nThe Boundary Element Method (BEM) is clever because instead of calculating the entire space (like Finite Element Method does), it only looks at the boundary of the obstacle. This reduces a 2D problem to a 1D problem!\n\n**How it works:**\n1. Sound waves follow the Helmholtz equation\n2. We use Green's functions (mathematical \"building blocks\" for wave solutions)\n3. The Kirchhoff-Helmholtz integral lets us compute the field anywhere from just the boundary values\n\n**What I found:**\n- Clear acoustic shadow behind the cylinder\n- Standing wave pattern where incident and reflected waves interfere\n- Diffraction fringes showing waves bending around the obstacle\n- My numerical solution matched the analytical solution within 0.5% error!\n\n**Code highlights:**\n- Used scipy.special for Hankel functions (they describe outgoing waves)\n- Collocation method with constant elements for discretization\n- Dense matrix solve (O(N cubed) - this is a known BEM limitation)\n\nThe interactive notebook with full code and visualizations is here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/boundary_element_method_acoustics.ipynb\n\nHappy to answer questions!",69"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/boundary_element_method_acoustics.ipynb",70"category": "general",71"date": "2025-12-21",72"time": "09:00"73},74{75"source": "time_series",76"content_type": "template",77"subreddit": "CoCalc",78"title": "Learn Time Series Analysis and Forecasting with Interactive Python",79"body": "## What You'll Learn\n\nThis document presents a comprehensive analysis of time series data, including decomposition into trend, seasonality, and residual components, autocorrelation analysis, ARIMA modeling, and forecasting with prediction intervals. 
The analysis demonstrates statistical tests for stationarity and model selection criteria.\n\n\n**Key equations you'll work with:**\n- \\y_t\\\n- T_t\n- S_t\n\n\n## What You'll See\n\n**Visualization:** autocorrelation plots and forecasts with uncertainty\n\n- See trend, seasonality, and residual decomposition\n- Observe temporal patterns and prediction intervals\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/data-science/time_series.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",80"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/data-science/time_series.tex",81"category": "templates",82"date": "2025-12-22",83"time": "09:00"84},85{86"source": "thermodynamics_cycle",87"content_type": "template",88"subreddit": "CoCalc",89"title": "Learn Thermodynamic Cycles: Efficiency Analysis with Interactive Python",90"body": "## What You'll Learn\n\ncomputational analysis of thermodynamic power cycles including Carnot, Otto, Diesel, and Rankine cycles. We examine ideal and actual cycle efficiencies, P-v and T-s diagrams, and parametric studies. 
Python-based computations provide quantitative analysis with dynamic visualization.\n\n\n**Key equations you'll work with:**\n- T_H\n- T_L\n- s\n\n\n## What You'll See\n\n**Visualization:** P-V and T-s diagrams for power cycles\n\n- See Carnot and Rankine cycle processes\n- Observe thermal efficiency calculations\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/mechanical-engineering/thermodynamics_cycle.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",91"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/mechanical-engineering/thermodynamics_cycle.tex",92"category": "templates",93"date": "2025-12-22",94"time": "09:00"95},96{97"source": "point_processes",98"content_type": "template",99"subreddit": "CoCalc",100"title": "Learn Point Processes: Poisson and Hawkes Process Analysis with Interactive Python",101"body": "## What You'll Learn\n\na comprehensive computational analysis of temporal point processes, focusing on homogeneous and inhomogeneous Poisson processes and self-exciting Hawkes processes. We implement simulation algorithms for each process type, analyze the statistical properties of inter-arrival times and counting processes, estimate intensity functions from observed data, and examine the clustering behavior characteristic of Hawkes processes. 
The analysis demonstrates goodness-of-fit testing through residual analysis and time-rescaling methods, with applications to modeling earthquake aftershock sequences and financial market microstructure.\n\n\n**Key equations you'll work with:**\n- \\t₁, t₂, t₃, \\\n- 0 < t₁ < t₂ < t₃ < ·s\n- N(t) = ∑_i 1_tᵢ ≤ t\n\n\n## What You'll See\n\n**Visualization:** event sequences and intensity functions\n\n- See arrival patterns and clustering\n- Observe Poisson and self-exciting processes\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/probability/point_processes.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",102"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/probability/point_processes.tex",103"category": "templates",104"date": "2025-12-23",105"time": "09:00"106},107{108"source": "nonlinear_optics",109"content_type": "template",110"subreddit": "CoCalc",111"title": "Learn Nonlinear Optics: Second Harmonic Generation, Kerr Effect, and Optical Solitons with Interactive Python",112"body": "## What You'll Learn\n\na comprehensive computational analysis of nonlinear optical phenomena in χ⁽²⁾ and χ⁽³⁾ media. We examine second harmonic generation (SHG) with phase matching considerations, the optical Kerr effect and self-phase modulation, four-wave mixing processes, and optical soliton propagation governed by the nonlinear Schrödinger equation. 
Numerical simulations demonstrate conversion efficiencies, phase matching curves, soliton dynamics, and the interplay between dispersion and nonlinearity that enables lossless pulse propagation in optical fibers.\n\n\n**Key equations you'll work with:**\n- E\n- χ^(1)\n- χ^(2)\n\n\n## What You'll See\n\n**Visualization:** harmonic generation and phase matching\n\n- See frequency conversion effects\n- Observe intensity-dependent phenomena\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/photonics/nonlinear_optics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",113"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/photonics/nonlinear_optics.tex",114"category": "templates",115"date": "2025-12-23",116"time": "09:00"117},118{119"source": "evolutionary_dynamics",120"content_type": "template",121"subreddit": "CoCalc",122"title": "Learn Evolutionary Dynamics: Selection, Drift, and Fitness Landscapes with Interactive Python",123"body": "## What You'll Learn\n\nThis chapter explores the fundamental forces driving evolutionary change in populations. We analyze natural selection under various fitness schemes, examine the stochastic effects of genetic drift in finite populations, and investigate the interplay between mutation and selection. 
Computational simulations illustrate fitness landscapes, fixation probabilities, and the dynamics of allele frequency change under different evolutionary scenarios.\n\n\n**Key equations you'll work with:**\n- w\n- A\n- a\n\n\n## What You'll See\n\n**Visualization:** fitness landscapes and allele frequency trajectories\n\n- See selection driving allele frequencies over generations\n- Observe genetic drift and selection balance\n\n## Make It Yours\n\nChange population parameters, add new species, or simulate your biological system.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/biology/evolutionary_dynamics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",124"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/biology/evolutionary_dynamics.tex",125"category": "templates",126"date": "2025-12-24",127"time": "09:00"128},129{130"source": "weber_fechner",131"content_type": "template",132"subreddit": "CoCalc",133"title": "Learn Weber-Fechner Law JND with Interactive Python",134"body": "## What You'll Learn\n\nThis document presents a comprehensive computational analysis of classical psychophysical laws governing the relationship between physical stimulus intensity and perceived sensation magnitude. We implement Weber's Law for just noticeable differences, Fechner's logarithmic law, Stevens' power law, psychometric functions, and the method of constant stimuli. 
The analysis demonstrates how different sensory modalities exhibit distinct mathematical relationships between objective stimulus intensity and subjective sensation, with implications for sensory system design and perceptual modeling.\n\n\n**Key equations you'll work with:**\n- Δ I = k · I\n- k\n- S = c (I/I₀)\n\n\n## What You'll See\n\n**Visualization:** just-noticeable-difference curves\n\n- See perceptual scaling laws\n- Observe logarithmic perception\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/psychophysics/weber_fechner.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",135"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/psychophysics/weber_fechner.tex",136"category": "templates",137"date": "2025-12-24",138"time": "09:00"139},140{141"source": "image_filtering",142"content_type": "template",143"subreddit": "CoCalc",144"title": "Learn Image Filtering and Denoising: Comparative Analysis of Spatial and Frequency Domain Methods with Interactive Python",145"body": "## What You'll Learn\n\na comprehensive analysis of image filtering techniques for noise reduction and enhancement. We examine linear filters (Gaussian, box, Laplacian), frequency domain methods (Fourier-based filtering), and non-linear approaches (median, bilateral, non-local means). Computational analysis compares filtering performance using quantitative metrics including Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), and Structural Similarity Index (SSIM). 
Results demonstrate the trade-offs between noise suppression and edge preservation across different filter classes, with bilateral filtering achieving the optimal balance (PSNR improvement of 8.2 dB while preserving 94\\% edge contrast).\n\n\n**Key equations you'll work with:**\n- f(x,y)\n- g(x,y)\n- F\n\n\n## What You'll See\n\n**Visualization:** filtered images and frequency spectra\n\n- See noise removal and enhancement effects\n- Observe spatial vs frequency domain filtering\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/image-processing/image_filtering.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",146"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/image-processing/image_filtering.tex",147"category": "templates",148"date": "2025-12-25",149"time": "09:00"150},151{152"source": "chaos_control_ogy_method",153"content_type": "template",154"subreddit": "CoCalc",155"title": "Implementing OGY Chaos Control: Taming the Hénon Map with Python",156"body": "I built a Python implementation of the OGY (Ott-Grebogi-Yorke) method for controlling chaos, and wanted to share what I learned.\n\n**The Core Idea**\n\nChaotic systems seem unpredictable, but they contain infinitely many unstable periodic orbits embedded within them. The OGY method (1990) showed that with small, well-timed parameter nudges, you can stabilize any of these orbits.\n\n**The Hénon Map**\n\nI used the classic Hénon map as a test case:\n- xn₊₁ = 1 - ax² + y\n- yn₊₁ = bx\n\nWith a=1.4 and b=0.3, this produces a chaotic strange attractor.\n\n**How the Control Works**\n\n1. Find the fixed point (period-1 orbit): x_F ≈ 0.6314\n2. Compute the Jacobian matrix at that point\n3. 
Get eigenvalues: one stable (|λ_s| < 1), one unstable (|λ_u| > 1)\n4. When trajectory gets close to the target, apply: δa = -K · (x - x_F)\n\nThe control gain K comes from the eigenstructure of the system.\n\n**Results**\n\n- Control kicks in when distance < 0.15 from fixed point\n- Parameter perturbations stay within ±0.05\n- System rapidly stabilizes after ~few hundred iterations\n- Mean control effort was tiny (|δa| ≈ 0.001)\n\n**What I Learned**\n\nThe OGY method exploits the fact that sensitivity cuts both ways. The same property that makes chaos \"chaotic\" also means small inputs have large effects—which you can use for control rather than suffering from it.\n\nThis has real applications: laser stabilization, cardiac arrhythmia control, chemical reactions.\n\n**View the full notebook with code and visualizations:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/chaos_controlₒgy_method.ipynb\n\nHappy to answer questions about the implementation!\n\n---",157"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/chaos_controlₒgy_method.ipynb",158"category": "mathematics",159"date": "2025-12-25",160"time": "09:00"161},162{163"source": "fluid_flow",164"content_type": "template",165"subreddit": "CoCalc",166"title": "Learn Pipe Flow Analysis: Darcy-Weisbach and Friction Factors with Interactive Python",167"body": "## What You'll Learn\n\ncomputational analysis of pipe flow using the Darcy-Weisbach equation. We examine Reynolds number regimes, friction factor correlations including the Colebrook-White equation, the Moody diagram, minor losses, and pipe network analysis. 
Python-based computations provide quantitative analysis with dynamic visualization.\n\n\n**Key equations you'll work with:**\n- ρ\n- V\n- D\n\n\n## What You'll See\n\n**Visualization:** pressure drops and velocity profiles in pipes\n\n- See laminar vs turbulent flow patterns\n- Observe Reynolds number transition\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/mechanical-engineering/fluid_flow.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",168"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/mechanical-engineering/fluid_flow.tex",169"category": "templates",170"date": "2025-12-26",171"time": "09:00"172},173{174"source": "random_walks",175"content_type": "template",176"subreddit": "CoCalc",177"title": "Learn Random Walks: From Discrete Paths to Brownian Motion with Interactive Python",178"body": "## What You'll Learn\n\na comprehensive analysis of random walks, examining the transition from discrete stochastic processes to continuous Brownian motion. We investigate simple random walks in one, two, and three dimensions, establishing recurrence in dimensions d ≤ 2 and transience for d ≥ 3 through computational verification of P\\'olya's theorem. The gambler's ruin problem demonstrates exact calculation of absorption probabilities and expected ruin times. 
First passage time distributions reveal the arc-sine law and scaling properties that connect discrete walks to Wiener processes via Donsker's invariance principle.\n\n\n**Key equations you'll work with:**\n- d ≤ 2\n- d ≥ 3\n- Z\n\n\n## What You'll See\n\n**Visualization:** walk trajectories and diffusion patterns\n\n- See random motion accumulating over time\n- Observe scaling of displacement with steps\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/probability/random_walks.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",179"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/probability/random_walks.tex",180"category": "templates",181"date": "2025-12-26",182"time": "09:00"183},184{185"source": "binding_energy",186"content_type": "template",187"subreddit": "CoCalc",188"title": "Learn Nuclear Binding Energy: Semi-Empirical Mass Formula and Nuclear Stability with Interactive Python",189"body": "## What You'll Learn\n\nThis technical report presents comprehensive computational analysis of nuclear binding energies using the semi-empirical mass formula (SEMF). We implement the Bethe-Weizs\\\"acker model with volume, surface, Coulomb, asymmetry, and pairing terms to predict nuclear masses and stability. The analysis includes the valley of stability, Q-values for nuclear reactions, and separation energies. 
Applications span nuclear structure, stellar nucleosynthesis, and nuclear energy production.\n\n\n**Key equations you'll work with:**\n- B(A,Z)\n- A = Z + N\n- Z\n\n\n## What You'll See\n\n**Visualization:** binding energy per nucleon curve\n\n- See nuclear stability across elements\n- Observe iron peak and nuclear forces\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/nuclear-physics/binding_energy.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",190"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/nuclear-physics/binding_energy.tex",191"category": "templates",192"date": "2025-12-27",193"time": "09:00"194},195{196"source": "information_theory",197"content_type": "template",198"subreddit": "CoCalc",199"title": "Learn Information Theory: Entropy, Coding, and Channel Capacity with Interactive Python",200"body": "## What You'll Learn\n\nA hands-on exploration of Information Theory: Entropy, Coding, and Channel Capacity.\n\n\n**Key equations you'll work with:**\n- X\n- p(x)\n- p\n\n\n## What You'll See\n\n**Visualization:** entropy distributions and channel capacity\n\n- See information content of messages\n- Observe Shannon limits\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/other/information_theory.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",201"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/other/information_theory.tex",202"category": "templates",203"date": "2025-12-27",204"time": 
"09:00"205},206{207"source": "aqueous_geochemistry",208"content_type": "template",209"subreddit": "CoCalc",210"title": "Learn Aqueous Geochemistry Speciation with Interactive Python",211"body": "## What You'll Learn\n\nThis template provides a comprehensive computational framework for aqueous geochemistry, covering chemical equilibria, speciation diagrams, activity corrections, and mineral saturation. We implement the Henderson-Hasselbalch equation for acid-base systems, Debye-H\\\"uckel activity coefficients for ionic solutions, carbonate speciation across pH gradients, and saturation indices for mineral dissolution. The template also includes pe-pH (Pourbaix) diagrams for redox speciation, enabling prediction of thermodynamically stable species in natural water systems.\n\n\n**Key equations you'll work with:**\n- K_a1 = 10^-6.35\n- K_a2 = 10^-10.33\n- ₂\n\n\n## What You'll See\n\n**Visualization:** speciation diagrams and pH-Eh stability fields\n\n- See mineral stability across water chemistry\n- Observe equilibrium reactions in solution\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/geochemistry/aqueous_geochemistry.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",212"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/geochemistry/aqueous_geochemistry.tex",213"category": "templates",214"date": "2025-12-28",215"time": "09:00"216},217{218"source": "seismic_waves",219"content_type": "template",220"subreddit": "CoCalc",221"title": "Learn Seismic Wave Propagation: Earth Structure and Travel Time Analysis with Interactive Python",222"body": "## What You'll Learn\n\nThis technical report presents a comprehensive analysis of seismic wave propagation through Earth's interior. 
We examine P-wave and S-wave velocities in different Earth layers, compute travel times using ray theory, and analyze seismograms to determine Earth structure. The analysis includes velocity-depth profiles based on the PREM model, Snell's law for ray tracing, and the interpretation of seismic shadow zones that reveal Earth's liquid outer core.\n\n\n**Key equations you'll work with:**\n- K\n- μ\n- ρ\n\n\n## What You'll See\n\n**Visualization:** seismograms and ray path diagrams\n\n- See P and S waves propagating through Earth\n- Observe velocity structure from travel times\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/geophysics/seismic_waves.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",223"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/geophysics/seismic_waves.tex",224"category": "templates",225"date": "2025-12-28",226"time": "09:00"227},228{229"source": "diffusion",230"content_type": "template",231"subreddit": "CoCalc",232"title": "Learn Diffusion in Materials Science: Computational Analysis with Interactive Python",233"body": "## What You'll Learn\n\ncomputational analysis of diffusion phenomena in materials science. We examine Fick's first and second laws, analytical solutions including error function profiles, numerical simulation of concentration evolution, and the Kirkendall effect in binary diffusion couples. 
Python-based computations provide quantitative analysis with dynamic visualization.\n\n\n**Key equations you'll work with:**\n- J\n- D\n- C\n\n\n## What You'll See\n\n**Visualization:** concentration profiles evolving over time\n\n- See atoms spreading through materials\n- Observe temperature effects on diffusivity\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/materials-science/diffusion.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",234"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/materials-science/diffusion.tex",235"category": "templates",236"date": "2025-12-29",237"time": "09:00"238},239{240"source": "pn_junctions",241"content_type": "template",242"subreddit": "CoCalc",243"title": "Learn PN Junction Physics: Electrostatic Analysis and Current-Voltage Characteristics with Interactive Python",244"body": "## What You'll Learn\n\na comprehensive computational analysis of PN junction physics, covering the formation of the depletion region, built-in potential, electrostatic field profiles, current-voltage characteristics governed by the Shockley diode equation, junction capacitance (both depletion and diffusion), and breakdown mechanisms. 
We analyze silicon PN junctions with varying doping concentrations, compute key parameters including built-in voltage (V_bi = 0.7--0.9 V), saturation current (I_s ≈ 10^-15 A), and breakdown voltage (V_BR ≈ 10--100 V), and examine the transition from thermal equilibrium to forward/reverse bias conditions.\n\n\n**Key equations you'll work with:**\n- V_bi = 0.7--0.9 V\n- I_s ≈ 10^-15 A\n- V_BR ≈ 10--100 V\n\n\n## What You'll See\n\n**Visualization:** band diagrams and depletion regions\n\n- See carrier distributions at junctions\n- Observe forward and reverse bias behavior\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/semiconductor/pn_junctions.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",245"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/semiconductor/pn_junctions.tex",246"category": "templates",247"date": "2025-12-29",248"time": "09:00"249},250{251"source": "attention",252"content_type": "template",253"subreddit": "CoCalc",254"title": "Learn Computational Models of Visual Attention: From Saliency Maps to Biased Competition with Interactive Python",255"body": "## What You'll Learn\n\nA computational investigation of visual attention mechanisms using models inspired by cognitive neuroscience research. We implement the feature integration theory (FIT) framework to simulate visual search tasks, construct bottom-up saliency maps based on multi-feature integration, model the attentional spotlight using Gaussian spatial weighting functions, and analyze the attentional blink phenomenon in rapid serial visual presentation (RSVP) paradigms. 
The computational models demonstrate how attention operates through both spatial selection (spotlight and zoom-lens mechanisms) and feature-based enhancement (biased competition), producing reaction time patterns and accuracy metrics consistent with behavioral data from human observers.\n\n\n**Key equations you'll work with:**\n- RT = RT₀ + k · n\n- n\n- k\n\n\n## What You'll See\n\n**Visualization:** reaction time distributions and attention maps\n\n- See how attention affects processing speed\n- Observe selective attention bottlenecks\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/cognitive-science/attention.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",256"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/cognitive-science/attention.tex",257"category": "templates",258"date": "2025-12-30",259"time": "09:00"260},261{262"source": "rsa_encryption",263"content_type": "template",264"subreddit": "CoCalc",265"title": "Learn RSA Encryption: Implementation and Security Analysis with Interactive Python",266"body": "## What You'll Learn\n\na comprehensive computational analysis of the RSA (Rivest-Shamir-Adleman) public-key cryptosystem, examining the mathematical foundations of modular exponentiation, key generation procedures, encryption and decryption operations, and security considerations. We implement RSA encryption with various key sizes (512-bit to 2048-bit), analyze the computational complexity of modular exponentiation algorithms, demonstrate the relationship between prime factorization difficulty and key security, and evaluate timing characteristics that could lead to side-channel vulnerabilities. 
The analysis includes visualization of the key generation process, cipher space distribution, and performance metrics across different implementation parameters.\n\n\n**Key equations you'll work with:**\n- (n, e)\n- n = pq\n- e\n\n\n## What You'll See\n\n**Visualization:** key generation and encryption/decryption flow\n\n- See modular exponentiation in action\n- Observe factoring difficulty for security\n\n## Make It Yours\n\nChange key sizes, implement different algorithms, or test your security scenarios.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/cryptography/rsa_encryption.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",267"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/cryptography/rsa_encryption.tex",268"category": "templates",269"date": "2025-12-30",270"time": "09:00"271},272{273"source": "decision_making",274"content_type": "template",275"subreddit": "CoCalc",276"title": "Learn Computational Models of Human Decision Making: From Rational Choice to Bounded Rationality with Interactive Python",277"body": "## What You'll Learn\n\na comprehensive computational analysis of human decision-making processes, spanning classical normative theories to modern descriptive models. We examine expected utility theory as the rational benchmark, prospect theory's account of systematic deviations from rationality, drift-diffusion models for response time distributions, Bayesian frameworks for decision making under uncertainty, and reinforcement learning models of value-based choice. 
Computational simulations reveal the parametric signatures that distinguish these frameworks and their ability to capture key phenomena including loss aversion, probability weighting, speed-accuracy tradeoffs, and learning dynamics.\n\n\n**Key equations you'll work with:**\n- A\n- O\n- p(o|a)\n\n\n## What You'll See\n\n**Visualization:** choice probability curves and utility functions\n\n- See decision boundaries and risk preferences\n- Observe deviations from rational choice\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/cognitive-science/decision_making.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",278"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/cognitive-science/decision_making.tex",279"category": "templates",280"date": "2025-12-31",281"time": "09:00"282},283{284"source": "exoplanet_transit",285"content_type": "template",286"subreddit": "CoCalc",287"title": "Learn Exoplanet Transit Photometry: Light Curves and Planetary Parameters A Comprehensive Analysis of Transit Detection Methods with Interactive Python",288"body": "## What You'll Learn\n\nThis comprehensive analysis presents the theory and practice of exoplanet detection via transit photometry. We develop analytic models for transit light curves including the effects of limb darkening, derive expressions for transit depth, duration, and impact parameter, and demonstrate parameter extraction from simulated observations. The analysis covers the Mandel-Agol model for precise transit modeling, explores different limb darkening laws, and examines secondary eclipses and phase curves. 
We simulate a hot Jupiter transit and extract planetary parameters including radius, orbital period, and inclination.\n\n\n**Key equations you'll work with:**\n- R_*\n- R_p\n- a\n\n\n## What You'll See\n\n**Visualization:** transit light curves showing planetary dips\n\n- See the characteristic dimming as planets cross stars\n- Observe how transit depth reveals planet size\n\n## Make It Yours\n\nChange stellar parameters, simulate different objects, or model your observation.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/astronomy/exoplanet_transit.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",289"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/template",290"category": "templates",291"date": "2025-12-31",292"time": "09:00"293},294{295"source": "agent_based",296"content_type": "template",297"subreddit": "CoCalc",298"title": "Learn Agent-Based Modeling: Emergent Behavior from Simple Rules with Interactive Python",299"body": "## What You'll Learn\n\nA hands-on exploration of Agent-Based Modeling: Emergent Behavior from Simple Rules.\n\n\n**Key equations you'll work with:**\n- τ\n\n\n## What You'll See\n\n**Visualization:** agent positions and emergent patterns\n\n- See collective behavior from simple rules\n- Observe self-organization phenomena\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/simulations/agent_based.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",300"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/simulations/agent_based.tex",301"category": "templates",302"date": "2026-01-01",303"time": "09:00"304},305{306"source": "wave_dynamics",307"content_type": "template",308"subreddit": "CoCalc",309"title": "Learn Ocean Wave Dynamics: Dispersion, Spectra, and Coastal Processes with Interactive Python",310"body": "## What You'll Learn\n\nA hands-on exploration of Ocean Wave Dynamics: Dispersion, Spectra, and Coastal Processes.\n\n\n**Key equations 
you'll work with:**\n- kh ≫ 1\n- ω² = gk\n- c = √(g/k)\n\n\n## What You'll See\n\n**Visualization:** wave spectra and dispersion relationships\n\n- See surface wave evolution and breaking\n- Observe wave energy distribution\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/oceanography/wave_dynamics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",311"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/oceanography/wave_dynamics.tex",312"category": "templates",313"date": "2026-01-01",314"time": "09:00"315},316{317"source": "standard_model",318"content_type": "template",319"subreddit": "CoCalc",320"title": "Learn Standard Model Physics: Coupling Evolution and Grand Unification with Interactive Python",321"body": "## What You'll Learn\n\nThis technical report presents comprehensive computational analysis of the Standard Model gauge couplings and their renormalization group evolution. We implement one-loop and two-loop running of the electromagnetic, weak, and strong coupling constants, analyze gauge unification scenarios in the MSSM, and compute threshold corrections. 
The analysis addresses hierarchy problems, proton decay constraints, and predictions for beyond-Standard-Model physics.\n\n\n**Key equations you'll work with:**\n- SU(3)_C × SU(2)_L × U(1)_Y\n- g₁\n- α₁ = g₁²/(4π)\n\n\n## What You'll See\n\n**Visualization:** particle mass hierarchies and coupling constants\n\n- See the zoo of fundamental particles\n- Observe force unification trends\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/particle-physics/standard_model.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",322"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/particle-physics/standard_model.tex",323"category": "templates",324"date": "2026-01-02",325"time": "09:00"326},327{328"source": "radiometric_dating",329"content_type": "template",330"subreddit": "CoCalc",331"title": "Learn Radiometric Dating: Isotope Geochronology and Age Determination with Interactive Python",332"body": "## What You'll Learn\n\na comprehensive analysis of radiometric dating methods. We examine radioactive decay kinetics, implement isochron dating for Rb-Sr and U-Pb systems, analyze concordia-discordia relationships, calculate closure temperatures, and demonstrate carbon-14 dating for recent samples. 
All computations use PythonTeX for reproducibility.\n\n\n**Key equations you'll work with:**\n- N₀\n- λ\n- t\n\n\n## What You'll See\n\n**Visualization:** decay curves and isochron plots\n\n- See parent-daughter isotope ratios evolve\n- Observe age determination from radioactive decay\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/earth-science/radiometric_dating.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",333"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/earth-science/radiometric_dating.tex",334"category": "templates",335"date": "2026-01-02",336"time": "09:00"337},338{339"source": "crystal_structure",340"content_type": "template",341"subreddit": "CoCalc",342"title": "Learn Crystal Structure Analysis: Unit Cells, Miller Indices, and X-Ray Diffraction Patterns with Interactive Python",343"body": "## What You'll Learn\n\nThis technical report presents a comprehensive analysis of crystal structures in materials science. We examine unit cell geometry, Miller index notation, interplanar spacing calculations, and X-ray diffraction pattern simulation. 
Computational analysis using Python demonstrates structure factor calculations and powder diffraction profiles for common crystal systems.\n\n\n**Key equations you'll work with:**\n- (hkl)\n- d_hkl\n- 1/d²\n\n\n## What You'll See\n\n**Visualization:** unit cells and diffraction patterns\n\n- See atomic arrangements in crystal lattices\n- Observe symmetry and Bragg reflections\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/materials-science/crystal_structure.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",344"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/materials-science/crystal_structu",345"category": "templates",346"date": "2026-01-03",347"time": "09:00"348},349{350"source": "rainfall_runoff",351"content_type": "template",352"subreddit": "CoCalc",353"title": "Learn Rainfall-Runoff Modeling: Unit Hydrograph Theory and SCS Curve Number Method with Interactive Python",354"body": "## What You'll Learn\n\nThis engineering report presents a comprehensive analysis of rainfall-runoff transformation using the unit hydrograph approach and the Soil Conservation Service (SCS) Curve Number method. We develop synthetic unit hydrographs using the Nash cascade model, compute excess rainfall from design storms using the SCS-CN method, and apply discrete convolution to generate direct runoff hydrographs. 
The analysis demonstrates watershed response characteristics including time of concentration, peak discharge estimation, and the effects of antecedent moisture conditions and land use on runoff generation.\n\n\n**Key equations you'll work with:**\n- D\n- u(t)\n- i_e(t)\n\n\n## What You'll See\n\n**Visualization:** hydrographs and unit response functions\n\n- See rainfall transform to streamflow\n- Observe catchment response time scales\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/hydrology/rainfall_runoff.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",355"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/hydrology/rainfall_runoff.tex",356"category": "templates",357"date": "2026-01-03",358"time": "09:00"359},360{361"source": "phylogenetics",362"content_type": "template",363"subreddit": "CoCalc",364"title": "Learn Phylogenetic Analysis: Tree Reconstruction and Evolutionary Inference Distance Methods, Maximum Likelihood, and Bootstrap Support with Interactive Python",365"body": "## What You'll Learn\n\nThis comprehensive analysis presents methods for reconstructing phylogenetic trees from molecular sequence data. We cover distance-based methods (UPGMA, Neighbor-Joining), character-based approaches, and statistical support via bootstrapping. The analysis includes distance corrections for multiple substitutions (Jukes-Cantor, Kimura), tree search algorithms, and visualization of evolutionary relationships. 
We demonstrate phylogenetic inference using primate mitochondrial sequences and evaluate tree topology confidence through bootstrap resampling.\n\n\n**Key equations you'll work with:**\n- p\n- s\n- v\n\n\n## What You'll See\n\n**Visualization:** phylogenetic trees with branch lengths and bootstrap values\n\n- See evolutionary relationships between species\n- Observe divergence times and common ancestors\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/bioinformatics/phylogenetics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",366"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/",367"category": "templates",368"date": "2026-01-04",369"time": "09:00"370},371{372"source": "word_embeddings",373"content_type": "template",374"subreddit": "CoCalc",375"title": "Learn Word Embeddings: Skip-gram Model and Vector Semantics with Interactive Python",376"body": "## What You'll Learn\n\nA hands-on exploration of Word Embeddings: Skip-gram Model and Vector Semantics.\n\n\n**Key equations you'll work with:**\n- c\n\n\n## What You'll See\n\n**Visualization:** t-SNE projections of word vectors\n\n- See semantic relationships in vector space\n- Observe word analogies and clusters\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/nlp/word_embeddings.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",377"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/nlp/word_embeddings.tex",378"category": "templates",379"date": "2026-01-04",380"time": "09:00"381},382{383"source": "queueing_theory",384"content_type": "template",385"subreddit": "CoCalc",386"title": "Learn Queueing Theory: Performance Analysis of Service Systems with Interactive Python",387"body": "## What You'll Learn\n\nA comprehensive analysis of queueing systems using analytical and computational methods. 
We examine M/M/1, M/M/c, and M/G/1 queue models, deriving performance metrics including expected queue length, waiting time, and system utilization. The analysis demonstrates Little's Law, explores the impact of service rate variability, and applies Erlang-C formulas for multi-server systems. Computational simulations validate theoretical predictions and illustrate queueing behavior under various traffic intensities and service disciplines. Applications to call center staffing, hospital emergency departments, and network packet routing are presented.\n\n\n**Key equations you'll work with:**\n- λ\n- μ\n- N\n\n\n## What You'll See\n\n**Visualization:** queue length distributions and wait times\n\n- See system utilization effects on delays\n- Observe traffic intensity thresholds\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/operations-research/queueing_theory.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",388"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/operations-research/queueing_theory.tex",389"category": "templates",390"date": "2026-01-05",391"time": "09:00"392},393{394"source": "fft_analysis",395"content_type": "template",396"subreddit": "CoCalc",397"title": "Learn FFT Spectral Analysis: Audio Signal Processing From Time Domain to Frequency Domain and Back with Interactive Python",398"body": "## What You'll Learn\n\nThis lab report demonstrates the application of the Fast Fourier Transform (FFT) for spectral analysis of audio signals. We synthesize a complex waveform containing multiple harmonic components, analyze its frequency content, design and apply digital filters, and investigate windowing effects on spectral leakage. 
The analysis includes spectrograms for time-frequency visualization and demonstrates practical signal processing techniques.\n\n\n**Key equations you'll work with:**\n- x[n]\n- N\n- O(N²)\n\n\n## What You'll See\n\n**Visualization:** time and frequency domain representations\n\n- See signal decomposed into frequencies\n- Observe spectral leakage and windowing\n\n## Make It Yours\n\nAdjust filter parameters, apply to your signals, or chain multiple transformations.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/signal-processing/fft_analysis.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",399"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/signal-processing/fft_analy",400"category": "templates",401"date": "2026-01-05",402"time": "09:00"403},404{405"source": "procedural_generation",406"content_type": "template",407"subreddit": "CoCalc",408"title": "Learn Procedural Generation Noise Functions and Algorithmic Content Creation with Interactive Python",409"body": "## What You'll Learn\n\nProcedural content generation uses algorithmic techniques to create game environments, terrains, vegetation, and structures. This report implements five fundamental algorithms: Perlin noise for natural-looking terrain, the diamond-square algorithm for heightmap generation, L-systems for botanical structures, cellular automata for cave systems, and Poisson disk sampling for spatially distributed object placement. 
Each method produces pseudo-random yet coherent patterns essential for creating believable game worlds at scale.\n\n\n**Key equations you'll work with:**\n- O(n² · k)\n- O(n²)\n- O(L)\n\n\n## What You'll See\n\n**Visualization:** noise functions and terrain generation\n\n- See algorithmically generated landscapes\n- Observe scale effects on procedural detail\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/game-development/procedural_generation.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",410"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/game-development/procedural_generation.tex",411"category": "templates",412"date": "2026-01-06",413"time": "09:00"414},415{416"source": "interference_patterns",417"content_type": "template",418"subreddit": "CoCalc",419"title": "Learn Optics: Interference Patterns and Analysis with Interactive Python",420"body": "## What You'll Learn\n\nA hands-on exploration of Optics: Interference Patterns and Analysis.\n\n\n**Key equations you'll work with:**\n- E₁\n- E₂\n- δ\n\n\n## What You'll See\n\n**Visualization:** fringe patterns and visibility curves\n\n- See constructive and destructive interference\n- Observe coherence length effects\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/optics/interference_patterns.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",421"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/optics/interference_patterns.tex",422"category": 
"templates",423"date": "2026-01-06",424"time": "09:00"425},426{427"source": "population_genetics",428"content_type": "template",429"subreddit": "CoCalc",430"title": "Learn Marine Population Genetics: Drift, Gene Flow, and F-Statistics with Interactive Python",431"body": "## What You'll Learn\n\na comprehensive computational analysis of marine population genetics, examining fundamental evolutionary forces that shape genetic diversity in marine organisms. We simulate Hardy-Weinberg equilibrium conditions, quantify genetic drift using Wright-Fisher models, estimate effective population size (Ne) from temporal allele frequency changes, model gene flow under island and stepping-stone migration patterns, and calculate F-statistics (FST, FIS, FIT) to assess population structure. The analysis demonstrates how restricted dispersal, small population sizes, and variable larval connectivity create hierarchical genetic structure in marine metapopulations.\n\n\n**Key equations you'll work with:**\n- A\n- a\n- p\n\n\n## What You'll See\n\n**Visualization:** allele frequency distributions and Fst values\n\n- See genetic differentiation between populations\n- Observe gene flow and isolation effects\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/marine-biology/population_genetics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",432"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/marine-biology/population_genetics.tex",433"category": "templates",434"date": "2026-01-07",435"time": "09:00"436},437{438"source": "signal_transduction",439"content_type": "template",440"subreddit": "CoCalc",441"title": "Learn Signal Transduction Analysis MAPK Cascade Dynamics and Ultrasensitivity with Interactive 
Python",442"body": "## What You'll Learn\n\nSignal transduction pathways amplify and process extracellular signals through cascades of protein phosphorylation. This report analyzes the MAPK (Mitogen-Activated Protein Kinase) cascade, demonstrating ultrasensitivity through the Goldbeter-Koshland function, Michaelis-Menten enzyme kinetics, dose-response behavior, and sensitivity to kinetic parameters. The three-tier Raf→MEK→ERK cascade exhibits switch-like behavior with a Hill coefficient of n_H = 5.2, enabling binary decision-making in cellular responses to growth factors.\n\n\n**Key equations you'll work with:**\n- n_H = 5.2\n- v = V_max[S]/(K_m + [S])\n- K_m\n\n\n## What You'll See\n\n**Visualization:** signaling cascade dynamics and dose-response\n\n- See signals amplifying through pathways\n- Observe ultrasensitivity and bistability\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/systems-biology/signal_transduction.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",443"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/systems-biology/signal_transduction.tex",444"category": "templates",445"date": "2026-01-07",446"time": "09:00"447},448{449"source": "navier_stokes",450"content_type": "template",451"subreddit": "CoCalc",452"title": "Learn Navier-Stokes Equations: Viscous Flow Analysis and Boundary Layer Theory with Interactive Python",453"body": "## What You'll Learn\n\nThis technical report presents analytical and computational solutions to the Navier-Stokes equations for canonical viscous flow problems. We analyze Couette flow, Poiseuille flow, and boundary layer development using Python-based numerical methods. 
Results include velocity profiles, shear stress distributions, and Reynolds number effects on flow characteristics.\n\n\n**Key equations you'll work with:**\n- ρ(∂u/∂t + u·∇u) = -∇p + μ∇²u\n- Re = ρUL/μ\n- τ = μ ∂u/∂y\n\n\n## What You'll See\n\n**Visualization:** velocity fields and streamlines\n\n- See flow patterns around obstacles\n- Observe Reynolds number effects on turbulence\n\n## Make It Yours\n\nModify Reynolds numbers, change geometries, or simulate your specific flow.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/fluid-dynamics/navier_stokes.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",454"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/fluid-dynamics/navier_stokes.tex",455"category": "templates",456"date": "2026-01-08",457"time": "09:00"458},459{460"source": "hypothesis_testing",461"content_type": "template",462"subreddit": "CoCalc",463"title": "Learn Statistical Hypothesis Testing: Theory and Applications with Interactive Python",464"body": "## What You'll Learn\n\nA hands-on exploration of Statistical Hypothesis Testing: Theory and Applications.\n\n\n**Key equations you'll work with:**\n- α\n- H₀\n- μ = μ₀\n\n\n## What You'll See\n\n**Visualization:** null distributions and rejection regions\n\n- See test statistic in relation to critical values\n- Observe Type I and II error rates\n\n## Make It Yours\n\nAdjust sample sizes, add your own datasets, or customize the analysis pipeline.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/statistics/hypothesis_testing.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",465"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/statistics/hypothesis_testing.tex",466"category": "templates",467"date": 
"2026-01-08","time": "09:00"},{"source": "gravitational_waves","content_type": "template","subreddit": "CoCalc","title": "Learn Gravitational Wave Physics: Strain, Detection, and Binary Systems with Interactive Python","body": "## What You'll Learn\n\nAnalysis of gravitational wave generation, propagation, and detection including chirp mass calculations and LIGO sensitivity.\n\n\n**Key equations you'll work with:**\n- m₁\n- m₂\n- ℳ = (m₁ m₂)^(3/5)/(m₁ + m₂)^(1/5)\n\n\n## What You'll See\n\n**Visualization:** strain waveforms from binary mergers\n\n- See the characteristic chirp signal from inspiraling objects\n- Observe how frequency increases before merger\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/astrophysics/gravitational_waves.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/astrophysics/gravitational_waves.tex","category": "templates","date": "2026-01-09","time": "09:00"},{"source": "color_perception","content_type": "template","subreddit": "CoCalc","title": "Learn Color Perception: CIE Chromaticity with Interactive Python","body": "## What You'll Learn\n\nA comprehensive computational analysis of human color perception, including trichromatic cone fundamentals, CIE chromaticity systems, opponent color channels, and color vision deficiencies. We model the spectral sensitivities of L, M, and S cones, compute chromaticity coordinates in CIE 1931 color space, analyze color discrimination thresholds via MacAdam ellipses, and simulate protanopic, deuteranopic, and tritanopic color blindness. 
The models integrate photoreceptor physiology with perceptual color spaces to provide quantitative frameworks for understanding human color vision.\n\n\n**Key equations you'll work with:**\n- i ∈ {L, M, S}\n- λ_peak,i\n- σᵢ\n\n\n## What You'll See\n\n**Visualization:** color matching functions and chromaticity diagrams\n\n- See human color vision represented\n- Observe metameric matches and gamut limits\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/psychophysics/color_perception.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/psychophysics/color_perception.tex","category": "templates","date": "2026-01-09","time": "09:00"},{"source": "quantum_gates","content_type": "template","subreddit": "CoCalc","title": "Learn Quantum Gate Operations and Qubit Visualization: Bloch Sphere Dynamics and Gate Sequences with Interactive Python","body": "## What You'll Learn\n\nThis document explores single-qubit quantum gates and their geometric representation on the Bloch sphere. We implement matrix representations of common gates (Pauli, Hadamard, phase gates), visualize state evolution, and analyze gate sequences for quantum algorithms. 
The analysis includes gate decomposition, fidelity calculations, and comparisons between different gate implementations.\n\n\n**Key equations you'll work with:**\n- |α|² + |β|² = 1\n- θ, φ\n- θ\n\n\n## What You'll See\n\n**Visualization:** Bloch sphere rotations and gate sequences\n\n- See quantum state transformations\n- Observe unitary evolution of qubits\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/quantum-computing/quantum_gates.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/quantum-computing/quantum_gates.tex","category": "templates","date": "2026-01-10","time": "09:00"},{"source": "bayesian","content_type": "template","subreddit": "CoCalc","title": "Learn Bayesian Inference: From Prior to Posterior Parameter Estimation with Markov Chain Monte Carlo with Interactive Python","body": "## What You'll Learn\n\nThis tutorial provides a comprehensive introduction to Bayesian inference for parameter estimation. We implement conjugate prior analysis for binomial data and develop a Metropolis-Hastings MCMC sampler for Bayesian linear regression. The analysis includes prior sensitivity analysis, posterior visualization, convergence diagnostics, and credible interval computation. 
Results demonstrate the philosophical and practical advantages of the Bayesian approach over frequentist methods.\n\n\n**Key equations you'll work with:**\n- θ\n- D\n- k\n\n\n## What You'll See\n\n**Visualization:** prior/posterior distributions and credible intervals\n\n- See beliefs updated with data\n- Observe prior influence on inference\n\n## Make It Yours\n\nAdjust sample sizes, add your own datasets, or customize the analysis pipeline.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/statistics/bayesian.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/statistics/bayesian.tex","category": "templates","date": "2026-01-10","time": "09:00"},{"source": "pid_control","content_type": "template","subreddit": "CoCalc","title": "Learn Robotics: PID Motor Control and Tuning with Interactive Python","body": "## What You'll Learn\n\nThis document presents a comprehensive analysis of Proportional-Integral-Derivative (PID) control for robotic motor systems. We implement continuous and discrete PID controllers, analyze the effects of each gain component on system response, explore automatic tuning methods including Ziegler-Nichols and Cohen-Coon, and demonstrate applications to DC motor position and velocity control. 
The analysis includes stability margins, frequency response, and performance metrics for practical robotics applications.\n\n\n**Key equations you'll work with:**\n- e(t) = r(t) - y(t)\n- r(t)\n- y(t)\n\n\n## What You'll See\n\n**Visualization:** step response and error convergence\n\n- See PID controller eliminating error\n- Observe gain tuning effects\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/robotics/pid_control.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/robotics/pid_control.tex","category": "templates","date": "2026-01-11","time": "09:00"},{"source": "thin_film","content_type": "template","subreddit": "CoCalc","title": "Learn Optics: Thin Film Interference and Coatings with Interactive Python","body": "## What You'll Learn\n\nA hands-on exploration of Optics: Thin Film Interference and Coatings.\n\n\n**Key equations you'll work with:**\n- δ = 2π n₁ d cos θ₁/λ\n- n\n- d\n\n\n## What You'll See\n\n**Visualization:** reflectance spectra and layer interference\n\n- See thin film color effects\n- Observe quarter-wave matching\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/optics/thin_film.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/optics/thin_film.tex","category": "templates","date": "2026-01-11","time": "09:00"},{"source": 
"signal_detection","content_type": "template","subreddit": "CoCalc","title": "Learn Signal Detection: d-prime and ROC with Interactive Python","body": "## What You'll Learn\n\nSignal Detection Theory (SDT) provides a mathematical framework for quantifying perceptual sensitivity and decision criteria in psychophysical tasks. This report implements computational methods to calculate detectability (d'), response bias (β and c), and receiver operating characteristic (ROC) curves. Using Gaussian signal and noise distributions, we analyze hit rates, false alarm rates, and criterion placement under varying task conditions. Applications include sensory perception, medical diagnosis, and quality control decision-making.\n\n\n**Key equations you'll work with:**\n- d'\n- β\n- c\n\n\n## What You'll See\n\n**Visualization:** ROC curves and d-prime measures\n\n- See sensitivity vs response bias tradeoff\n- Observe optimal decision criteria\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/psychophysics/signal_detection.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/psychophysics/signal_detection.tex","category": "templates","date": "2026-01-12","time": "09:00"},{"source": "transient_stability","content_type": "template","subreddit": "CoCalc","title": "Learn Transient Stability Analysis: Swing Equation and Equal Area Criterion with Interactive Python","body": "## What You'll Learn\n\nA comprehensive computational analysis of power system transient stability using the swing equation framework. 
We examine rotor angle dynamics under large disturbances such as three-phase faults, apply the equal area criterion to assess stability margins, and determine critical clearing times for single-machine infinite-bus (SMIB) systems. Numerical simulations demonstrate the impact of inertia constants, fault locations, and power system stabilizer (PSS) damping on post-fault system recovery. The analysis provides quantitative metrics for stability assessment including maximum rotor angle deviation, settling time, and frequency of oscillation.\n\n\n**Key equations you'll work with:**\n- M = H/(π f₀)\n- H\n- δ\n\n\n## What You'll See\n\n**Visualization:** rotor angle swings and power oscillations\n\n- See generator response to disturbances\n- Observe critical clearing time\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/power-systems/transient_stability.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/power-systems/transient_stability.tex","category": "templates","date": "2026-01-12","time": "09:00"},{"source": "biomechanics","content_type": "template","subreddit": "CoCalc","title": "Learn Biomechanics: Tissue Mechanics and Viscoelasticity with Interactive Python","body": "## What You'll Learn\n\nAnalysis of biological tissue mechanics including stress-strain relationships, viscoelasticity, and bone mechanics.\n\n\n**Key equations you'll work with:**\n- E_∞\n- E'\n- E''\n\n\n## What You'll See\n\n**Visualization:** stress-strain curves for biological tissues\n\n- See nonlinear elastic behavior of muscles and tendons\n- Observe viscoelastic properties of living tissue\n\n## Make It Yours\n\nModify parameters, 
extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/biomedical/biomechanics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/biomedical/biomechanics.tex","category": "templates","date": "2026-01-13","time": "09:00"},{"source": "spatial_epidemiology","content_type": "template","subreddit": "CoCalc","title": "Learn Spatial Epidemiology: Reaction-Diffusion Models and Traveling Wave Solutions with Interactive Python","body": "## What You'll Learn\n\nA comprehensive computational analysis of spatial epidemic dynamics using reaction-diffusion partial differential equations. We examine the Fisher-KPP equation and spatial SIR models, analyzing traveling wave solutions, minimum wave speeds, and spatial clustering patterns. Numerical simulations demonstrate epidemic front propagation with wave speed c = 2√(Dr), where D is the diffusion coefficient and r is the intrinsic growth rate. Spatial statistics including Moran's I and point pattern analysis reveal clustering patterns in disease incidence. 
Results show how spatial heterogeneity and dispersal kernels influence epidemic spread across geographic regions.\n\n\n**Key equations you'll work with:**\n- c = 2√(Dr)\n- D\n- r\n\n\n## What You'll See\n\n**Visualization:** disease prevalence maps and spatial clusters\n\n- See geographic patterns of infection\n- Observe spatial autocorrelation in disease\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/epidemiology/spatial_epidemiology.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/epidemiology/spatial_epidemiology.tex","category": "templates","date": "2026-01-13","time": "09:00"},{"source": "load_flow","content_type": "template","subreddit": "CoCalc","title": "Learn Load Flow Analysis Using Newton-Raphson Method: IEEE 5-Bus Test System with Interactive Python","body": "## What You'll Learn\n\nA comprehensive load flow analysis of the IEEE 5-bus test system using the Newton-Raphson method. We implement the full power flow solution algorithm, construct the Jacobian matrix, and analyze voltage profiles, real and reactive power flows, and system losses. The Newton-Raphson method demonstrates quadratic convergence, achieving solution accuracy within 4-6 iterations for a realistic test network with slack, PV, and PQ buses. 
Results include bus voltage magnitudes ranging from 0.987 to 1.050 per-unit, total system losses of 3.89 MW, and verification of power balance at all nodes.\n\n\n**Key equations you'll work with:**\n- δ₁ = 0\n- |V₁|\n\n\n## What You'll See\n\n**Visualization:** bus voltages and power flow diagrams\n\n- See power distribution across network\n- Observe voltage regulation and losses\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/power-systems/load_flow.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/power-systems/load_flow.tex","category": "templates","date": "2026-01-14","time": "09:00"},{"source": "temperature_model","content_type": "template","subreddit": "CoCalc","title": "Learn Global Temperature Modeling: Energy Balance and Climate Sensitivity with Interactive Python","body": "## What You'll Learn\n\nThis study presents energy balance models for global mean surface temperature, examining radiative forcing from greenhouse gases and the response of the climate system. We analyze zero-dimensional and one-dimensional models, calculate climate sensitivity from different feedback mechanisms, and compare model projections with observations. 
The analysis quantifies transient and equilibrium climate response to CO₂ forcing.\n\n\n**Key equations you'll work with:**\n- CO₂\n- F\n- ΔT₂ₓ\n\n\n## What You'll See\n\n**Visualization:** global temperature projections under different scenarios\n\n- See temperature trajectories with uncertainty bands\n- Observe sensitivity to emissions pathways\n\n## Make It Yours\n\nAdjust forcing parameters, extend time scales, or model specific scenarios.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/climate-science/temperature_model.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/climate-science/temperature_model.tex","category": "templates","date": "2026-01-14","time": "09:00"},{"source": "molecular_dynamics","content_type": "template","subreddit": "CoCalc","title": "Learn Molecular Dynamics Simulation: From Lennard-Jones Potentials to Thermodynamic Properties with Interactive Python","body": "## What You'll Learn\n\nMolecular dynamics (MD) simulations provide atomistic insight into the behavior of matter by solving Newton's equations of motion for systems of interacting particles. This document develops a complete MD simulation framework, starting from the Lennard-Jones potential for pairwise interactions, implementing the velocity Verlet integration algorithm, and extracting thermodynamic properties including temperature, pressure, and radial distribution functions. 
We simulate a simple Lennard-Jones fluid, demonstrating phase behavior and energy conservation.\n\n\n**Key equations you'll work with:**\n- N\n- Fᵢ = -∇ᵢ U\n- Fᵢ = mᵢ r̈ᵢ\n\n\n## What You'll See\n\n**Visualization:** radial distribution functions and energy trajectories\n\n- See molecular structure emerge from simulations\n- Observe thermodynamic equilibration over time\n\n## Make It Yours\n\nModify reaction rates, add new species, or simulate different conditions.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/chemistry/molecular_dynamics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/chemistry/molecular_dynamics.tex","category": "templates","date": "2026-01-15","time": "09:00"},{"source": "structural_analysis","content_type": "template","subreddit": "CoCalc","title": "Learn Structural Analysis: Beam Deflection and Moment Distribution with Interactive Python","body": "## What You'll Learn\n\nA comprehensive computational analysis of fundamental structural elements including simply supported beams, statically determinate trusses, and influence lines for moving loads. Classical beam theory is applied to determine bending moments, shear forces, and elastic deflections under uniformly distributed loading. The method of joints is employed for truss analysis, computing axial forces in individual members. Stiffness matrix methods are introduced for systematic assembly of global structural equations. Influence line analysis quantifies structural response to variable load positions, essential for bridge design and moving load applications. 
All calculations employ direct integration of governing differential equations and are validated against classical solutions from Timoshenko and Hibbeler.\n\n\n**Key equations you'll work with:**\n- E\n- I\n- w\n\n\n## What You'll See\n\n**Visualization:** bending moment diagrams and deflection curves\n\n- See load distribution along structural members\n- Observe maximum stress locations\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/civil-engineering/structural_analysis.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/civil-engineering/structural_analysis.tex","category": "templates","date": "2026-01-15","time": "09:00"},{"source": "gaussian_beam","content_type": "template","subreddit": "CoCalc","title": "Learn Optics: Gaussian Beam Propagation with Interactive Python","body": "## What You'll Learn\n\nA hands-on exploration of Optics: Gaussian Beam Propagation.\n\n\n**Key equations you'll work with:**\n- TEM₀₀\n- M²\n- θ\n\n\n## What You'll See\n\n**Visualization:** beam profiles and waist propagation\n\n- See laser beam focusing and divergence\n- Observe Rayleigh range effects\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/optics/gaussian_beam.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": 
"https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/optics/gaussian_beam.tex","category": "templates","date": "2026-01-16","time": "09:00"},{"source": "seir_model","content_type": "template","subreddit": "CoCalc","title": "Learn SEIR Epidemic Model: Basic Reproduction Number and Intervention Strategies with Interactive Python","body": "## What You'll Learn\n\nA comprehensive analysis of the SEIR (Susceptible-Exposed-Infected-Recovered) compartmental model for infectious disease dynamics. We derive the basic reproduction number R₀ from first principles, analyze disease-free and endemic equilibria, compute vaccination thresholds for herd immunity, and fit the model to synthetic outbreak data. The analysis demonstrates that for R₀ = 3.0, the critical vaccination coverage is 67%, and targeted intervention reduces peak infection by 58% compared to uncontrolled spread.\n\n\n**Key equations you'll work with:**\n- R₀\n- R₀ = 3.0\n- S(t)\n\n\n## What You'll See\n\n**Visualization:** SEIR compartment curves with exposed class\n\n- See latency period effects on epidemic waves\n- Observe incubation time impact on R₀\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/epidemiology/seir_model.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/epidemiology/seir_model.tex","category": "templates","date": "2026-01-16","time": "09:00"},{"source": "soil_mechanics","content_type": "template","subreddit": "CoCalc","title": "Learn Soil Mechanics Analysis: Effective Stress, Bearing Capacity, and Consolidation with Interactive Python","body": "## What 
You'll Learn\n\nA comprehensive geotechnical analysis including effective stress calculations, Mohr-Coulomb failure criteria, bearing capacity design using Terzaghi's theory, one-dimensional consolidation settlement, and slope stability assessment. Computational methods are applied to practical foundation design scenarios with parametric studies examining the influence of soil properties on engineering behavior.\n\n\n**Key equations you'll work with:**\n- c\n- φ\n- σ'\n\n\n## What You'll See\n\n**Visualization:** Mohr circles and stress-strain curves\n\n- See shear failure criteria and consolidation\n- Observe soil bearing capacity limits\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/civil-engineering/soil_mechanics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/civil-engineering/soil_mechanics.tex","category": "templates","date": "2026-01-17","time": "09:00"},{"source": "game_theory","content_type": "template","subreddit": "CoCalc","title": "Learn Game Theory: Nash Equilibrium, Mixed Strategies, and Payoff Analysis with Interactive Python","body": "## What You'll Learn\n\nA computational analysis of game theory concepts. 
We implement Nash equilibrium finding for 2-player games, analyze mixed strategies in zero-sum and non-zero-sum games, visualize payoff matrices, and explore classic games including Prisoner's Dilemma, Battle of the Sexes, and Matching Pennies.\n\n\n**Key equations you'll work with:**\n- S₁ = {s₁^1, ..., s₁^m}\n- S₂ = {s₂^1, ..., s₂^n}\n- (p, q)\n\n\n## What You'll See\n\n**Visualization:** payoff matrices and Nash equilibrium visualization\n\n- See strategy interactions in game-theoretic space\n- Observe where players converge to equilibrium\n\n## Make It Yours\n\nAdjust utility functions, add new agents, or model different market conditions.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/economics/game_theory.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/economics/game_theory.tex","category": "templates","date": "2026-01-17","time": "09:00"},{"source": "edge_detection","content_type": "template","subreddit": "CoCalc","title": "Learn Edge Detection: From Gradient Operators to Multi-Scale Analysis with Interactive Python","body": "## What You'll Learn\n\nA comprehensive analysis of edge detection algorithms in digital image processing. We examine gradient-based methods (Sobel, Prewitt, Roberts), the Canny edge detector with its multi-stage pipeline, and Laplacian-based approaches including zero-crossing detection. Computational implementations demonstrate edge detection on synthetic test patterns and natural images, with quantitative performance evaluation using precision, recall, and F-measure metrics. 
We analyze the trade-offs between noise sensitivity, localization accuracy, and computational efficiency across different detector families.\n\n\n**Key equations you'll work with:**\n- I(x,y)\n- ∇I\n- |∇I| = √(I_x² + I_y²)\n\n\n## What You'll See\n\n**Visualization:** gradient magnitude images and edge maps\n\n- See boundaries detected in images\n- Observe filter kernel effects on edges\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/image-processing/edge_detection.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/image-processing/edge_detection.tex","category": "templates","date": "2026-01-18","time": "09:00"},{"source": "market_model","content_type": "template","subreddit": "CoCalc","title": "Learn Market Economics: Supply, Demand, Elasticity, and Welfare Analysis with Interactive Python","body": "## What You'll Learn\n\nA computational analysis of market models. 
We examine supply and demand curves, compute market equilibrium, analyze price elasticity, measure consumer and producer surplus, and evaluate the welfare effects of taxes and price controls.\n\n\n**Key equations you'll work with:**\n- P^* = (a - c)/(b + d)\n- Q^* = (ad + bc)/(b + d)\n- Q\n\n\n## What You'll See\n\n**Visualization:** supply-demand curves and price equilibrium\n\n- See market clearing price determination\n- Observe elasticity effects on price changes\n\n## Make It Yours\n\nAdjust utility functions, add new agents, or model different market conditions.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/economics/market_model.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/economics/market_model.tex","category": "templates","date": "2026-01-18","time": "09:00"},{"source": "harmonic_oscillator","content_type": "template","subreddit": "CoCalc","title": "Learn Quantum Harmonic Oscillator: Energy Quantization and Wavefunctions with Interactive Python","body": "## What You'll Learn\n\nA comprehensive computational analysis of the quantum harmonic oscillator, one of the most fundamental exactly-solvable systems in quantum mechanics. We derive the energy eigenvalues Eₙ = ℏω(n + 1/2) and construct wavefunctions using Hermite polynomials. The ladder operator formalism is developed, demonstrating the algebraic solution method. We visualize probability densities for the first ten energy eigenstates, analyze the correspondence principle in the classical limit, and explore coherent states as minimal uncertainty wavepackets. 
Computational analysis reveals the zero-point energy, tunneling into classically forbidden regions, and the emergence of classical behavior at high quantum numbers.\n\n\n**Key equations you'll work with:**\n- Eₙ = ℏω(n + 1/2)\n- m\n- V(x) = (1/2)mω²x²\n\n\n## What You'll See\n\n**Visualization:** wavefunctions and probability densities\n\n- See quantized energy levels emerge\n- Observe zero-point energy and tunneling\n\n## Make It Yours\n\nExplore different potentials, modify particle numbers, or simulate new scenarios.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/quantum-mechanics/harmonic_oscillator.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/quantum-mechanics/harmonic_oscillator.tex","category": "templates","date": "2026-01-19","time": "09:00"},{"source": "graph_algorithms","content_type": "template","subreddit": "CoCalc","title": "Learn Computer Science: Graph Algorithms and Network Analysis with Interactive Python","body": "## What You'll Learn\n\nThis document presents a comprehensive analysis of fundamental graph algorithms including shortest path algorithms (Dijkstra, Bellman-Ford, Floyd-Warshall), minimum spanning trees (Prim, Kruskal), graph traversal (BFS, DFS), and network flow algorithms. 
We implement these algorithms in Python and analyze their time complexity, correctness, and practical applications in network routing, social network analysis, and optimization problems.\n\n\n**Key equations you'll work with:**\n- G = (V, E)\n- V\n- E\n\n\n## What You'll See\n\n**Visualization:** network visualizations with shortest paths highlighted\n\n- See algorithm progress through graph structures\n- Observe computational complexity in action\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/computer-science/graph_algorithms.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/computer-science/graph_algorithms.tex","category": "templates","date": "2026-01-19","time": "09:00"},{"source": "monte_carlo","content_type": "template","subreddit": "CoCalc","title": "Learn Monte Carlo Methods: Sampling, Integration, and MCMC with Interactive Python","body": "## What You'll Learn\n\nA hands-on exploration of Monte Carlo Methods: Sampling, Integration, and MCMC.\n\n\n**Key equations you'll work with:**\n- 1/√N\n- q(x)\n- x'\n\n\n## What You'll See\n\n**Visualization:** sample distributions and convergence plots\n\n- See estimates improving with samples\n- Observe variance reduction techniques\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/simulations/monte_carlo.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": 
"https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/simulations/monte_carlo.tex",719"category": "templates",720"date": "2026-01-20",721"time": "09:00"722},723{724"source": "extreme_value",725"content_type": "template",726"subreddit": "CoCalc",727"title": "Learn Extreme Value Theory: Statistical Analysis of Rare Events with Interactive Python",728"body": "## What You'll Learn\n\nA comprehensive analysis of extreme value theory (EVT) and its applications to rare event prediction. We examine the Fisher-Tippett-Gnedenko theorem, which establishes the generalized extreme value (GEV) distribution as the limiting distribution of block maxima, and the Peaks Over Threshold (POT) method using the generalized Pareto distribution (GPD). Computational analysis demonstrates parameter estimation via maximum likelihood, return level calculation with confidence intervals, threshold selection using mean excess plots, and applications to flood frequency analysis and financial risk assessment.\n\n\n**Key equations you'll work with:**\n- X₁, X₂, …, Xₙ\n- F\n- aₙ > 0\n\n\n## What You'll See\n\n**Visualization:** extreme value distributions and return levels\n\n- See tail behavior of rare events\n- Observe GEV parameter estimation\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/probability/extreme_value.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",729"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/probability/extreme_value.tex",730"category": "templates",731"date": "2026-01-20",732"time": "09:00"733},734{735"source": "ocean_currents",736"content_type": "template",737"subreddit": "CoCalc",738"title": "Learn Ocean Currents: Geostrophic Flow and Wind-Driven 
Circulation with Interactive Python",739"body": "## What You'll Learn\n\nA hands-on exploration of Ocean Currents: Geostrophic Flow and Wind-Driven Circulation.\n\n\n**Key equations you'll work with:**\n- f = 2Ω sin φ\n- D_E = √(2A_v/f)\n- β = ∂f/∂y\n\n\n## What You'll See\n\n**Visualization:** current velocity maps and gyre patterns\n\n- See global ocean circulation patterns\n- Observe thermohaline circulation strength\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/oceanography/ocean_currents.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",740"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/oceanography/ocean_currents.tex",741"category": "templates",742"date": "2026-01-21",743"time": "09:00"744},745{746"source": "special_relativity",747"content_type": "template",748"subreddit": "CoCalc",749"title": "Learn Special Relativity: Lorentz Transformations and Relativistic Dynamics with Interactive Python",750"body": "## What You'll Learn\n\nThis computational report presents a comprehensive analysis of special relativity through the Lorentz transformation framework. We examine time dilation and length contraction effects across the full velocity range from non-relativistic to ultra-relativistic regimes, analyze the relativistic energy-momentum relation including rest mass energy and kinetic energy, construct spacetime diagrams showing worldlines and light cones, and verify theoretical predictions against experimental observations including atmospheric muon decay and GPS satellite time corrections. 
Numerical computations demonstrate the fundamental invariance of the spacetime interval and the unification of space and time in Minkowski geometry.\n\n\n**Key equations you'll work with:**\n- c\n- v\n- S\n\n\n## What You'll See\n\n**Visualization:** Minkowski diagrams and Lorentz transforms\n\n- See time dilation and length contraction\n- Observe relativity of simultaneity\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/relativity/special_relativity.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",751"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/relativity/special_relativity.tex",752"category": "templates",753"date": "2026-01-21",754"time": "09:00"755},756{757"source": "scheduling",758"content_type": "template",759"subreddit": "CoCalc",760"title": "Learn Production Scheduling: Algorithms and Optimization Strategies with Interactive Python",761"body": "## What You'll Learn\n\nA comprehensive analysis of deterministic scheduling problems across multiple machine configurations. We examine single-machine scheduling with SPT (Shortest Processing Time) and EDD (Earliest Due Date) rules, parallel machine scheduling using LPT (Longest Processing Time) heuristics, two-machine flow shop scheduling via Johnson's algorithm, and job shop scheduling complexity. Computational experiments demonstrate makespan minimization, tardiness reduction, and the trade-offs between optimality and computational tractability. 
Results show that Johnson's algorithm achieves optimal makespan of 83 min for a 10-job flow shop, while SPT reduces mean flow time by 42\\% compared to random sequencing.\n\n\n**Key equations you'll work with:**\n- n\n- J = {J₁, J₂, …, Jₙ}\n- m\n\n\n## What You'll See\n\n**Visualization:** Gantt charts and makespan optimization\n\n- See job sequencing on machines\n- Observe idle time minimization\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/operations-research/scheduling.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",762"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/operations-research/scheduling.tex",763"category": "templates",764"date": "2026-01-22",765"time": "09:00"766},767{768"source": "phase_diagram",769"content_type": "template",770"subreddit": "CoCalc",771"title": "Learn Binary Phase Diagrams: Computational Analysis with Interactive Python",772"body": "## What You'll Learn\n\nComputational analysis of binary phase diagrams in materials science. We examine isomorphous and eutectic systems, the lever rule for phase fraction calculations, cooling curve analysis, and Gibbs phase rule applications. 
Python-based computations provide quantitative analysis with dynamic visualization.\n\n\n**Key equations you'll work with:**\n- F\n- C\n- P\n\n\n## What You'll See\n\n**Visualization:** phase boundaries and tie lines\n\n- See solid-liquid-gas regions vs composition\n- Observe eutectic and peritectic reactions\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/materials-science/phase_diagram.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",773"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/materials-science/phase_diagram.tex",774"category": "templates",775"date": "2026-01-22",776"time": "09:00"777},778{779"source": "stress_analysis",780"content_type": "template",781"subreddit": "CoCalc",782"title": "Learn Stress Analysis: Mohr's Circle and Failure Theories with Interactive Python",783"body": "## What You'll Learn\n\nComputational analysis of stress states in solid mechanics. We examine stress transformation using Mohr's circle, principal stresses, von Mises equivalent stress, and common failure theories. 
Python-based computations provide quantitative analysis with dynamic visualization.\n\n\n**Key equations you'll work with:**\n- θ\n- σ₁\n- σ₂\n\n\n## What You'll See\n\n**Visualization:** stress contours and strain distributions\n\n- See stress concentrations at notches\n- Observe von Mises yield criteria\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/mechanical-engineering/stress_analysis.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",784"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/mechanical-engineering/stress_analysis.tex",785"category": "templates",786"date": "2026-01-23",787"time": "09:00"788},789{790"source": "groundwater_flow",791"content_type": "template",792"subreddit": "CoCalc",793"title": "Learn Groundwater Flow Analysis: Darcy's Law and Well Hydraulics with Interactive Python",794"body": "## What You'll Learn\n\nThis technical report presents comprehensive computational analysis of groundwater flow in porous media, focusing on aquifer hydraulics and well dynamics. We examine the fundamental principles of Darcy's law, develop analytical solutions for radial flow to wells using the Theis equation and Cooper-Jacob approximation, and implement numerical finite difference methods for solving the 2D groundwater flow equation. 
Pumping test analysis demonstrates determination of hydraulic parameters including transmissivity (T = 1.2 × 10^-3 m²/s) and storativity (S = 2.5 × 10^-4), with visualization of hydraulic head distributions and groundwater flow fields under various pumping scenarios.\n\n\n**Key equations you'll work with:**\n- T = 1.2 × 10^-3 m²/s\n- S = 2.5 × 10^-4\n- q\n\n\n## What You'll See\n\n**Visualization:** hydraulic head contours and flow paths\n\n- See groundwater moving through aquifers\n- Observe Darcy flow patterns and travel times\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/hydrology/groundwater_flow.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",795"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/hydrology/groundwater_flow.tex",796"category": "templates",797"date": "2026-01-23",798"time": "09:00"799},800{801"source": "perturbation_theory",802"content_type": "template",803"subreddit": "CoCalc",804"title": "Learn Time-Independent Perturbation Theory: Stark and Zeeman Effects in Hydrogen with Interactive Python",805"body": "## What You'll Learn\n\nA comprehensive analysis of time-independent perturbation theory applied to hydrogen atom energy levels. We examine non-degenerate and degenerate perturbation theory, computing first and second-order energy corrections for the quantum harmonic oscillator. The Stark effect (hydrogen in external electric field) and Zeeman effect (hydrogen in magnetic field) are analyzed computationally, with explicit calculation of energy level shifts and splittings. 
Variational methods provide upper bounds for ground state energies, demonstrating complementary approaches to approximate quantum solutions.\n\n\n**Key equations you'll work with:**\n- H = H₀ + λ V\n- H₀\n- λ V\n\n\n## What You'll See\n\n**Visualization:** energy level corrections and mixing coefficients\n\n- See perturbation effects on spectra\n- Observe convergence of perturbation series\n\n## Make It Yours\n\nExplore different potentials, modify particle numbers, or simulate new scenarios.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/quantum-mechanics/perturbation_theory.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",806"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/quantum-mechanics/perturbation_theory.tex",807"category": "templates",808"date": "2026-01-24",809"time": "09:00"810},811{812"source": "food_webs",813"content_type": "template",814"subreddit": "CoCalc",815"title": "Learn Food Web Dynamics: Trophic Structure, Stability, and Network Properties with Interactive Python",816"body": "## What You'll Learn\n\nThis computational ecology report examines food web dynamics through the lens of network theory and population dynamics. We analyze trophic structure using the cascade model, investigate predator-prey oscillations via Lotka-Volterra equations, and assess community stability through eigenvalue analysis of Jacobian matrices. The analysis demonstrates that network topology—specifically connectance, clustering, and modularity—critically influences stability patterns. We find that moderate connectance (C ≈ 0.15) maximizes persistence, while highly connected webs (C > 0.3) exhibit eigenvalue destabilization consistent with May's stability-complexity paradox. 
Simulation of a 20-species food web reveals characteristic oscillatory dynamics with period T ≈ 8.4 time units and predator-prey phase lags of π/2 radians.\n\n\n**Key equations you'll work with:**\n- C ≈ 0.15\n- C > 0.3\n- T ≈ 8.4\n\n\n## What You'll See\n\n**Visualization:** trophic network diagrams and energy flow\n\n- See predator-prey connections in ecosystems\n- Observe energy transfer efficiency between levels\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/ecology/food_webs.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",817"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/ecology/food_webs.tex",818"category": "templates",819"date": "2026-01-24",820"time": "09:00"821},822{823"source": "kmeans",824"content_type": "template",825"subreddit": "CoCalc",826"title": "Learn K-Means Clustering: Algorithm and Analysis with Interactive Python",827"body": "## What You'll Learn\n\nThis document presents a comprehensive study of K-means clustering, including algorithm implementation, convergence analysis, cluster quality metrics (silhouette score, inertia), the elbow method for optimal K selection, and comparison with other clustering approaches. 
We demonstrate practical considerations for initialization and scaling.\n\n\n**Key equations you'll work with:**\n- n\n- K\n- C_k\n\n\n## What You'll See\n\n**Visualization:** cluster assignments and centroid convergence\n\n- See data points grouped into clusters\n- Observe cluster separation quality\n\n## Make It Yours\n\nTune hyperparameters, swap in your data, or extend the model architecture.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/machine-learning/kmeans.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",828"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/machine-learning/kmeans.tex",829"category": "templates",830"date": "2026-01-25",831"time": "09:00"832},833{834"source": "antenna_design",835"content_type": "template",836"subreddit": "CoCalc",837"title": "Learn Antenna Design and Analysis: Radiation Patterns and Array Synthesis with Interactive Python",838"body": "## What You'll Learn\n\nThis engineering report presents a comprehensive computational analysis of antenna design principles, including radiation pattern characterization, dipole antenna theory, linear array synthesis, aperture antenna analysis, and impedance matching techniques. We examine the fundamental parameters of antenna performance including gain, directivity, beamwidth, and radiation efficiency. Computational analysis demonstrates the design and optimization of half-wave dipole antennas, uniform and non-uniform linear arrays with beam steering capabilities, horn antenna aperture analysis, and VSWR-based impedance matching networks. 
The analysis includes calculation of actual gain values (8.2 dBi for an 8-element array), beamwidth characteristics (12.5° for optimized designs), and array factor patterns for various element spacings and excitation distributions.\n\n\n**Key equations you'll work with:**\n- F(θ, φ)\n- E(θ, φ)\n- θ\n\n\n## What You'll See\n\n**Visualization:** radiation patterns and gain curves\n\n- See antenna directivity in 3D patterns\n- Observe beamwidth and sidelobe levels\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/electromagnetics/antenna_design.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",839"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/electromagnetics/antenna_design.tex",840"category": "templates",841"date": "2026-01-25",842"time": "09:00"843},844{845"source": "integration",846"content_type": "template",847"subreddit": "CoCalc",848"title": "Learn Numerical Integration Methods: Quadrature Algorithms and Error Analysis with Interactive Python",849"body": "## What You'll Learn\n\nA comprehensive analysis of numerical integration (quadrature) methods. We implement and compare the trapezoidal rule, Simpson's rule, Romberg integration, and Gaussian quadrature. Error analysis demonstrates convergence rates, and we verify results against analytically known integrals. 
All computations use PythonTeX for reproducibility.\n\n\n**Key equations you'll work with:**\n- f(x)\n- n\n- E_T = -((b-a)h²/12) f''(ξ) = O(h²)\n\n\n## What You'll See\n\n**Visualization:** quadrature approximations and error convergence\n\n- See trapezoid and Simpson rules in action\n- Observe order of accuracy effects\n\n## Make It Yours\n\nTest different initial conditions, compare convergence rates, or optimize parameters.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/numerical-methods/integration.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",850"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/numerical-methods/integration.tex",851"category": "templates",852"date": "2026-01-26",853"time": "09:00"854},855{856"source": "qubit_operations",857"content_type": "template",858"subreddit": "CoCalc",859"title": "Learn Qubit Operations and Quantum Gates: Pauli Gates, Superposition, Entanglement, and Bell States with Interactive Python",860"body": "## What You'll Learn\n\nThis report explores fundamental qubit operations and quantum gates. We implement Pauli gates (X, Y, Z), demonstrate superposition using Hadamard gates, create entangled Bell states, and visualize quantum states on the Bloch sphere. 
Matrix representations and state evolution are simulated using PythonTeX.\n\n\n**Key equations you'll work with:**\n- X² = Y² = Z² = I\n- X\n- Y\n\n\n## What You'll See\n\n**Visualization:** state vectors and measurement probabilities\n\n- See superposition and entanglement\n- Observe quantum interference effects\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/quantum-computing/qubit_operations.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",861"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/quantum-computing/qubit_op",862"category": "templates",863"date": "2026-01-26",864"time": "09:00"865},866{867"source": "copula_multivariate_dependence",868"content_type": "template",869"subreddit": "CoCalc",870"title": "I built a copula library in Python to understand multivariate dependence - here's what I learned about tail risk",871"body": "Ever wondered how to model the relationship between variables beyond simple correlation? That's what copulas do.\n\n**ELI5 Version:**\nImagine you have two stocks. Regular correlation tells you \"when one goes up, the other tends to go up.\" But copulas let you ask deeper questions like \"when one CRASHES, how likely is the other to crash too?\" That's tail dependence, and it matters a lot for risk management.\n\n**What I Built:**\n- Sampling functions for 5 copula families (Gaussian, Student-t, Clayton, Gumbel, Frank)\n- Density and CDF calculations\n- Kendall's tau verification (empirical vs theoretical)\n- Tail dependence estimation\n\n**Key Takeaways:**\n\n1. **Sklar's Theorem** is the foundation: Any joint distribution H(x,y) can be written as C(F(x), G(y)) where C is the copula and F,G are marginals.\n\n2. 
**Different copulas capture different tail behaviors:**\n - Gaussian: No tail dependence (λL = λU = 0)\n - Clayton: Lower tail dependence (λL = 2^(-1/θ))\n - Gumbel: Upper tail dependence (λU = 2 - 2^(1/θ))\n\n3. **This matters for finance:** When I simulated portfolio returns, the Clayton copula showed significantly worse 5% VaR because it captures joint downside moves.\n\n**The Math (simplified):**\n- Kendall's τ for Gaussian copula: τ = (2/π)·arcsin(ρ)\n- For Clayton: τ = θ/(θ+2)\n- For Gumbel: τ = 1 - 1/θ\n\nAll implemented in pure NumPy/SciPy - no external copula libraries needed.\n\nView the full interactive notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/copula_multivariate_dependence.ipynb",872"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/copula_multivariate_dependence.ipynb",873"category": "general",874"date": "2026-01-27",875"time": "09:00"876},877{878"source": "renewable_integration",879"content_type": "template",880"subreddit": "CoCalc",881"title": "Learn Grid Integration of Variable Renewable Energy: Solar PV and Wind Power Analysis with Energy Storage with Interactive Python",882"body": "## What You'll Learn\n\nA comprehensive computational analysis of renewable energy integration challenges in modern electric grids. We examine the variability characteristics of solar photovoltaic (PV) and wind power generation, analyze the duck curve phenomenon resulting from high solar penetration, calculate energy storage requirements for grid balancing, and assess grid stability metrics under various renewable penetration scenarios. 
The analysis includes realistic time-series modeling of solar irradiance and wind speed, capacity factor calculations, ramping rate constraints, and battery energy storage system (BESS) sizing for maintaining grid reliability with up to 60\\% renewable energy penetration.\n\n\n**Key equations you'll work with:**\n- P_renewable(t)\n- P_demand(t)\n- G(t)\n\n\n## What You'll See\n\n**Visualization:** generation mix and variability curves\n\n- See renewable intermittency effects\n- Observe grid stability challenges\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/power-systems/renewable_integration.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",883"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/power-systems/renewable_integration.tex",884"category": "templates",885"date": "2026-01-27",886"time": "09:00"887},888{889"source": "clinical_report",890"content_type": "template",891"subreddit": "CoCalc",892"title": "Learn Phase III Clinical Trial Analysis with Interactive Python",893"body": "## What You'll Learn\n\nA hands-on exploration of Phase III Clinical Trial Analysis.\n\n\n\n## What You'll See\n\n**Visualization:** clinical data visualizations with R/knitr integration\n\n- See patient outcomes and statistical summaries\n- Observe treatment effects in clinical data\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/knitr/clinical_report.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",894"link": 
"https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/knitr/clinical_report.tex",895"category": "templates",896"date": "2026-01-28",897"time": "09:00"898},899{900"source": "laser_physics",901"content_type": "template",902"subreddit": "CoCalc",903"title": "Learn Laser Physics: Rate Equations, Threshold Analysis, and Gaussian Beam Propagation with Interactive Python",904"body": "## What You'll Learn\n\nA comprehensive computational analysis of laser physics fundamentals, including Einstein A and B coefficients, population inversion dynamics, threshold conditions for laser oscillation, cavity mode structures, and Gaussian beam propagation. We derive the rate equations from first principles, analyze the threshold pump power for a four-level laser system (Nd:YAG-like), simulate relaxation oscillations and Q-switching dynamics, and characterize Gaussian beam propagation including beam waist evolution, Rayleigh range, and the M² beam quality factor. 
Computational results demonstrate threshold behavior at pump rates of 2.6 × 10⁷ s⁻¹, relaxation oscillation frequencies near 50 kHz, Q-switched giant pulse generation with peak intensities exceeding CW operation by orders of magnitude, and diffraction-limited beam propagation with M² = 1.0 for a 100 μm waist yielding Rayleigh range 29.5 mm.\n\n\n**Key equations you'll work with:**\n- A_21\n- B_21\n- B_12\n\n\n## What You'll See\n\n**Visualization:** gain curves and mode competition\n\n- See laser threshold and output power\n- Observe population inversion dynamics\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/photonics/laser_physics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",905"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/photonics/laser_physics.tex",906"category": "templates",907"date": "2026-01-28",908"time": "09:00"909},910{911"source": "musical_acoustics",912"content_type": "template",913"subreddit": "CoCalc",914"title": "Learn Musical Acoustics Harmonic Analysis and Instrument Modeling with Interactive Python",915"body": "## What You'll Learn\n\nComputational analysis of musical acoustics including harmonic series, string vibrations, wind instrument resonances, and psychoacoustic phenomena.\n\n\n**Key equations you'll work with:**\n- fₙ = n f₁\n- f_n\n- f₁\n\n\n## What You'll See\n\n**Visualization:** frequency spectrum of musical instruments showing harmonic overtones\n\n- See the harmonic series and overtone patterns\n- Observe how instrument timbre emerges from harmonic content\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** 
https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/acoustics/musical_acoustics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",916"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/acoustics/musical_acoustics.tex",917"category": "templates",918"date": "2026-01-29",919"time": "09:00"920},921{922"source": "plasma_waves",923"content_type": "template",924"subreddit": "CoCalc",925"title": "Learn Plasma Wave Dispersion and Kinetic Theory Computational Analysis of Electrostatic and Electromagnetic Modes with Interactive Python",926"body": "## What You'll Learn\n\nThis computational study examines wave propagation in magnetized plasmas using kinetic theory. We derive and numerically solve dispersion relations for electrostatic waves (Langmuir and ion acoustic modes) and electromagnetic waves (ordinary and extraordinary modes). The analysis includes Landau damping calculations from the plasma dispersion function, wave-particle resonances at cyclotron harmonics, and parametric decay instabilities. Numerical solutions reveal the transition from fluid-like behavior at long wavelengths to kinetic effects at scales comparable to the Debye length. We compute growth rates for stimulated Raman scattering and demonstrate the diagnostic potential of interferometry and reflectometry for density measurements. 
Results provide quantitative predictions for wave behavior in fusion plasmas, space physics applications, and laser-plasma interactions.\n\n\n**Key equations you'll work with:**\n- ω(k)\n- ω\n- k\n\n\n## What You'll See\n\n**Visualization:** dispersion relations and wave modes\n\n- See electromagnetic waves in plasma\n- Observe cutoffs and resonances\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/plasma-physics/plasma_waves.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",927"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/plasma-physi",928"category": "templates",929"date": "2026-01-29",930"time": "09:00"931},932{933"source": "band_theory",934"content_type": "template",935"subreddit": "CoCalc",936"title": "Learn Semiconductor Band Theory: From Bloch's Theorem to Effective Mass with Interactive Python",937"body": "## What You'll Learn\n\nThis comprehensive analysis of semiconductor band theory provides computational insights into the electronic structure of crystalline solids. We derive band structures using the Kronig-Penney model for one-dimensional periodic potentials, compute effective masses from parabolic band approximations, and calculate density of states for various dimensionalities. The analysis demonstrates the origin of band gaps (E_g ≈ 1.1 eV for Si), the relationship between effective mass and band curvature, and the impact of quantum confinement on density of states in low-dimensional semiconductor structures. 
These fundamental concepts underpin modern semiconductor device physics from transistors to quantum dots.\n\n\n**Key equations you'll work with:**\n- E_g ≈ 1.1 eV\n- V(r + R) = V(r)\n- R\n\n\n## What You'll See\n\n**Visualization:** band structures and density of states\n\n- See energy bands in semiconductors\n- Observe bandgap and effective mass\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/semiconductor/band_theory.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",938"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/semiconductor/band_theory.tex",939"category": "templates",940"date": "2026-01-30",941"time": "09:00"942},943{944"source": "pharmacokinetics",945"content_type": "template",946"subreddit": "CoCalc",947"title": "Learn Pharmacokinetics Compartment Models and Drug Concentration with Interactive Python",948"body": "## What You'll Learn\n\nComputational modeling of drug absorption, distribution, metabolism, and elimination using compartment models.\n\n\n**Key equations you'll work with:**\n- dC/dt = -k_e C\n- t_{1/2}\n- C_{max}\n\n\n## What You'll See\n\n**Visualization:** drug concentration curves and compartment models\n\n- See absorption, distribution, and elimination phases\n- Observe half-life and therapeutic windows\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/biomedical/pharmacokinetics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",949"link": 
"https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/biomedical/pharmacokinetics.tex",950"category": "templates",951"date": "2026-01-30",952"time": "09:00"953},954{955"source": "room_acoustics",956"content_type": "template",957"subreddit": "CoCalc",958"title": "Learn Room Acoustics Analysis Reverberation and Sound Field Modeling with Interactive Python",959"body": "## What You'll Learn\n\nThis technical report presents computational analysis of room acoustics including reverberation time calculations using Sabine and Eyring equations, sound absorption modeling, and acoustic parameter optimization.\n\n\n**Key equations you'll work with:**\n- T_60\n- α\n- T_60 = 0.161 V/A\n\n\n## What You'll See\n\n**Visualization:** reverberation time decay and room impulse response\n\n- Watch sound energy decay over time in different room geometries\n- See how room shape affects acoustic quality\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/acoustics/room_acoustics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",960"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/acoustics/room_acoustics.tex",961"category": "templates",962"date": "2026-01-31",963"time": "09:00"964},965{966"source": "gene_expression",967"content_type": "template",968"subreddit": "CoCalc",969"title": "Learn Gene Expression Analysis: From RNA-Seq to Pathway Enrichment A Comprehensive Guide to Differential Expression Analysis with Interactive Python",970"body": "## What You'll Learn\n\nThis comprehensive analysis presents methods for analyzing gene expression data from RNA-sequencing experiments. 
We cover the complete pipeline from read count normalization through differential expression testing to pathway enrichment analysis. Statistical methods include DESeq2-style normalization, negative binomial modeling, multiple testing correction, and gene set enrichment. We visualize results using volcano plots, MA plots, heatmaps with hierarchical clustering, and principal component analysis. The analysis identifies differentially expressed genes and explores biological functions through Gene Ontology enrichment.\n\n\n**Key equations you'll work with:**\n- cᵢ\n- i\n- lᵢ\n\n\n## What You'll See\n\n**Visualization:** heatmaps and volcano plots of differential expression\n\n- See gene clusters and expression patterns across samples\n- Observe significant up/down-regulated genes\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/bioinformatics/gene_expression.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",971"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/b",972"category": "templates",973"date": "2026-01-31",974"time": "09:00"975},976{977"source": "big_bang",978"content_type": "template",979"subreddit": "CoCalc",980"title": "Learn Big Bang Cosmology: Friedmann Dynamics and Primordial Nucleosynthesis with Interactive Python",981"body": "## What You'll Learn\n\nThis computational analysis examines the thermal and dynamical history of the early universe within the framework of the standard ΛCDM cosmological model. We numerically solve the Friedmann equations to determine scale factor evolution through the radiation-dominated, matter-dominated, and dark-energy-dominated epochs. 
The cosmic microwave background (CMB) temperature redshift relationship T(z) = T₀(1+z) is validated across z = 0 to z = 1100. Big Bang nucleosynthesis (BBN) is modeled to compute primordial abundances of light elements (H, D, ³He, ⁴He, ⁷Li), demonstrating agreement with observational constraints from the Planck satellite and WMAP. We also derive the age of the universe and Hubble parameter evolution, obtaining t₀ = 13.8 Gyr and H₀ = 67.4 km/s/Mpc consistent with Planck 2018 results.\n\n\n**Key equations you'll work with:**\n- Λ\n- T(z) = T₀(1+z)\n- z = 0\n\n\n## What You'll See\n\n**Visualization:** scale factor evolution and cosmic timeline\n\n- See universe expansion from initial singularity\n- Observe radiation-matter-dark energy eras\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/cosmology/big_bang.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",982"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/cosmology/big_bang.tex",983"category": "templates",984"date": "2026-02-01",985"time": "09:00"986},987{988"source": "root_finding",989"content_type": "template",990"subreddit": "CoCalc",991"title": "Learn Root Finding Algorithms: Convergence Analysis and Comparison with Interactive Python",992"body": "## What You'll Learn\n\nA comprehensive analysis of root-finding algorithms for nonlinear equations. We implement and compare bisection, Newton-Raphson, secant, and Brent's methods. Convergence rates are analyzed theoretically and verified computationally. 
The importance of initial guesses and potential pitfalls of each method are demonstrated through examples.\n\n\n**Key equations you'll work with:**\n- f(x) = 0\n- |e_{n+1}| ≤ C|e_n|\n- C < 1\n\n\n## What You'll See\n\n**Visualization:** iteration sequences converging to roots\n\n- See Newton and bisection methods converge\n- Observe convergence rates by method\n\n## Make It Yours\n\nTest different initial conditions, compare convergence rates, or optimize parameters.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/numerical-methods/root_finding.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",993"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/numerical-methods/root_finding.tex",994"category": "templates",995"date": "2026-02-01",996"time": "09:00"997},998{999"source": "linear_programming",1000"content_type": "template",1001"subreddit": "CoCalc",1002"title": "Learn Linear Programming: Simplex Method, Duality, and Sensitivity Analysis with Interactive Python",1003"body": "## What You'll Learn\n\nThis technical report presents a comprehensive analysis of linear programming (LP) techniques, including graphical solutions for two-variable problems, the simplex algorithm for higher-dimensional optimization, duality theory, and sensitivity analysis. We examine a production planning problem, demonstrate the simplex tableau method, derive the dual problem, and analyze shadow prices and reduced costs. 
Computational implementation using Python's scipy.optimize.linprog validates theoretical results and illustrates practical applications in resource allocation.\n\n\n**Key equations you'll work with:**\n- c ∈ ℝⁿ\n- A ∈ ℝ^{m × n}\n- b ∈ ℝ^m\n\n\n## What You'll See\n\n**Visualization:** feasible regions and optimal vertices\n\n- See constraint boundaries and objective contours\n- Observe simplex method progress\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/operations-research/linear_programming.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1004"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/operations-research/linear_programming.tex",1005"category": "templates",1006"date": "2026-02-02",1007"time": "09:00"1008},1009{1010"source": "sentiment",1011"content_type": "template",1012"subreddit": "CoCalc",1013"title": "Learn Sentiment Analysis: Lexicon-Based and Machine Learning Approaches with Interactive Python",1014"body": "## What You'll Learn\n\nA hands-on exploration of Sentiment Analysis: Lexicon-Based and Machine Learning Approaches.\n\n\n**Key equations you'll work with:**\n- vᵢ\n- i\n- α\n\n\n## What You'll See\n\n**Visualization:** sentiment scores and word clouds\n\n- See positive/negative sentiment distributions\n- Observe key sentiment-bearing words\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/nlp/sentiment.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1015"link": 
"https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/nlp/sentiment.tex",1016"category": "templates",1017"date": "2026-02-02",1018"time": "09:00"1019},1020{1021"source": "visualization",1022"content_type": "template",1023"subreddit": "CoCalc",1024"title": "Learn Data Visualization: Principles and Practice with Interactive Python",1025"body": "## What You'll Learn\n\nThis document explores comprehensive data visualization techniques, including various plot types for different data characteristics, color palette design with accessibility considerations, perceptual principles, and dashboard composition. We demonstrate best practices for effective visual communication of quantitative information.\n\n\n**Key equations you'll work with:**\n- ≥ 12\n\n\n## What You'll See\n\n**Visualization:** multi-dimensional data representations\n\n- See patterns emerge from data visualization\n- Observe correlations and outliers\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/data-science/visualization.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1026"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/data-science/visualization.tex",1027"category": "templates",1028"date": "2026-02-03",1029"time": "09:00"1030},1031{1032"source": "grover_search",1033"content_type": "template",1034"subreddit": "CoCalc",1035"title": "Learn Grover's Quantum Search Algorithm: Oracle, Diffusion, and Amplitude Amplification with Interactive Python",1036"body": "## What You'll Learn\n\nA comprehensive analysis of Grover's quantum search algorithm. 
We implement the oracle and diffusion operators, demonstrate amplitude amplification, analyze the quadratic speedup over classical search, and explore the algorithm's complexity. All simulations use matrix representations with PythonTeX for reproducibility.\n\n\n**Key equations you'll work with:**\n- N\n- O(√N)\n- O(N)\n\n\n## What You'll See\n\n**Visualization:** amplitude evolution through Grover iterations\n\n- See target state amplitude grow\n- Observe quadratic speedup emerge\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/quantum-computing/grover_search.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1037"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/quantum-computing/grover_search.tex",1038"category": "templates",1039"date": "2026-02-03",1040"time": "09:00"1041},1042{1043"source": "ocean_productivity",1044"content_type": "template",1045"subreddit": "CoCalc",1046"title": "Learn Ocean Productivity: Photosynthesis-Irradiance Relationships and Nutrient-Limited Primary Production with Interactive Python",1047"body": "## What You'll Learn\n\nA comprehensive computational analysis of ocean primary productivity, focusing on photosynthesis-irradiance (P-I) relationships, nutrient limitation dynamics, and seasonal phytoplankton bloom patterns. We examine the classic P-I curve parameterization (Jassby \\& Platt 1976), Michaelis-Menten nutrient uptake kinetics, depth-integrated net primary production (NPP), and the interplay between mixed layer dynamics and euphotic zone structure. 
Computational simulations demonstrate seasonal cycles in temperate ocean regions, quantifying spring bloom magnitude, summer nutrient depletion, and annual carbon export.\n\n\n**Key equations you'll work with:**\n- P = P_max tanh(αI/P_max)\n- V = V_max S/(K_S + S)\n\n\n## What You'll See\n\n**Visualization:** chlorophyll maps and nutrient distributions\n\n- See primary production patterns in oceans\n- Observe upwelling and nutrient limitation\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/marine-biology/ocean_productivity.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1048"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/marine-biology/ocean_productivity.tex",1049"category": "templates",1050"date": "2026-02-04",1051"time": "09:00"1052},1053{1054"source": "mhd",1055"content_type": "template",1056"subreddit": "CoCalc",1057"title": "Learn Magnetohydrodynamics: Plasma Confinement and Stability with Interactive Python",1058"body": "## What You'll Learn\n\nMagnetohydrodynamics (MHD) describes the macroscopic behavior of electrically conducting fluids, treating plasma as a single conducting fluid coupled to electromagnetic fields. This report presents computational analysis of fundamental MHD wave modes, stability boundaries, and magnetic reconnection dynamics. We solve the MHD dispersion relation for Alfvén and magnetosonic waves, compute growth rates for kink and sausage instabilities in cylindrical plasmas, and model Sweet-Parker magnetic reconnection. The analysis demonstrates how magnetic field tension stabilizes plasma columns, how plasma beta determines wave propagation characteristics, and how resistivity controls reconnection rates. 
Results include Alfvén velocities of 500–5000 km/s for fusion-relevant parameters, instability growth times of 10–100 μs for laboratory plasmas, and reconnection timescales of milliseconds to seconds depending on magnetic Reynolds number.\n\n\n**Key equations you'll work with:**\n- μ\n- v_A = B/√(μ₀ρ)\n- θ\n\n\n## What You'll See\n\n**Visualization:** magnetic field lines and plasma flow\n\n- See magnetohydrodynamic instabilities\n- Observe magnetic reconnection events\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/plasma-physics/mhd.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1059"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/plasma-physics/mhd.tex",1060"category": "templates",1061"date": "2026-02-04",1062"time": "09:00"1063},1064{1065"source": "wave_propagation",1066"content_type": "template",1067"subreddit": "CoCalc",1068"title": "Learn Electromagnetic Wave Propagation: From Maxwell's Equations to Waveguides with Interactive Python",1069"body": "## What You'll Learn\n\nThis technical report presents a comprehensive computational analysis of electromagnetic wave propagation phenomena derived from Maxwell's equations. We examine plane wave solutions in free space and lossy media, analyze reflection and transmission at dielectric interfaces using Fresnel coefficients, investigate rectangular waveguide modes with cutoff frequencies and dispersion relations, and characterize skin depth and attenuation in conductors. 
The analysis includes numerical computation of Brewster angles, total internal reflection conditions, TE/TM mode field distributions, and frequency-dependent penetration depths relevant to RF engineering and microwave systems.\n\n\n**Key equations you'll work with:**\n- D = εE\n- B = μH\n- v_p = 1/√(με)\n\n\n## What You'll See\n\n**Visualization:** field patterns and attenuation curves\n\n- See EM waves propagating through media\n- Observe frequency-dependent propagation\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/electromagnetics/wave_propagation.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1070"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/electromagnetics/wave_propagation.tex",1071"category": "templates",1072"date": "2026-02-05",1073"time": "09:00"1074},1075{1076"source": "elliptic_curves",1077"content_type": "template",1078"subreddit": "SageMath",1079"title": "Learn Elliptic Curve Cryptography: From Mathematical Foundations to Digital Signatures with Interactive Python",1080"body": "## What You'll Learn\n\nA comprehensive analysis of elliptic curve cryptography (ECC), exploring the mathematical foundations of elliptic curves over finite fields and their application to public-key cryptography. We implement point addition and scalar multiplication algorithms, demonstrate the Elliptic Curve Digital Signature Algorithm (ECDSA), and analyze the computational hardness of the Elliptic Curve Discrete Logarithm Problem (ECDLP) that underpins ECC security. 
Through computational examples using the secp256k1 curve (Bitcoin) and NIST P-256, we illustrate key generation, ECDH key exchange, and digital signature verification with realistic cryptographic parameters.\n\n\n**Key equations you'll work with:**\n- P\n- Q = kP\n- k\n\n\n## What You'll See\n\n**Visualization:** elliptic curve point plots and group operations\n\n- See geometric addition on elliptic curves\n- Observe the discrete log problem difficulty\n\n## Make It Yours\n\nChange key sizes, implement different algorithms, or test your security scenarios.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/cryptography/elliptic_curves.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1081"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/cryptography/elliptic_curves.tex",1082"category": "templates",1083"date": "2026-02-05",1084"time": "09:00"1085},1086{1087"source": "metapopulation",1088"content_type": "template",1089"subreddit": "CoCalc",1090"title": "Learn Metapopulation Dynamics: Patch Occupancy Models and Spatial Population Persistence with Interactive Python",1091"body": "## What You'll Learn\n\nMetapopulation theory provides a framework for understanding how spatial population structure influences persistence in fragmented landscapes. We analyze the classical Levins model of patch occupancy dynamics, where local populations experience colonization and extinction at the patch level. Using computational simulations, we examine equilibrium occupancy patterns, extinction thresholds, and the critical colonization-extinction ratio that determines metapopulation viability. 
Our analysis reveals that the metapopulation equilibrium p^* = 1 - e/c requires colonization rates (c) to exceed extinction rates (e) for persistence, with spatial connectivity playing a crucial role in maintaining population networks. We extend the analysis to source-sink dynamics, incidence functions dependent on patch characteristics, and extinction debt following habitat loss. Results demonstrate that metapopulations can persist regionally despite local extinctions when sufficient patch connectivity maintains colonization-extinction balance, but face collapse when habitat destruction reduces metapopulation capacity below critical thresholds.\n\n\n**Key equations you'll work with:**\n- p^* = 1 - e/c\n- c\n- e\n\n\n## What You'll See\n\n**Visualization:** patch occupancy dynamics and extinction risk\n\n- See populations blinking on and off across landscape\n- Observe connectivity effects on persistence\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/ecology/metapopulation.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1092"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/ecology/metapopulation.tex",1093"category": "templates",1094"date": "2026-02-06",1095"time": "09:00"1096},1097{1098"source": "microwave",1099"content_type": "template",1100"subreddit": "CoCalc",1101"title": "Learn Microwave Engineering Analysis S-Parameters, Transmission Lines, and Impedance Matching with Interactive Python",1102"body": "## What You'll Learn\n\nThis computational study presents comprehensive analysis of microwave network parameters, including S-parameter characterization of two-port networks, transmission line impedance transformations, Smith chart impedance matching techniques, and microstrip 
transmission line design. We analyze VSWR (Voltage Standing Wave Ratio), return loss, insertion loss, and reflection coefficients for practical microwave circuits operating at frequencies from 1 GHz to 10 GHz. Impedance matching networks using quarter-wave transformers and stub tuning are designed and evaluated. Microstrip line design equations incorporating effective permittivity and characteristic impedance are implemented for FR-4 and Rogers RO4003C substrates.\n\n\n**Key equations you'll work with:**\n- Z₀\n- β\n- L\n\n\n## What You'll See\n\n**Visualization:** S-parameters and Smith charts\n\n- See impedance matching on complex plane\n- Observe reflection and transmission behavior\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/electromagnetics/microwave.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1103"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/electromagnetics/microwave.tex",1104"category": "templates",1105"date": "2026-02-06",1106"time": "09:00"1107},1108{1109"source": "decision_tree",1110"content_type": "template",1111"subreddit": "CoCalc",1112"title": "Learn Decision Trees: Theory and Implementation with Interactive Python",1113"body": "## What You'll Learn\n\nThis document presents a comprehensive analysis of decision tree algorithms for classification and regression. 
We explore information gain, Gini impurity, and variance reduction as splitting criteria, implement tree construction from scratch, examine pruning techniques, and analyze feature importance measures.\n\n\n**Key equations you'll work with:**\n- p_c\n- c\n- S\n\n\n## What You'll See\n\n**Visualization:** tree structure and decision boundaries\n\n- See classification rules in tree form\n- Observe feature importance and splits\n\n## Make It Yours\n\nTune hyperparameters, swap in your data, or extend the model architecture.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/machine-learning/decision_tree.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1114"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/machine-learning/decision_tree.tex",1115"category": "templates",1116"date": "2026-02-07",1117"time": "09:00"1118},1119{1120"source": "population_dynamics",1121"content_type": "template",1122"subreddit": "CoCalc",1123"title": "Learn Population Dynamics: Predator-Prey Models and Stability Analysis with Interactive Python",1124"body": "## What You'll Learn\n\nThis tutorial presents a comprehensive analysis of predator-prey population dynamics using the Lotka-Volterra equations and their extensions. We examine the classical model, perform stability analysis of equilibrium points, and explore modifications including carrying capacity and functional responses. 
Computational analysis demonstrates phase portraits, limit cycles, and the ecological implications of parameter variations.\n\n\n**Key equations you'll work with:**\n- x\n- y\n- α\n\n\n## What You'll See\n\n**Visualization:** predator-prey oscillations and phase portraits\n\n- See Lotka-Volterra cycles in population space\n- Observe the lag between predator and prey peaks\n\n## Make It Yours\n\nChange population parameters, add new species, or simulate your biological system.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/biology/population_dynamics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1125"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/biology/population_dynamics.tex",1126"category": "templates",1127"date": "2026-02-07",1128"time": "09:00"1129},1130{1131"source": "black_holes",1132"content_type": "template",1133"subreddit": "CoCalc",1134"title": "Learn Black Hole Physics Schwarzschild Radius, Accretion, and Hawking Radiation with Interactive Python",1135"body": "## What You'll Learn\n\nComputational analysis of black hole physics including Schwarzschild geometry, accretion disk properties, and Hawking radiation calculations.\n\n\n**Key equations you'll work with:**\n- r_s = 2GM/c²\n- M_☉\n- r/r_s\n\n\n## What You'll See\n\n**Visualization:** spacetime curvature and photon orbits near event horizon\n\n- See gravitational lensing effects around black holes\n- Observe how mass warps spacetime geometry\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/astrophysics/black_holes.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1136"link": 
"https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/astrophysics/black_holes.tex",1137"category": "templates",1138"date": "2026-02-08",1139"time": "09:00"1140},1141{1142"source": "air_pollution",1143"content_type": "template",1144"subreddit": "CoCalc",1145"title": "Learn Air Pollution Dispersion Gaussian Plume Modeling with Interactive Python",1146"body": "## What You'll Learn\n\nComputational analysis of air pollution dispersion using Gaussian plume models and deposition calculations.\n\n\n**Key equations you'll work with:**\n- C(x,y,z)\n- σ_y, σ_z\n- μg/m³\n\n\n## What You'll See\n\n**Visualization:** pollutant concentration maps and dispersion plumes\n\n- See how wind carries pollutants from sources\n- Observe atmospheric mixing and dilution patterns\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/atmospheric-science/air_pollution.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1147"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/atmospheric-science/air_pollution.tex",1148"category": "templates",1149"date": "2026-02-08",1150"time": "09:00"1151},1152{1153"source": "engineering_report",1154"content_type": "template",1155"subreddit": "CoCalc",1156"title": "Learn Automated Signal Analysis Report with Interactive Python",1157"body": "## What You'll Learn\n\nA hands-on exploration of Automated Signal Analysis Report.\n\n\n**Key equations you'll work with:**\n- y(t)\n- ζ, ωₙ\n- t_peak\n\n\n## What You'll See\n\n**Visualization:** engineering calculations with embedded Python\n\n- See computed results integrated with text\n- Observe reproducible technical analysis\n\n## Make It Yours\n\nModify parameters, extend the 
analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/pythontex/engineering_report.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1158"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/pythontex/engineering_report.tex",1159"category": "templates",1160"date": "2026-02-09",1161"time": "09:00"1162},1163{1164"source": "atmospheric_dynamics",1165"content_type": "template",1166"subreddit": "CoCalc",1167"title": "Learn Atmospheric Dynamics Geostrophic Wind and Thermal Balance with Interactive Python",1168"body": "## What You'll Learn\n\nAnalysis of atmospheric dynamics including geostrophic balance, thermal wind, and jet stream formation.\n\n\n**Key equations you'll work with:**\n- Ro = U/fL\n- f = 2Ω sin φ\n\n\n## What You'll See\n\n**Visualization:** pressure fields and wind vector patterns\n\n- See cyclonic and anticyclonic flow patterns\n- Observe geostrophic balance in atmospheric motion\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/atmospheric-science/atmospheric_dynamics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1169"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/atmospheric-science/atmospheric_dynamics.tex",1170"category": "templates",1171"date": "2026-02-09",1172"time": "09:00"1173},1174{1175"source": "orbital_mechanics",1176"content_type": "template",1177"subreddit": "CoCalc",1178"title": "Learn Orbital Mechanics: Hohmann Transfers, Orbital Elements, and Ground Tracks A Comprehensive Analysis of Spacecraft Trajectory Design with Interactive 
Python",1179"body": "## What You'll Learn\n\nThis textbook-style analysis presents the fundamentals of orbital mechanics for spacecraft mission design. We examine Hohmann transfer orbits between circular orbits, compute orbital elements from state vectors, and generate ground tracks for various orbit types. The analysis covers delta-v budgets for LEO-to-GEO transfers, interplanetary trajectory concepts, and the effects of orbital inclination on ground coverage patterns.\n\n\n**Key equations you'll work with:**\n- a\n- e\n- i\n\n\n## What You'll See\n\n**Visualization:** orbital trajectories and Hohmann transfer ellipses\n\n- See spacecraft paths around Earth with velocity vectors\n- Observe how delta-v determines orbital changes\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/aerospace/orbital_mechanics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1180"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates",1181"category": "templates",1182"date": "2026-02-10",1183"time": "09:00"1184},1185{1186"source": "radioactive_decay",1187"content_type": "template",1188"subreddit": "CoCalc",1189"title": "Learn Radioactive Decay Chains: Bateman Equations and Activity Calculations with Interactive Python",1190"body": "## What You'll Learn\n\nThis technical report presents comprehensive computational analysis of radioactive decay chains using the Bateman equations. We implement solutions for sequential decay series, compute activities as functions of time, and analyze equilibrium conditions including secular and transient equilibrium. 
Applications include medical isotope production, nuclear waste management, radiometric dating, and radiation dosimetry.\n\n\n**Key equations you'll work with:**\n- λ = ln(2)/t_1/2\n- t_1/2 (half-life)\n- N₁ → N₂ → N₃ (decay chain)\n\n\n## What You'll See\n\n**Visualization:** decay curves and activity time series\n\n- See exponential decay of radioactive nuclei\n- Observe half-life and decay chains\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/nuclear-physics/radioactive_decay.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/nuclear-physics/radioactive_decay.tex","category": "templates","date": "2026-02-10","time": "09:00"},{"source": "reaction_kinetics","content_type": "template","subreddit": "CoCalc","title": "Learn Chemical Reaction Kinetics: Rate Laws, Mechanisms, and Catalysis with Interactive Python","body": "## What You'll Learn\n\nThis study presents a comprehensive analysis of chemical reaction kinetics, examining rate laws for reactions of different orders, temperature dependence through the Arrhenius equation, and the effects of catalysis on reaction rates. We analyze experimental concentration-time data to determine rate constants, activation energies, and pre-exponential factors. 
Computational analysis demonstrates the integrated rate laws, half-life relationships, and mechanistic interpretation of kinetic data.\n\n\n**Key equations you'll work with:**\n- aA + bB → products\n- k (rate constant)\n- m (reaction order)\n\n\n## What You'll See\n\n**Visualization:** concentration vs time curves and Arrhenius plots\n\n- See reactant depletion and product formation\n- Observe activation energy effects on rate\n\n## Make It Yours\n\nModify reaction rates, add new species, or simulate different conditions.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/chemistry/reaction_kinetics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/chemistry/reaction_kinetics.tex","category": "templates","date": "2026-02-11","time": "09:00"},{"source": "aerodynamic_lift","content_type": "template","subreddit": "CoCalc","title": "Learn Aerodynamic Lift Analysis: From Thin Airfoil Theory to Computational Modeling - Multi-Airfoil Comparison with Reynolds Number Effects with Interactive Python","body": "## What You'll Learn\n\nThis technical report presents a comprehensive analysis of aerodynamic lift characteristics for various airfoil configurations. We examine lift coefficient behavior as a function of angle of attack across multiple NACA airfoil series, investigate Reynolds number effects on boundary layer transition, and compute optimal flight conditions for maximum aerodynamic efficiency. 
The analysis includes thin airfoil theory validation, stall modeling, and drag polar construction for performance envelope determination.\n\n\n**Key equations you'll work with:**\n- C_L (lift coefficient)\n- C_D (drag coefficient)\n- L (lift force)\n\n\n## What You'll See\n\n**Visualization:** pressure distribution and lift coefficient vs angle of attack\n\n- See airfoil pressure contours and CL curves\n- Observe stall behavior and lift generation\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/aerospace/aerodynamic_lift.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/aerospace/aerodynamic_lift.tex","category": "templates","date": "2026-02-11","time": "09:00"},{"source": "hash_functions","content_type": "template","subreddit": "CoCalc","title": "Learn Cryptographic Hash Functions: SHA-256 Analysis and Security Properties with Interactive Python","body": "## What You'll Learn\n\nA comprehensive computational analysis of cryptographic hash functions, with focus on the SHA-256 algorithm and its security properties. We examine the three fundamental requirements of cryptographic hash functions: preimage resistance, second preimage resistance, and collision resistance. Through computational experiments, we demonstrate the avalanche effect where single-bit input changes produce dramatically different hash outputs, analyze birthday attack probabilities for collision finding, and explore applications in digital signatures, blockchain technology, and password hashing. 
Results confirm SHA-256's strong security properties, including a 50% bit flip probability under minimal input perturbation.\n\n\n**Key equations you'll work with:**\n- H: {0,1}^* → {0,1}ⁿ\n- n (digest length in bits)\n- h (hash value)\n\n\n## What You'll See\n\n**Visualization:** avalanche effect and collision resistance metrics\n\n- See how small input changes cascade to output\n- Observe uniform distribution of hash values\n\n## Make It Yours\n\nChange key sizes, implement different algorithms, or test your security scenarios.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/cryptography/hash_functions.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/cryptography/hash_functions.tex","category": "templates","date": "2026-02-12","time": "09:00"},{"source": "epidemiology_sir","content_type": "template","subreddit": "CoCalc","title": "Learn Epidemiological Modeling: SIR Dynamics - Parameter Sensitivity, Interventions, and Real-World Context with Interactive Python","body": "## What You'll Learn\n\nA comprehensive analysis of the SIR (Susceptible-Infected-Recovered) compartmental model for infectious disease dynamics. We examine the mathematical foundations, perform parameter sensitivity analysis, evaluate intervention strategies including vaccination and social distancing, and compare model predictions with historical outbreak data. 
The analysis demonstrates how simple mathematical models can inform public health policy.\n\n\n**Key equations you'll work with:**\n- β (transmission rate)\n- γ (recovery rate)\n- R₀ = β/γ\n\n\n## What You'll See\n\n**Visualization:** SIR compartment curves showing infection dynamics\n\n- See susceptible, infected, and recovered populations evolve\n- Observe how R0 affects epidemic peak timing\n\n## Make It Yours\n\nChange population parameters, add new compartments, or simulate your biological system.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/biology/epidemiology_sir.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/biology/epidemiology_sir.tex","category": "templates","date": "2026-02-12","time": "09:00"},{"source": "causal_inference_propensity_score","content_type": "template","subreddit": "CoCalc","title": "Built propensity score methods for causal inference from scratch in Python - here's what I learned","body": "I just finished implementing propensity score methods in Python without relying on causal inference libraries, and wanted to share what I learned.\n\n**The Problem:**\nIn observational studies, we can't just compare treated vs control groups because people who get treatment are often systematically different. 
This \"selection bias\" confounds our estimates.\n\nFor example: patients who take a new drug might be healthier to begin with, making the drug look more effective than it really is.\n\n**The Solution - Propensity Scores:**\nThe propensity score e(x) = P(treatment | characteristics) is the probability someone receives treatment given their observed features.\n\nThe key insight from Rosenbaum & Rubin (1983): if we adjust for propensity scores, treatment assignment becomes independent of potential outcomes - essentially mimicking randomization.\n\n**What I Implemented:**\n1. Logistic regression to estimate propensity scores\n2. Inverse Probability Weighting (IPW)\n3. Normalized IPW (Hajek estimator)\n4. Stratification\n5. Nearest-neighbor matching\n6. Bootstrap confidence intervals\n7. Rosenbaum bounds sensitivity analysis\n\n**Results:**\nWith a simulated dataset (n=2000, true effect = 2.5):\n- Naive comparison: estimate = 3.24 (29% bias)\n- IPW methods: estimate ~ 2.5 (< 5% bias)\n- Matching: estimate ~ 2.5 (< 5% bias)\n\nThat's 90%+ bias reduction!\n\n**Key Takeaways:**\n- Always check overlap (positivity assumption)\n- Extreme propensity scores create unstable weights - clip them\n- Sensitivity analysis tells you how much unmeasured confounding would be needed to explain away your results\n- No statistical method can overcome missing confounders - domain knowledge matters\n\nInteractive notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/causal_inference_propensity_score.ipynb\n\nHappy to answer questions about the implementation!","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/causal_inference_propensity_score.ipynb","category": "general","date": "2026-02-13","time": "09:00"},{"source": "magnetic_field","content_type": "template","subreddit": "CoCalc","title": "Learn Earth's Magnetic Field: Dipole Model, Secular 
Variation, and IGRF Analysis with Interactive Python","body": "## What You'll Learn\n\nThis technical report presents comprehensive computational analysis of Earth's main magnetic field using the geocentric axial dipole model and spherical harmonic representations. We implement field component calculations (radial, meridional, total intensity, inclination, declination), model the International Geomagnetic Reference Field (IGRF), and analyze secular variation. Applications include navigation, paleomagnetic studies, and space weather prediction.\n\n\n**Key equations you'll work with:**\n- B = -∇V\n- V (magnetic scalar potential)\n\n\n## What You'll See\n\n**Visualization:** magnetic anomaly maps and dipole models\n\n- See magnetic signatures of geological features\n- Observe magnetization patterns in rocks\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/geophysics/magnetic_field.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/geophysics/magnetic_field.tex","category": "templates","date": "2026-02-13","time": "09:00"},{"source": "protein_folding","content_type": "template","subreddit": "CoCalc","title": "Learn Protein Folding Simulation: Energy Landscapes and Folding Kinetics with Interactive Python","body": "## What You'll Learn\n\nThis computational study investigates protein folding through simplified lattice models and energy landscape analysis. We implement the hydrophobic-polar (HP) model to simulate folding on a 2D square lattice, employ Monte Carlo methods to explore conformational space, and analyze folding thermodynamics and kinetics. 
The simulations demonstrate spontaneous folding driven by hydrophobic collapse, visualize energy funnels characteristic of foldable sequences, and quantify structural convergence using RMSD and radius of gyration metrics. Results reproduce key principles of Anfinsen's thermodynamic hypothesis and provide insights into the folding free energy landscape.\n\n\n**Key equations you'll work with:**\n- ΔG_fold = ΔH_fold - TΔS_fold\n- δ(Hᵢ, Hⱼ) = 1 for H-H contacts\n\n\n## What You'll See\n\n**Visualization:** energy landscapes and folding trajectories\n\n- See proteins finding native state conformations\n- Observe funnel-shaped energy surfaces\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/computational-biology/protein_folding.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/computational-biology/protein_folding.tex","category": "templates","date": "2026-02-14","time": "09:00"},{"source": "reactor_kinetics","content_type": "template","subreddit": "CoCalc","title": "Learn Nuclear Reactor Kinetics: Point Kinetics Model and Delayed Neutron Analysis with Interactive Python","body": "## What You'll Learn\n\nThis technical report presents comprehensive computational analysis of nuclear reactor kinetics using the point kinetics equations with delayed neutrons. We implement solutions for reactivity transients, analyze the role of delayed neutron precursors in reactor control, and compute reactor periods for various reactivity insertions. 
The analysis covers prompt criticality, xenon dynamics, and feedback mechanisms essential for safe reactor operation.\n\n\n**Key equations you'll work with:**\n- ρ = (k_eff - 1)/k_eff\n- k_eff (effective multiplication factor)\n- n(t) (neutron density)\n\n\n## What You'll See\n\n**Visualization:** neutron flux and reactivity transients\n\n- See reactor power response to perturbations\n- Observe delayed neutron effects\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/nuclear-physics/reactor_kinetics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/nuclear-physics/reactor_kinetics.tex","category": "templates","date": "2026-02-14","time": "09:00"},{"source": "cosmological_expansion","content_type": "template","subreddit": "CoCalc","title": "Learn Cosmological Expansion: From Hubble's Law to Dark Energy - A Comprehensive Analysis of the ΛCDM Model with Interactive Python","body": "## What You'll Learn\n\nThis comprehensive analysis explores the expansion history of the universe from observational foundations to theoretical frameworks. We examine Hubble's law and its modern calibrations, derive the Friedmann equations governing cosmic evolution, and analyze different cosmological models including matter-dominated, radiation-dominated, and dark energy-dominated universes. The ΛCDM concordance model is developed in detail, with computational analysis of the scale factor evolution, distance-redshift relations, and the cosmic age problem. 
We explore observational evidence from Type Ia supernovae, baryon acoustic oscillations, and cosmic microwave background measurements that constrain cosmological parameters.\n\n\n**Key equations you'll work with:**\n- v = H₀d (Hubble's law)\n- Λ (cosmological constant)\n\n\n## What You'll See\n\n**Visualization:** Hubble diagram with redshift vs distance relationship\n\n- See the linear expansion law with supernova data\n- Observe evidence for accelerating expansion\n\n## Make It Yours\n\nChange stellar parameters, simulate different objects, or model your own observations.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/astronomy/cosmological_expansion.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/astronomy/cosmological_expansion.tex","category": "templates","date": "2026-02-15","time": "09:00"},{"source": "optimal_control","content_type": "template","subreddit": "CoCalc","title": "Learn Optimal Control Theory: From LQR to Model Predictive Control with Interactive Python","body": "## What You'll Learn\n\nThis technical report presents a comprehensive treatment of optimal control theory, covering both classical and modern methodologies. We examine the Linear Quadratic Regulator (LQR) framework and its solution via the Riccati equation, derive necessary conditions for optimality using Pontryagin's Maximum Principle, explore dynamic programming and the Hamilton-Jacobi-Bellman equation, analyze time-optimal bang-bang control, and demonstrate model predictive control for constrained systems. 
Computational implementations illustrate LQR gain computation, costate trajectory optimization, and receding horizon MPC with state and input constraints.\n\n\n**Key equations you'll work with:**\n- x(t) ∈ Rⁿ\n- u(t) ∈ R^m\n- u^*(t) (optimal control)\n\n\n## What You'll See\n\n**Visualization:** optimal trajectories and cost function evolution\n\n- See state and control histories minimizing cost\n- Observe Pontryagin principle in action\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/control-theory/optimal_control.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/control-theory/optimal_control.tex","category": "templates","date": "2026-02-15","time": "09:00"},{"source": "genetic_algorithms","content_type": "template","subreddit": "CoCalc","title": "Learn Genetic Algorithms: Evolutionary Optimization and Function Approximation with Interactive Python","body": "## What You'll Learn\n\nA comprehensive analysis of genetic algorithms (GAs) as bio-inspired optimization methods. We explore chromosome representations (binary, real-valued, permutation encodings), selection mechanisms (tournament, roulette wheel, rank-based), genetic operators (single-point, two-point, uniform crossover; bit-flip, Gaussian, and swap mutation), and fitness landscape theory including the schema theorem and building block hypothesis. Computational experiments optimize benchmark functions (Rastrigin, Rosenbrock, Schwefel) and demonstrate convergence dynamics, parameter sensitivity, and the exploration-exploitation trade-off. 
Results show that real-coded GAs with adaptive mutation achieve optimal solutions within 150-300 generations for multimodal landscapes.\n\n\n**Key equations you'll work with:**\n- x ∈ {0,1}^L\n- L (chromosome length)\n- x ∈ Rⁿ\n\n\n## What You'll See\n\n**Visualization:** fitness evolution curves and population diversity\n\n- See optimization progress over generations\n- Observe selection pressure effects\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/computational-biology/genetic_algorithms.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/computational-biology/genetic_algorithms.tex","category": "templates","date": "2026-02-16","time": "09:00"},{"source": "flood_frequency","content_type": "template","subreddit": "CoCalc","title": "Learn Flood Frequency Analysis: Statistical Methods for Design Flow Estimation with Interactive Python","body": "## What You'll Learn\n\nA comprehensive flood frequency analysis using annual maximum flow data to estimate design floods for hydraulic structures. We apply three probability distributions (Gumbel extreme value, Log-Pearson Type III, and generalized extreme value) fitted using L-moments and maximum likelihood estimation. The analysis includes computation of return period flows for 10-year, 50-year, 100-year, and 500-year events with confidence intervals, comparison of fitting methods, and sensitivity analysis of distribution selection on design flood estimates. 
Results demonstrate critical considerations for infrastructure design under hydrologic uncertainty.\n\n\n**Key equations you'll work with:**\n- T = 1/P (return period)\n- P (annual exceedance probability)\n- T = 100 (100-year design event)\n\n\n## What You'll See\n\n**Visualization:** flood frequency curves and return periods\n\n- See extreme value statistics of streamflow\n- Observe probability of rare flood events\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/hydrology/flood_frequency.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/hydrology/flood_frequency.tex","category": "templates","date": "2026-02-16","time": "09:00"},{"source": "medical_imaging","content_type": "template","subreddit": "CoCalc","title": "Learn Medical Imaging: Image Reconstruction and Segmentation with Interactive Python","body": "## What You'll Learn\n\nComprehensive medical image processing techniques including CT reconstruction via filtered backprojection, image enhancement through CLAHE and histogram equalization, segmentation algorithms (thresholding, watershed), noise reduction filters (Gaussian, median, bilateral), and quantitative quality assessment using PSNR, SSIM, and CNR metrics. 
The analysis demonstrates practical implementations of core medical imaging algorithms with computed quality metrics.\n\n\n**Key equations you'll work with:**\n- s (detector coordinate)\n- θ (projection angle)\n- 1/r (backprojection blurring kernel)\n\n\n## What You'll See\n\n**Visualization:** reconstructed images and intensity histograms\n\n- See anatomical structures in CT/MRI slices\n- Observe tissue contrast and spatial resolution\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/biomedical/medical_imaging.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/biomedical/medical_imaging.tex","category": "templates","date": "2026-02-17","time": "09:00"},{"source": "statistical_analysis","content_type": "template","subreddit": "CoCalc","title": "Learn Data Science: Comprehensive Statistical Analysis with Interactive Python","body": "## What You'll Learn\n\nThis document presents a comprehensive statistical analysis workflow including descriptive statistics, hypothesis testing, confidence intervals, ANOVA, correlation analysis, and regression diagnostics. 
We demonstrate parametric and non-parametric tests, effect size calculations, and multiple testing corrections using Python's scipy and statsmodels libraries.\n\n\n**Key equations you'll work with:**\n- s_p (pooled standard deviation)\n- (1-α) (confidence level)\n- -log₁₀(p)\n\n\n## What You'll See\n\n**Visualization:** distributions with confidence intervals and p-values\n\n- See data spread and central tendency measures\n- Observe statistical significance testing\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/data-science/statistical_analysis.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/data-science/statistical_analysis.tex","category": "templates","date": "2026-02-17","time": "09:00"},{"source": "plate_tectonics","content_type": "template","subreddit": "CoCalc","title": "Learn Plate Tectonics: Thermal Evolution, Plate Motion, and Mantle Convection with Interactive Python","body": "## What You'll Learn\n\nThis technical report presents comprehensive computational analysis of plate tectonic processes including lithospheric cooling, seafloor subsidence, heat flow evolution, and plate kinematics. We implement the half-space and plate cooling models, analyze Euler pole rotation kinematics, and model mantle convection using Rayleigh-Bénard theory. 
The analysis quantifies lithospheric thickness, thermal age relationships, and driving forces of plate motion.\n\n\n**Key equations you'll work with:**\n- κ = k/(ρ c_p)\n- κ ≈ 10^-6 m²/s\n\n\n## What You'll See\n\n**Visualization:** plate motion vectors and subduction geometry\n\n- See tectonic plates moving across Earth\n- Observe collision and spreading rates\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/geophysics/plate_tectonics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/geophysics/plate_tectonics.tex","category": "templates","date": "2026-02-18","time": "09:00"},{"source": "optimization","content_type": "template","subreddit": "CoCalc","title": "Learn Optimization: Gradient Descent Methods and Convergence Analysis with Interactive Python","body": "## What You'll Learn\n\nOptimization algorithms find minima of objective functions and are fundamental to machine learning, scientific computing, and engineering design. This document demonstrates gradient descent variants (vanilla, momentum, RMSprop, Adam), Newton's method, quasi-Newton methods (BFGS), and constrained optimization. 
Using PythonTeX, we compare convergence behavior on standard test functions and analyze the effects of hyperparameters on optimization performance.\n\n\n**Key equations you'll work with:**\n- H_f (Hessian of the objective)\n- B_k (quasi-Newton Hessian approximation)\n- μ (momentum coefficient)\n\n\n## What You'll See\n\n**Visualization:** objective function surfaces and convergence paths\n\n- See gradient descent finding minima\n- Observe local vs global optima\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/other/optimization.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/other/optimization.tex","category": "templates","date": "2026-02-18","time": "09:00"},{"source": "separation_processes","content_type": "template","subreddit": "CoCalc","title": "Learn Separation Processes: Distillation and Absorption with Interactive Python","body": "## What You'll Learn\n\nDesign of separation operations including McCabe-Thiele distillation analysis, minimum reflux ratio calculations, flash distillation, and membrane separation. 
Computational methods determine theoretical stages, operating lines, and separation efficiency for binary distillation columns and membrane systems.\n\n\n**Key equations you'll work with:**\n- y = x (45° reference line)\n- R (reflux ratio)\n- q (feed quality)\n\n\n## What You'll See\n\n**Visualization:** McCabe-Thiele diagrams and separation efficiency curves\n\n- See theoretical stages in distillation columns\n- Observe reflux ratio effects on purity\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/chemical-engineering/separation_processes.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/chemical-engineering/separation_processes.tex","category": "templates","date": "2026-02-19","time": "09:00"},{"source": "differential_eq","content_type": "template","subreddit": "CoCalc","title": "Learn Ordinary Differential Equations: Phase Portraits, Stability Analysis, and Limit Cycles with Interactive Python","body": "## What You'll Learn\n\nThis report provides a comprehensive computational analysis of ordinary differential equations (ODEs). We examine first and second-order ODEs, construct phase portraits for autonomous systems, perform stability analysis of equilibrium points, and investigate limit cycles in nonlinear oscillators. 
The van der Pol oscillator is analyzed in detail to demonstrate relaxation oscillations and Hopf bifurcations.\n\n\n**Key equations you'll work with:**\n- dy/dt = f(t, y)\n- d²y/dt² = f(t, y, dy/dt)\n- t (independent variable)\n\n\n## What You'll See\n\n**Visualization:** solution curves and phase portraits\n\n- See families of solutions to ODEs\n- Observe stability of equilibrium points\n\n## Make It Yours\n\nModify parameters, extend to higher dimensions, or apply to your specific problem.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/mathematics/differential_eq.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/mathematics/differential_eq.tex","category": "templates","date": "2026-02-19","time": "09:00"},{"source": "network_epidemics","content_type": "template","subreddit": "CoCalc","title": "Learn Network Epidemiology: Epidemic Dynamics on Complex Contact Networks with Interactive Python","body": "## What You'll Learn\n\nThis computational report investigates epidemic dynamics on complex contact networks, extending traditional compartmental models to structured populations. We analyze SIR and SIS dynamics on Erdős-Rényi random networks, scale-free Barabási-Albert networks, and Watts-Strogatz small-world networks. The epidemic threshold τ_c = 1/λ_max is computed for each network type, where λ_max is the largest eigenvalue of the adjacency matrix. Percolation analysis reveals the role of network structure in outbreak size distribution, and targeted immunization strategies demonstrate the effectiveness of hub removal in scale-free networks. 
Simulations show that degree heterogeneity dramatically reduces the epidemic threshold, enabling pathogens to persist even at low transmission rates.\n\n\n**Key equations you'll work with:**\n- τ_c = 1/λ_max\n- λ_max (largest adjacency eigenvalue)\n- G = (V, E)\n\n\n## What You'll See\n\n**Visualization:** infection spread on contact networks\n\n- See disease spreading through social connections\n- Observe superspreader effects on dynamics\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/epidemiology/network_epidemics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/epidemiology/network_epidemics.tex","category": "templates","date": "2026-02-20","time": "09:00"},{"source": "emc","content_type": "template","subreddit": "CoCalc","title": "Learn Electromagnetic Compatibility Engineering: Shielding, Filtering, and Compliance Analysis with Interactive Python","body": "## What You'll Learn\n\nElectromagnetic compatibility (EMC) ensures that electronic systems operate without causing or suffering from electromagnetic interference (EMI). This report presents comprehensive computational analysis of EMC engineering principles including shielding effectiveness, filter design, grounding strategies, and regulatory compliance. We analyze conducted and radiated emissions, calculate shielding performance across frequency ranges, design common-mode and differential-mode filters, and evaluate compliance with FCC Part 15 and CISPR 22 standards. 
The analysis demonstrates that proper shielding can achieve 60-100 dB attenuation above 1 MHz, LC filters provide 40+ dB insertion loss at switching frequencies, and multi-point grounding reduces ground loop coupling by 20-30 dB compared to single-point configurations.\n\n\n**Key equations you'll work with:**\n- R\n- A\n- B\n\n\n## What You'll See\n\n**Visualization:** emission spectra and coupling coefficients\n\n- See electromagnetic interference patterns\n- Observe shielding effectiveness\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/electromagnetics/emc.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/electromagnetics/emc.tex",
"category": "templates",
"date": "2026-02-20",
"time": "09:00"
},
{
"source": "energy_balance",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Earth's Energy Balance Model: Climate Sensitivity and Radiative Forcing with Interactive Python",
"body": "## What You'll Learn\n\nThis document develops the fundamental physics of Earth's energy balance, from the Stefan-Boltzmann law governing planetary radiation to the greenhouse effect and climate sensitivity. We derive the zero-dimensional energy balance model, calculate equilibrium temperatures with and without an atmosphere, and explore how changes in radiative forcing translate to temperature changes.
The analysis includes numerical solutions of the time-dependent energy balance equation, demonstrating the transient response to perturbations such as increased CO₂ concentrations.\n\n\n**Key equations you'll work with:**\n- F = σT⁴\n- S₀ = 1361 W/m²\n- ΔT = λ ΔF\n\n\n## What You'll See\n\n**Visualization:** radiation budget diagrams and temperature profiles\n\n- See incoming solar vs outgoing thermal radiation\n- Observe the greenhouse effect quantitatively\n\n## Make It Yours\n\nAdjust forcing parameters, extend time scales, or model specific scenarios.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/climate-science/energy_balance.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/climate-science/energy_balance.tex",
"category": "templates",
"date": "2026-02-21",
"time": "09:00"
},
{
"source": "polarization",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Optics: Polarization States and Jones Calculus with Interactive Python",
"body": "## What You'll Learn\n\nA hands-on exploration of Optics: Polarization States and Jones Calculus.\n\n\n**Key equations you'll work with:**\n- θ\n- δ\n- e^{iδ}\n\n\n## What You'll See\n\n**Visualization:** Stokes parameters and polarization ellipses\n\n- See light polarization states on Poincare sphere\n- Observe birefringence and dichroism\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/optics/polarization.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link":
"https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/optics/polarization.tex",
"category": "templates",
"date": "2026-02-21",
"time": "09:00"
},
{
"source": "protein_structure",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Protein Structure Analysis: From Backbone Geometry to Fold Recognition - Ramachandran Analysis, Secondary Structure, and Structural Comparison with Interactive Python",
"body": "## What You'll Learn\n\nThis comprehensive analysis presents methods for analyzing protein three-dimensional structure. We cover backbone dihedral angle analysis through Ramachandran plots, secondary structure prediction using propensity scales, structural comparison using RMSD and TM-score, and contact map analysis for fold topology. The analysis includes the geometric principles underlying protein conformation, statistical analysis of allowed regions in φ-ψ space, and methods for comparing protein structures to identify homologs and predict function.\n\n\n**Key equations you'll work with:**\n- φ\n- ψ\n- α\n\n\n## What You'll See\n\n**Visualization:** Ramachandran plots and contact maps\n\n- See allowed backbone conformations and residue contacts\n- Observe secondary structure preferences\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/bioinformatics/protein_structure.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex",
"category": "templates",
"date": "2026-02-22",
"time": "09:00"
},
{
"source": "sound_propagation",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Sound Propagation Analysis: Wave Equations and
Transmission Loss with Interactive Python",
"body": "## What You'll Learn\n\nAnalysis of sound wave propagation through various media, including acoustic impedance, transmission coefficients, and transmission loss calculations.\n\n\n**Key equations you'll work with:**\n- Z = ρ c\n- ∇²p = (1/c²) ∂²p/∂t²\n- R = (Z₂ - Z₁)/(Z₂ + Z₁)\n\n\n## What You'll See\n\n**Visualization:** sound pressure level contours and wave propagation patterns\n\n- See how sound waves spread and attenuate with distance\n- Observe inverse square law in action\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/acoustics/sound_propagation.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/acoustics/sound_propagation.tex",
"category": "templates",
"date": "2026-02-22",
"time": "09:00"
},
{
"source": "satellite_coverage",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Satellite Coverage Analysis: Ground Coverage, Revisit Times, and Constellation Design - A Comprehensive Study of Earth Observation and Communication Systems with Interactive Python",
"body": "## What You'll Learn\n\nThis technical report presents a comprehensive analysis of satellite ground coverage for Earth observation and communication missions. We compute instantaneous coverage footprints, revisit times for single satellites and constellations, and analyze Walker constellation parameters for global coverage.
The analysis includes elevation angle constraints, slant range calculations, and coverage optimization for various orbital configurations.\n\n\n**Key equations you'll work with:**\n- ε_min\n- h\n- φ\n\n\n## What You'll See\n\n**Visualization:** ground track coverage maps and visibility windows\n\n- See satellite footprints across Earth's surface\n- Observe how orbital inclination affects coverage\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/aerospace/satellite_coverage.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/b",
"category": "templates",
"date": "2026-02-23",
"time": "09:00"
},
{
"source": "time_series_finance",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Financial Time Series Analysis: ARIMA, GARCH, and Cointegration with Interactive Python",
"body": "## What You'll Learn\n\nA comprehensive econometric analysis of financial time series using modern time series techniques. We examine the stylized facts of asset returns including fat tails, volatility clustering, and leverage effects. We develop ARIMA models for mean dynamics, GARCH models for conditional heteroskedasticity, test for cointegration between asset pairs, and implement Markov regime-switching models to capture bull and bear market dynamics.
Computational analysis demonstrates parameter estimation, diagnostic testing, and out-of-sample forecasting performance across multiple financial time series.\n\n\n**Key equations you'll work with:**\n- ARIMA(p,d,q)\n- {y_t}\n- φ(L) = 1 - φ₁L - ⋯ - φ_p L^p\n\n\n## What You'll See\n\n**Visualization:** volatility clustering and return distributions\n\n- See GARCH effects in market data\n- Observe heteroskedasticity patterns\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/financial-math/time_series_finance.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/financial-math/time_series_finance.tex",
"category": "templates",
"date": "2026-02-23",
"time": "09:00"
},
{
"source": "decay_kinematics",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Particle Decay Kinematics: Two-Body and Three-Body Phase Space Analysis with Interactive Python",
"body": "## What You'll Learn\n\nThis technical report presents comprehensive computational analysis of relativistic decay kinematics for unstable particles. We implement energy-momentum conservation for two-body and three-body decays, compute Lorentz transformations between rest and laboratory frames, analyze Dalitz plots for three-body phase space, and calculate decay widths from matrix elements.
Applications include particle identification, resonance analysis, and detector design optimization.\n\n\n**Key equations you'll work with:**\n- pᵢ^μ\n- A → B + C\n- A\n\n\n## What You'll See\n\n**Visualization:** Dalitz plots and momentum distributions\n\n- See particle decay products in phase space\n- Observe resonance structures\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/particle-physics/decay_kinematics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/particle-physics/decay_kinematics.tex",
"category": "templates",
"date": "2026-02-24",
"time": "09:00"
},
{
"source": "hydrogen_atom",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn The Hydrogen Atom: Exact Solution of the Schrödinger Equation with Interactive Python",
"body": "## What You'll Learn\n\nThe hydrogen atom represents the only exactly solvable atomic system in quantum mechanics, providing fundamental insights into atomic structure and spectroscopy. This computational analysis solves the time-independent Schrödinger equation in spherical coordinates, yielding the complete set of wavefunctions ψ_nlm(r,θ,φ) characterized by quantum numbers n (principal), l (orbital angular momentum), and m (magnetic). We compute radial probability densities for the 1s, 2s, 2p, and 3d orbitals, calculate energy eigenvalues Eₙ = -13.6/n² eV, and visualize three-dimensional probability density distributions using spherical harmonics.
The results reproduce the Balmer, Lyman, and Paschen spectral series and demonstrate the quantization of angular momentum in atomic systems.\n\n\n**Key equations you'll work with:**\n- ψ_nlm(r,θ,φ)\n- n\n- l\n\n\n## What You'll See\n\n**Visualization:** orbital shapes and radial distributions\n\n- See electron clouds in 3D\n- Observe quantum number effects on shape\n\n## Make It Yours\n\nExplore different potentials, modify particle numbers, or simulate new scenarios.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/quantum-mechanics/hydrogen_atom.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/quantum-mechanics/hydrogen_atom.tex",
"category": "templates",
"date": "2026-02-24",
"time": "09:00"
},
{
"source": "eeg_analysis",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn EEG Signal Analysis: Spectral Decomposition and Brain State Classification with Interactive Python",
"body": "## What You'll Learn\n\nA comprehensive analysis of electroencephalography (EEG) signal processing. We implement spectral analysis methods, extract frequency band power, compute event-related potentials, analyze connectivity measures, and classify brain states.
All computations use PythonTeX for reproducibility.\n\n\n**Key equations you'll work with:**\n- δ\n- θ\n- α\n\n\n## What You'll See\n\n**Visualization:** power spectra and topographic brain maps\n\n- See brain wave patterns across scalp\n- Observe frequency band distributions\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/neuroscience/eeg_analysis.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/neuroscience/eeg_analysis.tex",
"category": "templates",
"date": "2026-02-25",
"time": "09:00"
},
{
"source": "pde_solver",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Partial Differential Equations: Finite Difference Methods and Stability Analysis with Interactive Python",
"body": "## What You'll Learn\n\nFinite difference methods for solving partial differential equations (PDEs). We implement explicit and implicit schemes for the heat equation and wave equation, analyze stability using von Neumann analysis, and demonstrate the CFL condition.
Numerical examples illustrate the importance of stability constraints in time-stepping methods.\n\n\n**Key equations you'll work with:**\n- r ≤ 1/2\n- Δt_stable\n- x\n\n\n## What You'll See\n\n**Visualization:** spatiotemporal solution surfaces\n\n- See heat equation evolving in space and time\n- Observe discretization effects on accuracy\n\n## Make It Yours\n\nTest different initial conditions, compare convergence rates, or optimize parameters.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/numerical-methods/pde_solver.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/numerical-methods/pde_solver.tex",
"category": "templates",
"date": "2026-02-25",
"time": "09:00"
},
{
"source": "sequence_alignment",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Sequence Alignment: Dynamic Programming and Scoring Matrices - Global and Local Alignment with Statistical Significance with Interactive Python",
"body": "## What You'll Learn\n\nThis comprehensive analysis presents algorithms for pairwise sequence alignment. We implement the Needleman-Wunsch algorithm for global alignment and Smith-Waterman for local alignment using dynamic programming. The analysis covers scoring matrices (BLOSUM, PAM), gap penalties, traceback procedures, and statistical significance assessment.
We demonstrate alignment on nucleotide and protein sequences, visualize scoring matrices, and evaluate alignment quality through comparison with random sequence distributions.\n\n\n**Key equations you'll work with:**\n- s(aᵢ, b_j)\n- d\n\n\n## What You'll See\n\n**Visualization:** dot matrices and alignment scoring matrices\n\n- See sequence similarity patterns and gaps\n- Observe conserved regions across sequences\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/bioinformatics/sequence_alignment.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/bi",
"category": "templates",
"date": "2026-02-26",
"time": "09:00"
},
{
"source": "markov_chains",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Markov Chains: Transition Matrices and Stationary Distributions with Interactive Python",
"body": "## What You'll Learn\n\nA comprehensive computational analysis of discrete-time Markov chains, including the construction and analysis of transition matrices, computation of stationary distributions through eigenvalue decomposition and iterative methods, classification of states by recurrence and periodicity properties, and convergence to equilibrium under ergodic conditions.
We examine the Chapman-Kolmogorov equations governing multi-step transitions, demonstrate the fundamental theorem for irreducible aperiodic chains, and apply Markov chain theory to real-world applications including PageRank algorithms, weather modeling, and Markov Chain Monte Carlo (MCMC) methods.\n\n\n**Key equations you'll work with:**\n- {Xₙ : n = 0, 1, 2, …}\n- S\n- i, j ∈ S\n\n\n## What You'll See\n\n**Visualization:** transition matrices and steady-state distributions\n\n- See state probabilities evolving over time\n- Observe convergence to equilibrium\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/probability/markov_chains.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/probability/markov_chains.tex",
"category": "templates",
"date": "2026-02-26",
"time": "09:00"
},
{
"source": "scattering",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Quantum Scattering Theory: Cross Sections and Partial Wave Analysis with Interactive Python",
"body": "## What You'll Learn\n\nA comprehensive computational analysis of quantum scattering theory, examining differential and total cross sections, partial wave decomposition, and the Born approximation. We analyze Rutherford scattering from Coulomb potentials, extract phase shifts from numerical solutions of the radial Schrödinger equation, verify the optical theorem, and investigate resonance phenomena through the Breit-Wigner formula.
The computational framework demonstrates phase shift extraction, cross section calculations in barns and femtobarns, and the transition from classical to quantum scattering regimes.\n\n\n**Key equations you'll work with:**\n- dσ/dΩ\n- f(θ, φ)\n- V(r)\n\n\n## What You'll See\n\n**Visualization:** scattering amplitudes and cross sections\n\n- See wave packet scattering from potentials\n- Observe resonance and tunneling\n\n## Make It Yours\n\nExplore different potentials, modify particle numbers, or simulate new scenarios.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/quantum-mechanics/scattering.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/quantum-mechanics/scattering.tex",
"category": "templates",
"date": "2026-02-27",
"time": "09:00"
},
{
"source": "sorting_algorithms",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Computer Science: Sorting Algorithm Analysis and Comparison with Interactive Python",
"body": "## What You'll Learn\n\nThis document presents a comprehensive analysis of sorting algorithms including comparison-based sorts (QuickSort, MergeSort, HeapSort, InsertionSort, BubbleSort) and non-comparison sorts (CountingSort, RadixSort). We implement each algorithm in Python, measure their empirical performance across different input sizes and distributions, analyze time and space complexity, and compare their stability and practical applicability.
The analysis demonstrates the trade-offs between different sorting strategies and guides algorithm selection for specific use cases.\n\n\n**Key equations you'll work with:**\n- Ω(n log n)\n- O(n log n)\n- O(n²)\n\n\n## What You'll See\n\n**Visualization:** comparison counts and execution time scaling\n\n- See sorting progress visualizations\n- Observe O(n log n) vs O(n²) behavior\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/computer-science/sorting_algorithms.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/computer-science/sorting_algorithms.tex",
"category": "templates",
"date": "2026-02-27",
"time": "09:00"
},
{
"source": "inventory_models",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Inventory Management: Economic Order Quantity and Stochastic Models with Interactive Python",
"body": "## What You'll Learn\n\nA comprehensive computational analysis of inventory management models under both deterministic and stochastic demand. We derive and analyze the Economic Order Quantity (EOQ) model, examine continuous review (s,Q) and periodic review (s,S) policies, solve the newsvendor problem under various demand distributions, and compute optimal safety stock levels for specified service targets.
Sensitivity analysis demonstrates the robustness of EOQ to parameter estimation errors and quantifies the cost of suboptimal ordering policies.\n\n\n**Key equations you'll work with:**\n- (s,Q)\n- (s,S)\n- h\n\n\n## What You'll See\n\n**Visualization:** inventory levels and reorder point charts\n\n- See stock fluctuations over time\n- Observe EOQ and safety stock effects\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/operations-research/inventory_models.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/operations-research/inventory_models.tex",
"category": "templates",
"date": "2026-02-28",
"time": "09:00"
},
{
"source": "heat_transfer",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Heat Transfer Analysis: Conduction, Convection, and Fins with Interactive Python",
"body": "## What You'll Learn\n\nComputational analysis of heat transfer mechanisms including conduction through composite walls, convection correlations, fin analysis, and heat exchanger design.
Python-based computations provide quantitative analysis with dynamic visualization of temperature distributions and heat flux.\n\n\n**Key equations you'll work with:**\n- k\n- h\n- q = -k∇T\n\n\n## What You'll See\n\n**Visualization:** temperature distributions and heat flux vectors\n\n- See conduction, convection, and radiation\n- Observe thermal resistance effects\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/mechanical-engineering/heat_transfer.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/mechanical-engineering/heat_transfer.tex",
"category": "templates",
"date": "2026-02-28",
"time": "09:00"
},
{
"source": "mosfet",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn MOSFET Device Physics: Threshold Voltage Modeling and I-V Characterization with Interactive Python",
"body": "## What You'll Learn\n\nThis technical report presents a comprehensive analysis of Metal-Oxide-Semiconductor Field-Effect Transistor (MOSFET) device physics, including threshold voltage derivation, current-voltage characteristics, and short-channel effects. We examine the threshold voltage equation V_th = V_FB + 2φ_F + Q_dep/C_ox, analyze transfer and output characteristics across multiple bias conditions, and investigate drain-induced barrier lowering (DIBL), velocity saturation, and subthreshold swing degradation.
Computational modeling demonstrates the transition from long-channel to short-channel behavior for devices with gate lengths ranging from 1 μm to 50 nm.\n\n\n**Key equations you'll work with:**\n- V_th = V_FB + 2φ_F + Q_dep/C_ox\n- μ\n- N_A\n\n\n## What You'll See\n\n**Visualization:** I-V characteristics and transfer curves\n\n- See transistor switching behavior\n- Observe threshold voltage and saturation\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/semiconductor/mosfet.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/semiconductor/mosfet.tex",
"category": "templates",
"date": "2026-03-01",
"time": "09:00"
},
{
"source": "path_planning",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Robotics: A* and RRT Path Planning Algorithms with Interactive Python",
"body": "## What You'll Learn\n\nThis document presents a comprehensive analysis of path planning algorithms for mobile robots and manipulators. We implement and compare A* search for grid-based planning, Rapidly-exploring Random Trees (RRT) for sampling-based planning, and potential field methods for reactive navigation.
The analysis includes performance metrics, path quality assessment, and computational efficiency comparisons across different obstacle configurations.\n\n\n**Key equations you'll work with:**\n- g(n)\n- n\n- h(n)\n\n\n## What You'll See\n\n**Visualization:** configuration space and trajectory plots\n\n- See robot navigating around obstacles\n- Observe path optimality vs computation\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/robotics/path_planning.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/robotics/path_planning.tex",
"category": "templates",
"date": "2026-03-01",
"time": "09:00"
},
{
"source": "traffic_flow",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Traffic Flow: LWR Model and Congestion with Interactive Python",
"body": "## What You'll Learn\n\nA comprehensive computational analysis of traffic flow theory, implementing the Lighthill-Whitham-Richards (LWR) continuum model, Greenshields' fundamental diagram, and queuing analysis at signalized intersections. We derive traffic wave propagation characteristics, compute capacity using fundamental relationships, analyze Webster's delay formula for signal timing optimization, and demonstrate shock wave formation during congestion onset.
The analysis provides quantitative tools for traffic engineers to predict flow breakdown, optimize signal timing, and estimate delay under varying demand conditions.\n\n\n**Key equations you'll work with:**\n- q\n- k\n- v\n\n\n## What You'll See\n\n**Visualization:** fundamental diagrams and speed-density relationships\n\n- See traffic flow transitions from free to congested\n- Observe capacity limits and shockwave formation\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/civil-engineering/traffic_flow.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/civil-engineering/traffic_flow.tex",
"category": "templates",
"date": "2026-03-02",
"time": "09:00"
},
{
"source": "fisheries_models",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Fisheries Population Models: Stock-Recruitment Dynamics and Sustainable Harvesting with Interactive Python",
"body": "## What You'll Learn\n\nA comprehensive computational analysis of fisheries population models used in sustainable fisheries management. We examine surplus production models including the Schaefer and Fox formulations, calculate maximum sustainable yield (MSY) and optimal fishing effort, analyze stock-recruitment relationships through Beverton-Holt and Ricker models, and investigate age-structured population dynamics.
The analysis demonstrates critical thresholds for overfishing, equilibrium biomass levels, and yield-per-recruit optimization for effective fisheries conservation.\n\n\n**Key equations you'll work with:**\n- B\n- r (yr⁻¹)\n\n\n## What You'll See\n\n**Visualization:** yield curves and stock-recruitment relationships\n\n- See sustainable fishing levels\n- Observe overfishing thresholds\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/marine-biology/fisheries_models.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/marine-biology/fisheries_models.tex",
"category": "templates",
"date": "2026-03-02",
"time": "09:00"
},
{
"source": "gene_regulatory",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Gene Regulatory Networks: Boolean Models with Interactive Python",
"body": "## What You'll Learn\n\nGene regulatory networks (GRNs) control cellular behavior through complex interactions between genes, proteins, and regulatory elements. This report implements multiple modeling frameworks for GRNs, including Boolean network models for discrete gene states, Hill function representations of cooperative binding, ordinary differential equation (ODE) models of gene expression dynamics, and bistable toggle switch systems.
We analyze the dynamics of regulatory interactions, identify stable states and attractors, and visualize network behavior across different regulatory architectures.\n\n\n**Key equations you'll work with:**\n- n\n- f(x) = β xⁿ / (Kⁿ + xⁿ)\n- β\n\n\n## What You'll See\n\n**Visualization:** regulatory network diagrams and expression dynamics\n\n- See genes turning each other on and off\n- Observe feedback loop effects\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/systems-biology/gene_regulatory.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/systems-biology/gene_regulatory.tex",
      "category": "templates",
      "date": "2026-03-03",
      "time": "09:00"
    },
    {
      "source": "svm",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Support Vector Machines: Kernels and Classification with Interactive Python",
      "body": "## What You'll Learn\n\nThis document presents a comprehensive analysis of Support Vector Machines (SVM) including hard and soft margin classification, the kernel trick for nonlinear decision boundaries, hyperparameter tuning (C and gamma), and multi-class strategies. 
We visualize decision boundaries, support vectors, and margin regions.\n\n\n**Key equations you'll work with:**\n- ξᵢ\n- K(xᵢ, xⱼ) = φ(xᵢ) · φ(xⱼ)\n- K(x, x') = x · x'\n\n\n## What You'll See\n\n**Visualization:** support vectors and margin visualization\n\n- See optimal separating hyperplane\n- Observe kernel effects on boundaries\n\n## Make It Yours\n\nTune hyperparameters, swap in your data, or extend the model architecture.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/machine-learning/svm.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/machine-learning/svm.tex",
      "category": "templates",
      "date": "2026-03-03",
      "time": "09:00"
    },
    {
      "source": "inflation",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Cosmic Inflation: Slow-Roll Dynamics and Primordial Perturbations with Interactive Python",
      "body": "## What You'll Learn\n\nA comprehensive computational analysis of inflationary cosmology within the slow-roll approximation. We examine the dynamics of a scalar field (the inflaton) driving exponential expansion in the early universe, calculate slow-roll parameters (ε, η) that govern the duration and ending of inflation, and compute the primordial power spectra for scalar and tensor perturbations. The analysis includes observational constraints from Planck 2018 measurements, focusing on the scalar spectral index n_s and tensor-to-scalar ratio r. 
We demonstrate how different inflaton potentials predict distinct signatures in the cosmic microwave background radiation.\n\n\n**Key equations you'll work with:**\n- ε\n- η\n- n_s\n\n\n## What You'll See\n\n**Visualization:** inflaton potential and perturbation spectra\n\n- See exponential expansion solving horizon problem\n- Observe primordial fluctuation generation\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/cosmology/inflation.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/cosmology/inflation.tex",
      "category": "templates",
      "date": "2026-03-04",
      "time": "09:00"
    },
    {
      "source": "ode_solver",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Numerical Methods for Ordinary Differential Equations: A Comparative Analysis of Integration Schemes with Interactive Python",
      "body": "## What You'll Learn\n\nThis technical report presents a comprehensive comparison of numerical methods for solving ordinary differential equations. We implement and analyze Forward Euler, fourth-order Runge-Kutta (RK4), and adaptive Runge-Kutta-Fehlberg (RKF45) methods. Performance is evaluated on stiff and non-stiff test problems using accuracy, computational cost, and stability metrics. 
Results demonstrate the trade-offs between method complexity and solution quality, with RKF45 achieving optimal efficiency for most engineering applications.\n\n\n**Key equations you'll work with:**\n- y(t)\n- |1 + hλ| < 1\n- y' = λ y\n\n\n## What You'll See\n\n**Visualization:** numerical solutions with step size comparison\n\n- See Euler and Runge-Kutta methods\n- Observe stability and accuracy tradeoffs\n\n## Make It Yours\n\nTest different initial conditions, compare convergence rates, or optimize parameters.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/numerical-methods/ode_solver.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/numerical-methods/o",
      "category": "templates",
      "date": "2026-03-04",
      "time": "09:00"
    },
    {
      "source": "portfolio_optimization",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Portfolio Optimization: Modern Portfolio Theory and Risk Management with Interactive Python",
      "body": "## What You'll Learn\n\nA comprehensive computational analysis of portfolio optimization using Modern Portfolio Theory (MPT). We examine the mean-variance framework introduced by Markowitz, construct efficient frontiers for multi-asset portfolios, derive optimal portfolio weights under various constraints, and analyze risk-adjusted performance using the Sharpe ratio. 
The analysis includes Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) computations, demonstrating the practical implementation of quadratic programming for portfolio construction with realistic constraints including long-only positions and sector allocation limits.\n\n\n**Key equations you'll work with:**\n- n\n- w = (w₁, …, wₙ)^T\n- ∑_{i=1}^n wᵢ = 1\n\n\n## What You'll See\n\n**Visualization:** efficient frontier and risk-return tradeoffs\n\n- See Markowitz optimal portfolios\n- Observe diversification benefits\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/financial-math/portfolio_optimization.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/financial-math/portfolio_optimization.tex",
      "category": "templates",
      "date": "2026-03-05",
      "time": "09:00"
    },
    {
      "source": "gravity_anomaly",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Gravity Anomaly Analysis: Forward Modeling and Inversion of Subsurface Density Structures with Interactive Python",
      "body": "## What You'll Learn\n\nThis technical report presents a comprehensive computational analysis of gravity anomalies arising from subsurface density variations. We implement forward modeling for multiple geometric bodies including spheres, cylinders, and rectangular prisms, along with Bouguer and terrain corrections. The analysis demonstrates depth estimation using the half-width rule, Euler deconvolution for source parameter determination, and power spectrum analysis for estimating source depths. 
Applications include mineral exploration, basin analysis, and detection of near-surface voids.\n\n\n**Key equations you'll work with:**\n- U\n- r\n- ρ(r')\n\n\n## What You'll See\n\n**Visualization:** Bouguer anomaly maps and subsurface models\n\n- See density variations revealed by gravity\n- Observe crustal structure from gravity data\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/geophysics/gravity_anomaly.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/geophysics/gravity_anomaly.tex",
      "category": "templates",
      "date": "2026-03-05",
      "time": "09:00"
    },
    {
      "source": "radiation_dosimetry",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Radiation Dosimetry: Depth-Dose Distributions and Treatment Planning with Interactive Python",
      "body": "## What You'll Learn\n\nA comprehensive analysis of radiation dosimetry for external beam radiotherapy. We compute depth-dose distributions for photon, electron, and proton beams, analyze tissue inhomogeneity corrections, evaluate dose-volume histograms, and compare treatment planning techniques. 
All calculations use PythonTeX for reproducibility.\n\n\n**Key equations you'll work with:**\n- dε\n- dm\n- D = dε/dm\n\n\n## What You'll See\n\n**Visualization:** dose distributions and depth-dose curves\n\n- See radiation dose deposition patterns\n- Observe beam energy effects on penetration\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/medical-physics/radiation_dosimetry.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/medical-physics/radiation_dosimetry.tex",
      "category": "templates",
      "date": "2026-03-06",
      "time": "09:00"
    },
    {
      "source": "process_control",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Process Control Feedback and Cascade Systems with Interactive Python",
      "body": "## What You'll Learn\n\nA computational analysis of feedback control systems in chemical process engineering. We examine first-order and second-order system dynamics, PID controller design, closed-loop performance, and Ziegler-Nichols tuning methods. 
The analysis includes step response characteristics, stability analysis, and controller parameter optimization for typical chemical engineering applications such as temperature control in reactors and level control in tanks.\n\n\n**Key equations you'll work with:**\n- G(s) = K/(τ s + 1)\n- τ\n- K\n\n\n## What You'll See\n\n**Visualization:** step responses and Bode plots of control systems\n\n- See PID tuning effects on system response\n- Observe stability margins and transient behavior\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/chemical-engineering/process_control.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/chemical-engineering/process_control.tex",
      "category": "templates",
      "date": "2026-03-06",
      "time": "09:00"
    },
    {
      "source": "cellular_automata",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Cellular Automata: From Elementary Rules to Biological Systems with Interactive Python",
      "body": "## What You'll Learn\n\nA comprehensive computational analysis of cellular automata (CA) as models for biological systems. We examine elementary one-dimensional CA including Wolfram's Rule 30 (chaotic), Rule 110 (class IV complexity), and Rule 184 (traffic flow), followed by two-dimensional systems including Conway's Game of Life and its pattern classification. We analyze lattice gas automata for fluid simulation, forest fire models for ecological dynamics, and epidemic spread using the SIRS (Susceptible-Infected-Recovered-Susceptible) model. 
Computational experiments demonstrate emergent complexity from simple local rules, pattern stability and periodicity, and the computational universality of class IV automata.\n\n\n**Key equations you'll work with:**\n- (L, S, N, f)\n- L\n- S\n\n\n## What You'll See\n\n**Visualization:** space-time evolution patterns and rule classifications\n\n- See emergent complexity from simple rules\n- Observe edge of chaos dynamics\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/computational-biology/cellular_automata.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/computational-biology/cellular_automata.tex",
      "category": "templates",
      "date": "2026-03-07",
      "time": "09:00"
    },
    {
      "source": "neural_network",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Spiking Neural Networks: Population Dynamics and Synchronization with Interactive Python",
      "body": "## What You'll Learn\n\nA comprehensive analysis of spiking neural network dynamics. We implement leaky integrate-and-fire neurons, analyze population synchronization, compute spike train statistics, examine balanced excitation-inhibition, and investigate network oscillations. 
All simulations use PythonTeX for reproducibility.\n\n\n**Key equations you'll work with:**\n- V ≥ V_th\n- V_reset\n- V_m\n\n\n## What You'll See\n\n**Visualization:** spike trains and network activity patterns\n\n- See information flow through neural circuits\n- Observe synchronization and coding\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/neuroscience/neural_network.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/neuroscience/neural_network.tex",
      "category": "templates",
      "date": "2026-03-07",
      "time": "09:00"
    },
    {
      "source": "pulsar_timing",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Pulsar Timing: From Spin-down to Gravitational Wave Detection A Comprehensive Analysis of Pulsar Astrophysics with Interactive Python",
      "body": "## What You'll Learn\n\nThis comprehensive analysis explores pulsar timing theory and applications, from basic spin-down physics to gravitational wave detection via pulsar timing arrays. We derive the fundamental pulsar equations including period derivatives, characteristic ages, and magnetic field strengths. The analysis covers the P-P diagram for pulsar classification, timing residuals analysis, binary pulsar systems, and the search for nanohertz gravitational waves. 
We examine synthetic pulsar populations representing normal pulsars, millisecond pulsars, and magnetars, and explore their evolutionary relationships.\n\n\n**Key equations you'll work with:**\n- P\n- I ≈ 10^45 g cm²\n\n\n## What You'll See\n\n**Visualization:** pulse arrival times and timing residuals\n\n- See the precise periodicity of pulsar signals\n- Observe millisecond-precision clock stability\n\n## Make It Yours\n\nChange stellar parameters, simulate different objects, or model your observation.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/astronomy/pulsar_timing.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/astronomy/",
      "category": "templates",
      "date": "2026-03-08",
      "time": "09:00"
    },
    {
      "source": "control_systems",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Control Systems Analysis: A Comprehensive Tutorial on Classical Feedback Control Design with Interactive Python",
      "body": "## What You'll Learn\n\nThis tutorial provides a comprehensive analysis of classical control system design techniques. We examine transfer function modeling, frequency response analysis through Bode and Nyquist plots, root locus methods for stability analysis, and PID controller tuning strategies. 
All computations are performed dynamically using Python's control systems toolbox, demonstrating reproducible engineering analysis workflows.\n\n\n**Key equations you'll work with:**\n- G(s)\n- C(s)\n- L(s) = C(s)G(s)\n\n\n## What You'll See\n\n**Visualization:** root locus and Nyquist plots\n\n- See pole migration with gain changes\n- Observe stability margins and phase relationships\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/electrical-engineering/control_systems.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/electrical-engineering/control_s",
      "category": "templates",
      "date": "2026-03-08",
      "time": "09:00"
    },
    {
      "source": "reaction_engineering",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Reaction Engineering CSTR and PFR Design with Interactive Python",
      "body": "## What You'll Learn\n\nA comprehensive computational analysis of chemical reactor design and kinetics, focusing on batch reactors, continuous stirred tank reactors (CSTRs), and plug flow reactors (PFRs). We investigate first-order and second-order reaction kinetics, residence time distributions, temperature effects via Arrhenius behavior, and reactor performance comparisons using Levenspiel plots. 
The analysis provides design equations, numerical solutions for non-isothermal operation, and optimization strategies for industrial reactor systems.\n\n\n**Key equations you'll work with:**\n- A → P\n- k\n\n\n## What You'll See\n\n**Visualization:** conversion profiles in CSTR and PFR reactors\n\n- See residence time effects on reaction completion\n- Observe reactor design tradeoffs\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/chemical-engineering/reaction_engineering.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/chemical-engineering/reaction_engineering.tex",
      "category": "templates",
      "date": "2026-03-09",
      "time": "09:00"
    },
    {
      "source": "option_pricing",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Option Pricing: Black-Scholes Model and Greeks Analysis with Interactive Python",
      "body": "## What You'll Learn\n\nA comprehensive computational analysis of option pricing using the Black-Scholes framework. We derive and implement the closed-form solutions for European call and put options, compute the sensitivity measures known as Greeks (delta, gamma, theta, vega, rho), and compare analytical methods with numerical approaches including binomial tree models and Monte Carlo simulation. The analysis demonstrates practical hedging applications through delta-neutral portfolio construction and examines the volatility smile phenomenon observed in real markets. 
All computations are performed for a model stock with current price $100, examining options with strikes ranging from $80 to $120 and maturities from 1 week to 1 year.\n\n\n**Key equations you'll work with:**\n- σ (volatility) and μ (drift)\n- V(S, t), the option value\n\n\n## What You'll See\n\n**Visualization:** option value surfaces and Greeks profiles\n\n- See Black-Scholes prices vs strike and time\n- Observe delta hedging sensitivity\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/financial-math/option_pricing.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/financial-math/option_pricing.tex",
      "category": "templates",
      "date": "2026-03-09",
      "time": "09:00"
    },
    {
      "source": "isotope_geochemistry",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Isotope Geochemistry Fractionation and Radiogenic Dating with Interactive Python",
      "body": "## What You'll Learn\n\nThis document presents computational methods for isotope geochemistry, including delta notation calculations, Rayleigh fractionation modeling, two-component mixing, and radiogenic dating via isochron analysis. We demonstrate temperature-dependent oxygen isotope fractionation, carbon isotope systematics in closed systems, and Rb-Sr geochronology applied to crustal rocks. 
All calculations use standard reference materials (VSMOW, VPDB) and established fractionation equations.\n\n\n**Key equations you'll work with:**\n- δ¹⁸O\n- δ¹³C\n- δ\n\n\n## What You'll See\n\n**Visualization:** isotope ratios and fractionation trends\n\n- See isotopic signatures trace geological processes\n- Observe temperature-dependent fractionation\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/geochemistry/isotope_geochemistry.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/geochemistry/isotope_geochemistry.tex",
      "category": "templates",
      "date": "2026-03-10",
      "time": "09:00"
    },
    {
      "source": "action_potential",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Hodgkin-Huxley Model: Action Potential Generation and Ion Channel Dynamics with Interactive Python",
      "body": "## What You'll Learn\n\nA comprehensive analysis of the Hodgkin-Huxley model for action potential generation. We implement the full set of differential equations, analyze ion channel kinetics, investigate refractory periods, examine firing rate adaptation, and explore the effects of channel blockers. 
All simulations use PythonTeX for reproducibility.\n\n\n**Key equations you'll work with:**\n- V_m\n- I_ext\n\n\n## What You'll See\n\n**Visualization:** voltage spikes and ion channel dynamics\n\n- See Hodgkin-Huxley dynamics unfold\n- Observe threshold and refractory periods\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/neuroscience/action_potential.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/neuroscience/action_potential.tex",
      "category": "templates",
      "date": "2026-03-10",
      "time": "09:00"
    },
    {
      "source": "atmospheric_reentry",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Atmospheric Reentry Analysis: Heat Flux, Trajectory, and Ablation Modeling A Comprehensive Study of Ballistic and Lifting Reentry Profiles with Interactive Python",
      "body": "## What You'll Learn\n\nThis research paper presents a comprehensive analysis of atmospheric reentry dynamics for spacecraft vehicles. We develop and compare ballistic and lifting reentry trajectories, computing time histories of altitude, velocity, deceleration, and stagnation-point heat flux. The analysis includes an exponential atmospheric model, Sutton-Graves heat flux correlation, and a simplified ablation model for thermal protection system sizing. 
Multiple entry angles and ballistic coefficients are evaluated to determine optimal reentry profiles for human-rated and cargo vehicles.\n\n\n**Key equations you'll work with:**\n- m\n- C_D\n- A\n\n\n## What You'll See\n\n**Visualization:** temperature and velocity profiles during spacecraft reentry\n\n- Watch heating rates peak as altitude decreases\n- See the thermal challenge of hypersonic flight\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/aerospace/atmospheric_reentry.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-t",
      "category": "templates",
      "date": "2026-03-11",
      "time": "09:00"
    },
    {
      "source": "network_analysis",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Network Science: Graph Metrics and Community Detection with Interactive Python",
      "body": "## What You'll Learn\n\nNetwork science provides tools for analyzing complex systems through graph theory. This document demonstrates computational methods for calculating centrality measures, detecting community structure, analyzing random graph models, and characterizing small-world and scale-free properties. 
Using PythonTeX, we implement algorithms for betweenness centrality, modularity optimization, Erdős-Rényi and Barabási-Albert models, and compute characteristic path lengths and clustering coefficients.\n\n\n**Key equations you'll work with:**\n- σ_st\n- s\n- t\n\n\n## What You'll See\n\n**Visualization:** network graphs and centrality measures\n\n- See node connections and community structure\n- Observe degree distributions and hubs\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/other/network_analysis.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/other/network_analysis.tex",
      "category": "templates",
      "date": "2026-03-11",
      "time": "09:00"
    },
    {
      "source": "cross_section",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Particle Scattering Cross Sections: Rutherford, Mott, and Form Factor Analysis with Interactive Python",
      "body": "## What You'll Learn\n\nThis technical report presents a comprehensive computational analysis of particle scattering cross sections. We implement the Rutherford formula for classical Coulomb scattering, Mott cross section with relativistic and spin corrections, and nuclear form factors for extended charge distributions. 
The analysis covers differential and integrated cross sections, structure functions, and momentum transfer dependence essential for understanding particle interactions and nuclear structure.\n\n\n**Key equations you'll work with:**\n- dσ/dΩ\n- (θ, φ)\n- I₀\n\n\n## What You'll See\n\n**Visualization:** scattering cross-sections vs energy\n\n- See resonance peaks in particle collisions\n- Observe threshold and resonance behavior\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/particle-physics/cross_section.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/particle-physics/cross_section.tex",
      "category": "templates",
      "date": "2026-03-12",
      "time": "09:00"
    },
    {
      "source": "photonic_crystals",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Photonic Crystals: Band Structure and Optical Properties with Interactive Python",
      "body": "## What You'll Learn\n\nPhotonic crystals are periodic dielectric structures that exhibit photonic band gaps—frequency ranges in which electromagnetic wave propagation is forbidden. This report presents computational analysis of one-dimensional (1D) Bragg stacks using transfer matrix methods, examining band structure formation, reflectance spectra, defect mode engineering, and slow light phenomena. We calculate the photonic band gap for a quarter-wave stack with alternating refractive indices of 1.45 and 3.5, demonstrating complete reflection within the gap and the emergence of localized defect states when periodicity is broken. 
Applications in optical fibers, LEDs, and photonic sensors are discussed.\n\n\n**Key equations you'll work with:**\n- a\n- u_k(r)\n- ω(k)\n\n\n## What You'll See\n\n**Visualization:** band diagrams and bandgap structures\n\n- See light forbidden in periodic structures\n- Observe photonic bandgap engineering\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/photonics/photonic_crystals.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/photonics/photonic_crystals.tex",
      "category": "templates",
      "date": "2026-03-12",
      "time": "09:00"
    },
    {
      "source": "adaptive_control",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Adaptive Control Systems: Model Reference and Self-Tuning Approaches with Interactive Python",
      "body": "## What You'll Learn\n\nA comprehensive analysis of adaptive control systems with emphasis on Model Reference Adaptive Control (MRAC) and Self-Tuning Regulators (STR). We examine the stability guarantees provided by Lyapunov-based adaptation laws, analyze the role of persistent excitation in parameter convergence, and compare direct versus indirect adaptive approaches. 
Computational simulations demonstrate parameter adaptation dynamics, tracking performance, and robustness modifications including σ-modification and projection methods for handling unmodeled dynamics and bounded disturbances.\n\n\n**Key equations you'll work with:**\n- σ\n- θ^*\n- θ(t)\n\n\n## What You'll See\n\n**Visualization:** parameter convergence and tracking error\n\n- See controller adapting to changing plant dynamics\n- Observe learning rate effects on stability\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/control-theory/adaptive_control.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
      "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/control-theory/adaptive_control.tex",
      "category": "templates",
      "date": "2026-03-13",
      "time": "09:00"
    },
    {
      "source": "morphological",
      "content_type": "template",
      "subreddit": "CoCalc",
      "title": "Learn Mathematical Morphology: Fundamental Operations and Applications with Interactive Python",
      "body": "## What You'll Learn\n\nA comprehensive analysis of mathematical morphology, focusing on fundamental operations (erosion, dilation, opening, closing) and their applications in binary and grayscale image processing. We examine the algebraic properties of morphological operators, demonstrate their use in noise removal, edge detection, and feature extraction, and analyze the effects of structuring element shape and size on operation outcomes. 
Computational examples illustrate morphological gradient, top-hat transforms, and the hit-or-miss transform for pattern detection.\n\n\n**Key equations you'll work with:**\n- A (input image)\n- B (structuring element)\n\n\n## What You'll See\n\n**Visualization:** erosion/dilation sequences and shape analysis\n\n- See morphological operations transform images\n- Observe structuring element effects\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/image-processing/morphological.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/image-processing/morphological.tex",
"category": "templates",
"date": "2026-03-13",
"time": "09:00"
},
{
"source": "biosignal_processing",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Biosignal Processing ECG and EEG Analysis with Interactive Python",
"body": "## What You'll Learn\n\nThis document presents advanced biosignal processing techniques for electrocardiogram (ECG), electroencephalogram (EEG), and electromyogram (EMG) signals. We implement R-peak detection for heart rate analysis, frequency band decomposition for brain activity monitoring, and envelope detection for muscle activation patterns. Heart rate variability (HRV) metrics including SDNN and RMSSD are computed from R-R intervals, while spectral analysis quantifies alpha, beta, theta, and delta band power in EEG recordings.
Signal-to-noise ratio (SNR) improvements through bandpass filtering are demonstrated for each biosignal modality.\n\n\n**Key equations you'll work with:**\n- RRₙ\n- RRₙ₊₁\n- μ\n\n\n## What You'll See\n\n**Visualization:** filtered ECG/EMG signals with frequency components\n\n- See noise removal and feature extraction\n- Observe clinically relevant signal patterns\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/biomedical/biosignal_processing.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/biomedical/biosignal_processing.tex",
"category": "templates",
"date": "2026-03-14",
"time": "09:00"
},
{
"source": "math_thesis",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Explorations in Partition Theory and Topology with Interactive Python",
"body": "## What You'll Learn\n\nA hands-on exploration of Partition Theory and Topology.\n\n\n**Key equations you'll work with:**\n- p(n)\n- n\n\n\n## What You'll See\n\n**Visualization:** mathematical computations with SageMath integration\n\n- See symbolic math rendered beautifully\n- Observe exact vs numerical results\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/sagetex/math_thesis.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/sagetex/math_thesis.tex",
"category":
"templates",1897"date": "2026-03-14",1898"time": "09:00"1899},1900{1901"source": "pathfinding",1902"content_type": "template",1903"subreddit": "CoCalc",1904"title": "Learn Pathfinding Algorithms A* and Dijkstra Comparison for Game AI with Interactive Python",1905"body": "## What You'll Learn\n\nThis report implements and compares optimal pathfinding algorithms for game AI navigation. We present comprehensive implementations of A* and Dijkstra's algorithm on grid-based environments, comparing Manhattan, Euclidean, and diagonal distance heuristics. The analysis includes visualization of explored nodes, path optimality verification, and computational complexity comparisons. Results demonstrate that A* with admissible heuristics provides optimal paths while exploring significantly fewer nodes than Dijkstra's algorithm, making it ideal for real-time game navigation systems.\n\n\n**Key equations you'll work with:**\n- f(n) = g(n) + h(n)\n- g(n)\n- n\n\n\n## What You'll See\n\n**Visualization:** A* algorithm visualization on grid maps\n\n- See optimal paths computed through obstacles\n- Observe heuristic effects on search efficiency\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/game-development/pathfinding.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1906"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/game-development/pathfinding.tex",1907"category": "templates",1908"date": "2026-03-15",1909"time": "09:00"1910},1911{1912"source": "neutron_stars",1913"content_type": "template",1914"subreddit": "CoCalc",1915"title": "Learn Neutron Star Physics Equation of State and Structure with Interactive Python",1916"body": "## What You'll Learn\n\nAnalysis of neutron star structure including 
mass-radius relations, equation of state, and magnetic field properties.\n\n\n**Key equations you'll work with:**\n- M_☉\n- 10^15 g/cm³\n\n\n## What You'll See\n\n**Visualization:** mass-radius relationships and equation of state\n\n- See the extreme density of neutron star matter\n- Observe how nuclear physics constrains stellar structure\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/astrophysics/neutron_stars.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/astrophysics/neutron_stars.tex",
"category": "templates",
"date": "2026-03-15",
"time": "09:00"
},
{
"source": "stochastic",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Stochastic Differential Equations: Modeling Random Processes with Interactive Python",
"body": "## What You'll Learn\n\nA hands-on exploration of Stochastic Differential Equations: Modeling Random Processes.\n\n\n**Key equations you'll work with:**\n- W_t\n- ΔWₙ ~ N(0, Δt)\n- t\n\n\n## What You'll See\n\n**Visualization:** ensemble trajectories and probability distributions\n\n- See randomness propagating through systems\n- Observe mean and variance evolution\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/simulations/stochastic.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/simulations/stochastic.tex",
"category":
"templates",1930"date": "2026-03-16",1931"time": "09:00"1932},1933{1934"source": "ct_reconstruction",1935"content_type": "template",1936"subreddit": "CoCalc",1937"title": "Learn Computed Tomography: Image Reconstruction and Artifact Analysis with Interactive Python",1938"body": "## What You'll Learn\n\na comprehensive analysis of CT image reconstruction algorithms. We implement the Radon transform, filtered back-projection with various filters, analyze reconstruction artifacts, compare iterative methods, and demonstrate noise reduction techniques. All simulations use PythonTeX for reproducibility.\n\n\n**Key equations you'll work with:**\n- ^\\\n- ^\\\n- \\×\n\n\n## What You'll See\n\n**Visualization:** sinograms and reconstructed CT images\n\n- See filtered backprojection in action\n- Observe artifact types and resolution limits\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/medical-physics/ct_reconstruction.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",1939"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/medical-physics/ct_reconstruction.tex",1940"category": "templates",1941"date": "2026-03-16",1942"time": "09:00"1943},1944{1945"source": "island_biogeography",1946"content_type": "template",1947"subreddit": "CoCalc",1948"title": "Learn Island Biogeography: Equilibrium Theory and Species-Area Relationships with Interactive Python",1949"body": "## What You'll Learn\n\na comprehensive computational analysis of the MacArthur-Wilson Theory of Island Biogeography, examining the dynamic equilibrium between immigration and extinction rates as determinants of island species richness. 
We model immigration as a function of island isolation, extinction as a function of island area, and derive the equilibrium species number S^* for islands varying in size and distance from mainland source pools. The species-area relationship S = cA^z is fitted to empirical data, yielding scaling exponents consistent with observed values (z ≈ 0.25 for continental islands). Applications to conservation biology, including the SLOSS (Single Large Or Several Small) debate and nature reserve design, are explored through simulations of turnover dynamics and extinction vulnerability.\n\n\n**Key equations you'll work with:**\n- S^*\n- S = cA^z\n- z ≈ 0.25\n\n\n## What You'll See\n\n**Visualization:** species-area curves and colonization dynamics\n\n- See equilibrium between immigration and extinction\n- Observe island size effects on biodiversity\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/ecology/island_biogeography.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/ecology/island_biogeography.tex",
"category": "templates",
"date": "2026-03-17",
"time": "09:00"
},
{
"source": "gravitational_lensing",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Gravitational Lensing: From Einstein Rings to Weak Lensing Surveys with Interactive Python",
"body": "## What You'll Learn\n\nA comprehensive computational analysis of gravitational lensing phenomena across multiple regimes.
We derive and analyze the deflection angle α = 4GM/(c²b) for point mass lenses, compute Einstein ring radii for galaxy-scale strong lensing, simulate magnification maps for multiple image systems, and model microlensing light curves for exoplanet detection. The analysis includes weak lensing shear and convergence calculations relevant to cosmic shear surveys, demonstrating the full range of lensing astrophysics from stellar to cosmological scales.\n\n\n**Key equations you'll work with:**\n- α = 4GM/(c²b)\n- M\n- b\n\n\n## What You'll See\n\n**Visualization:** deflection angles and Einstein rings\n\n- See light bending around massive objects\n- Observe mass estimation from lensing\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/relativity/gravitational_lensing.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/relativity/gravitational_lensing.tex",
"category": "templates",
"date": "2026-03-17",
"time": "09:00"
},
{
"source": "structure_formation",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Cosmic Structure Formation: Linear Growth and Nonlinear Collapse with Interactive Python",
"body": "## What You'll Learn\n\nA comprehensive computational analysis of cosmic structure formation from linear perturbations in the early universe to nonlinear halo collapse.
We solve the linear growth equation to compute the growth factor D(z) and growth rate f(z), calculate the matter power spectrum P(k) using the BBKS transfer function, apply the Press-Schechter formalism to predict halo mass functions, and examine the two-point correlation function including baryon acoustic oscillation (BAO) features. Computational analysis demonstrates the evolution of density perturbations across cosmic time and the emergence of nonlinear structures.\n\n\n**Key equations you'll work with:**\n- D(z)\n- f(z)\n- P(k)\n\n\n## What You'll See\n\n**Visualization:** matter power spectrum and halo mass functions\n\n- See gravitational clustering of matter\n- Observe hierarchical structure formation\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/cosmology/structure_formation.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/cosmology/structure_formation.tex",
"category": "templates",
"date": "2026-03-18",
"time": "09:00"
},
{
"source": "rc_circuit",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn RC Circuit Analysis: Transient Response, Frequency Domain, and Filter Design Laboratory Report with Interactive Python",
"body": "## What You'll Learn\n\nThis laboratory report presents a comprehensive analysis of RC circuits, covering transient response characteristics, Laplace transform methods, frequency domain analysis, and filter design applications. Through computational analysis with Python, we demonstrate charging and discharging dynamics, time constant determination, Bode plot interpretation, and the design of low-pass, high-pass, and band-pass filter configurations.
All numerical results are dynamically computed, ensuring reproducibility.\n\n\n**Key equations you'll work with:**\n- τ\n- V₀\n- u(t)\n\n\n## What You'll See\n\n**Visualization:** transient response and frequency response curves\n\n- See capacitor charging dynamics and filter behavior\n- Observe time constant effects on response\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/electrical-engineering/rc_circuit.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/electrical-engineering/rc",
"category": "templates",
"date": "2026-03-18",
"time": "09:00"
},
{
"source": "mass_transfer",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Mass Transfer Diffusion and Convective Transport with Interactive Python",
"body": "## What You'll Learn\n\nMass transfer describes the movement of chemical species due to concentration gradients (diffusion) and bulk fluid motion (convection). This computational analysis examines Fick's laws of diffusion, film theory mass transfer coefficients, and packed column design for gas-liquid absorption. Applications include chemical reactors, separation processes, and biological systems.
We present transient diffusion solutions, dimensionless correlations for mass transfer coefficients, and design calculations for industrial absorption columns.\n\n\n**Key equations you'll work with:**\n- J_A\n- D_AB\n- C₀\n\n\n## What You'll See\n\n**Visualization:** concentration profiles and Sherwood number correlations\n\n- See mass flux across interfaces and boundary layers\n- Observe diffusion and convection contributions\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/chemical-engineering/mass_transfer.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/chemical-engineering/mass_transfer.tex",
"category": "templates",
"date": "2026-03-19",
"time": "09:00"
},
{
"source": "fractals",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Fractal Geometry and Self-Similarity: Computational Analysis of Fractal Structures with Interactive Python",
"body": "## What You'll Learn\n\nA computational exploration of fractal geometry, examining the Mandelbrot set, Julia sets, Sierpinski triangle, and Koch snowflake. We compute fractal dimensions using box-counting methods, analyze escape-time algorithms, and investigate the self-similar structures that characterize these mathematical objects.
All visualizations are generated using PythonTeX for complete reproducibility.\n\n\n**Key equations you'll work with:**\n- F\n- N(ε)\n- ε\n\n\n## What You'll See\n\n**Visualization:** fractal patterns with zoom sequences\n\n- See self-similarity at different scales\n- Observe fractal dimension calculations\n\n## Make It Yours\n\nModify parameters, extend to higher dimensions, or apply to your specific problem.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/mathematics/fractals.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/mathematics/fractals.tex",
"category": "templates",
"date": "2026-03-19",
"time": "09:00"
},
{
"source": "chaos",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Chaos Theory and Nonlinear Dynamics: Analysis of Deterministic Chaos in Dynamical Systems with Interactive Python",
"body": "## What You'll Learn\n\nThis technical report presents a comprehensive computational analysis of chaotic dynamical systems. We examine the logistic map, compute bifurcation diagrams showing the route to chaos through period-doubling, calculate Lyapunov exponents as quantitative measures of chaos, and simulate the Lorenz attractor demonstrating strange attractor dynamics.
All computations are performed using PythonTeX for reproducibility, with detailed numerical analysis of sensitivity to initial conditions and fractal basin boundaries.\n\n\n**Key equations you'll work with:**\n- λ\n- λ > 0\n- r ∈ [0, 4]\n\n\n## What You'll See\n\n**Visualization:** bifurcation diagrams and strange attractors\n\n- See the cascade to chaos as parameters change\n- Observe period-doubling route to chaos\n\n## Make It Yours\n\nModify parameters, extend to higher dimensions, or apply to your specific problem.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/mathematics/chaos.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/mathematics/chaos.tex",
"category": "templates",
"date": "2026-03-20",
"time": "09:00"
},
{
"source": "mineral_thermodynamics",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Mineral Thermodynamics Phase Equilibria with Interactive Python",
"body": "## What You'll Learn\n\nThis document presents computational thermodynamic analysis of mineral phase equilibria, including Gibbs free energy calculations for metamorphic reactions, construction of pressure-temperature phase diagrams using the Clapeyron equation, temperature-dependent equilibrium constants via the van't Hoff equation, and solid solution thermodynamics for Fe-Mg olivine.
Applications include predicting mineral stability fields in crustal and mantle conditions, calculating reaction boundaries for index mineral assemblages, and modeling activity-composition relationships in solid solutions.\n\n\n**Key equations you'll work with:**\n- G = H - TS\n- ΔG = 0\n- dP/dT = ΔS/ΔV\n\n\n## What You'll See\n\n**Visualization:** phase diagrams and Gibbs energy surfaces\n\n- See mineral stability fields vs P and T\n- Observe reaction boundaries in P-T space\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/geochemistry/mineral_thermodynamics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/geochemistry/mineral_thermodynamics.tex",
"category": "templates",
"date": "2026-03-20",
"time": "09:00"
},
{
"source": "regression_analysis",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Multiple Regression Analysis: Model Building and Diagnostics with Interactive Python",
"body": "## What You'll Learn\n\nA hands-on exploration of Multiple Regression Analysis: Model Building and Diagnostics.\n\n\n**Key equations you'll work with:**\n- R²\n- X₁\n\n\n## What You'll See\n\n**Visualization:** scatter plots with regression lines and residuals\n\n- See relationship strength and model fit\n- Observe coefficient significance and R²\n\n## Make It Yours\n\nAdjust sample sizes, add your own datasets, or customize the analysis pipeline.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/statistics/regression_analysis.tex\n\n*Open-source computational LaTeX template - use freely for research and
education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/statistics/regression_analysis.tex",
"category": "templates",
"date": "2026-03-21",
"time": "09:00"
},
{
"source": "memory_models",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Memory Models Forgetting Curves and ACT-R with Interactive Python",
"body": "## What You'll Learn\n\nComputational models of human memory systems, including the Ebbinghaus forgetting curve, serial position effects, working memory capacity limits, and the ACT-R activation model. We quantify retention decay, recall probability across list positions, Cowan's K capacity estimates, and base-level activation dynamics. Results demonstrate exponential and power-law forgetting, primacy-recency effects, a capacity limit of approximately 4 items, and activation decay following ACT-R principles. These models provide mathematical frameworks for understanding memory phenomena across multiple timescales and cognitive tasks.\n\n\n**Key equations you'll work with:**\n- R(t) = e^(-t/τ)\n- τ\n- R(t) = at^(-b)\n\n\n## What You'll See\n\n**Visualization:** forgetting curves and recall probability\n\n- See memory decay over retention intervals\n- Observe spacing and testing effects\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/cognitive-science/memory_models.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/cognitive-science/memory_models.tex",
"category": "templates",
"date": "2026-03-21",
"time": "09:00"
},
{
"source": "nonlinear_control",
"content_type": "template",
"subreddit":
"CoCalc",2058"title": "Learn Nonlinear Control Systems: Stability Analysis and Advanced Control Design with Interactive Python",2059"body": "## What You'll Learn\n\na comprehensive analysis of nonlinear control systems using Lyapunov stability theory, phase plane methods, and advanced nonlinear control design techniques. We examine the stability of equilibrium points for representative nonlinear systems, demonstrate feedback linearization for affine nonlinear systems, design sliding mode controllers with chattering reduction, and apply backstepping to cascade systems. Computational analysis verifies stability margins, control performance, and robustness properties through simulation of Van der Pol oscillator, inverted pendulum, and nonlinear mass-spring-damper systems.\n\n\n**Key equations you'll work with:**\n- x Rⁿ\n- u R^m\n- {x} = f(x)\n\n\n## What You'll See\n\n**Visualization:** phase portraits and Lyapunov function contours\n\n- See limit cycles and stability regions\n- Observe bifurcation behavior\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/control-theory/nonlinear_control.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",2060"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/control-theory/nonlinear_control.tex",2061"category": "templates",2062"date": "2026-03-22",2063"time": "09:00"2064},2065{2066"source": "carbon_cycle",2067"content_type": "template",2068"subreddit": "CoCalc",2069"title": "Learn Global Carbon Cycle Modeling: Reservoirs, Fluxes, and Anthropogenic Perturbation with Interactive Python",2070"body": "## What You'll Learn\n\nThis technical report presents a comprehensive analysis of the global carbon cycle using box models to represent carbon exchange between 
atmosphere, ocean, and terrestrial biosphere. We examine natural carbon fluxes, anthropogenic emissions, and the resulting changes in atmospheric CO₂ concentration. The model explores the airborne fraction of emissions, ocean uptake dynamics, and climate-carbon feedbacks. Projections under different emission scenarios illustrate the long-term implications for atmospheric carbon levels.\n\n## What You'll See\n\n**Visualization:** carbon flux diagrams and atmospheric CO₂ trends\n\n- See carbon moving between atmosphere, ocean, and land\n- Observe anthropogenic perturbation to natural cycle\n\n## Make It Yours\n\nAdjust forcing parameters, extend time scales, or model specific scenarios.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/climate-science/carbon_cycle.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/climate-science/carbon_cycle.tex",
"category": "templates",
"date": "2026-03-22",
"time": "09:00"
},
{
"source": "metabolic_networks",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Metabolic Network Modeling: Flux Balance Analysis and Constraint-Based Optimization with Interactive Python",
"body": "## What You'll Learn\n\nA comprehensive computational analysis of metabolic networks using constraint-based modeling approaches. We construct stoichiometric matrices for a simplified central carbon metabolism network, perform Flux Balance Analysis (FBA) to predict optimal growth rates under nutrient limitations, analyze elementary flux modes to identify minimal functional pathways, and apply Metabolic Control Analysis (MCA) to quantify pathway regulation.
Linear programming optimization reveals maximal biomass production rates of 0.87 h^-1 under glucose-limited conditions, with sensitivity analysis identifying rate-limiting enzymes for metabolic engineering interventions.\n\n\n**Key equations you'll work with:**\n- h^-1\n- m\n- n\n\n\n## What You'll See\n\n**Visualization:** flux distributions and pathway maps\n\n- See metabolic flow through biochemical networks\n- Observe bottleneck reactions and regulation\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/computational-biology/metabolic_networks.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/computational-biology/metabolic_networks.tex",
"category": "templates",
"date": "2026-03-23",
"time": "09:00"
},
{
"source": "linear_regression",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Linear Regression: OLS, Gradient Descent, and Regularization with Interactive Python",
"body": "## What You'll Learn\n\nThis document presents a comprehensive analysis of linear regression methods including ordinary least squares (OLS), gradient descent optimization, and regularization techniques (Ridge and Lasso).
We examine model diagnostics, multicollinearity detection, and cross-validation for hyperparameter tuning.\n\n\n**Key equations you'll work with:**\n- X\n- y\n- n = n_samples\n\n\n## What You'll See\n\n**Visualization:** regression lines with residual plots\n\n- See fitted model vs observed data\n- Observe R² and prediction intervals\n\n## Make It Yours\n\nTune hyperparameters, swap in your data, or extend the model architecture.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/machine-learning/linear_regression.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/machine-learning/linear_regression.tex",
"category": "templates",
"date": "2026-03-23",
"time": "09:00"
},
{
"source": "fiber_modes",
"content_type": "template",
"subreddit": "CoCalc",
"title": "Learn Optics: Optical Fiber Mode Analysis with Interactive Python",
"body": "## What You'll Learn\n\nA hands-on exploration of Optics: Optical Fiber Mode Analysis.\n\n\n**Key equations you'll work with:**\n- V < 2.405\n- LP₀₁\n- LP_lm\n\n\n## What You'll See\n\n**Visualization:** mode field distributions and cutoff conditions\n\n- See light propagating in optical fibers\n- Observe single vs multimode behavior\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/optics/fiber_modes.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/optics/fiber_modes.tex",
"category": "templates",
"date": "2026-03-24",
"time":
"09:00"2108},2109{2110"source": "galaxy_dynamics",2111"content_type": "template",2112"subreddit": "CoCalc",2113"title": "Learn Galaxy Dynamics Rotation Curves and Dark Matter with Interactive Python",2114"body": "## What You'll Learn\n\nAnalysis of galaxy dynamics including rotation curves, dark matter profiles, and scaling relations.\n\n\n**Key equations you'll work with:**\n- ρ(r) = ρ_s / [(r/r_s)(1 + r/r_s)²]\n- M_⊙\n\n\n## What You'll See\n\n**Visualization:** rotation curves and velocity dispersion profiles\n\n- See flat rotation curves indicating dark matter\n- Observe the dark matter signature in galaxy dynamics\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/astrophysics/galaxy_dynamics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",2115"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/astrophysics/galaxy_dynamics.tex",2116"category": "templates",2117"date": "2026-03-24",2118"time": "09:00"2119},2120{2121"source": "species_distribution",2122"content_type": "template",2123"subreddit": "CoCalc",2124"title": "Learn Species Distribution Modeling Maximum Entropy and Niche Theory with Interactive Python",2125"body": "## What You'll Learn\n\nSpecies distribution models (SDMs) predict species occurrence across geographic space based on environmental correlates and presence records. We implement maximum entropy modeling following the MaxEnt framework to estimate habitat suitability for a hypothetical montane species. The analysis demonstrates fundamental vs. realized niche concepts, climate envelope methods, and model evaluation using AUC, TSS, and omission rates. Our MaxEnt model achieves AUC = 0.87, indicating excellent discrimination between suitable and unsuitable habitat. 
Climate change projections under the RCP 8.5 scenario predict 34% range contraction by 2070, with upslope shifts averaging 285 meters in elevation. Results highlight the utility of correlative niche models for conservation planning and climate change impact assessment.\n\n\n**Key equations you'll work with:**\n- n\n- P(x)\n- x = (x₁, x₂, …, xₙ)\n\n\n## What You'll See\n\n**Visualization:** habitat suitability maps and niche models\n\n- See predicted species ranges across geography\n- Observe climate envelope constraints\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/ecology/species_distribution.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",2126"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/ecology/species_distribution.tex",2127"category": "templates",2128"date": "2026-03-25",2129"time": "09:00"2130},2131{2132"source": "general_relativity",2133"content_type": "template",2134"subreddit": "CoCalc",2135"title": "Learn General Relativity: Computational Analysis of Curved Spacetime and Gravitational Phenomena with Interactive Python",2136"body": "## What You'll Learn\n\nA comprehensive computational analysis of Einstein's general relativity, exploring the curvature of spacetime and its observational consequences. We examine the Schwarzschild metric for spherically symmetric spacetimes, compute geodesic trajectories to analyze orbital precession (including Mercury's anomalous perihelion advance of 43 arcseconds per century), model gravitational wave signals from binary mergers, and investigate cosmological solutions via the Friedmann-Lemaître-Robertson-Walker metric. 
Computational methods demonstrate how mass-energy curves spacetime geometry, producing testable predictions from black hole event horizons to the large-scale structure of the universe.\n\n\n**Key equations you'll work with:**\n- G_μν = R_μν - 1/2Rg_μν\n- R_μν\n- R\n\n\n## What You'll See\n\n**Visualization:** spacetime diagrams and geodesics\n\n- See curved spacetime near masses\n- Observe gravitational time dilation\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/relativity/general_relativity.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",2137"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/relativity/general_relativity.tex",2138"category": "templates",2139"date": "2026-03-25",2140"time": "09:00"2141},2142{2143"source": "rocket_propulsion",2144"content_type": "template",2145"subreddit": "CoCalc",2146"title": "Learn Rocket Propulsion Analysis: Thrust Curves, Specific Impulse, and Staging Optimization A Comprehensive Study of Chemical Rocket Performance with Interactive Python",2147"body": "## What You'll Learn\n\nThis laboratory report presents a comprehensive analysis of rocket propulsion systems. We examine thrust curves for different propellant combinations, compare specific impulse values, and optimize multi-stage rocket configurations using the Tsiolkovsky equation. 
The analysis includes propellant mass flow rates, chamber pressure effects, and payload fraction optimization for orbital insertion missions.\n\n\n**Key equations you'll work with:**\n- F\n- m\n- v_e\n\n\n## What You'll See\n\n**Visualization:** thrust curves and specific impulse vs chamber pressure\n\n- See nozzle expansion ratios and thrust coefficients\n- Observe propulsion efficiency at different altitudes\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/aerospace/rocket_propulsion.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",2148"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-t",2149"category": "templates",2150"date": "2026-03-26",2151"time": "09:00"2152},2153{2154"source": "quantile_regression",2155"content_type": "template",2156"subreddit": "CoCalc",2157"title": "I implemented Quantile Regression from scratch in Python - here's what I learned about going beyond the mean",2158"body": "Most regression tutorials focus on OLS, which estimates E[Y|X]. But what if you care about the full distribution of Y given X?\n\n**Enter Quantile Regression**\n\nInstead of the mean, quantile regression estimates the τ-th conditional quantile:\n\nQ(τ|X) = Xᵀβ(τ)\n\nThe key insight is the \"check function\" (pinball loss):\n- When residual u ≥ 0: loss = τ × u\n- When residual u < 0: loss = (τ-1) × u\n\nThis asymmetric penalty lets you estimate any quantile by choosing τ ∈ (0,1).\n\n**Why it matters**\n\nI generated heteroscedastic data where variance increases with X. Results:\n\n1. The slope coefficient β₁(τ) *changes* across quantiles - it's larger for upper quantiles (0.9, 0.95) than lower ones (0.05, 0.1). This is a clear sign of heteroscedasticity that OLS completely misses.\n\n2. 
Prediction bands form a \"fan\" shape that widens with X - unlike OLS confidence intervals which assume constant variance.\n\n3. Median regression (τ=0.5) achieved better MAE than OLS thanks to robustness against heavy-tailed errors.\n\n**Real applications:**\n- Finance: Value-at-Risk estimation\n- Medicine: Growth chart percentiles\n- Economics: Wage distribution analysis\n\nCheck out the full interactive notebook with code and visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/quantile_regression.ipynb\n\nHappy to answer questions about the implementation!",2159"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/quantile_regression.ipynb",2160"category": "general",2161"date": "2026-03-26",2162"time": "09:00"2163},2164{2165"source": "plasma_parameters",2166"content_type": "template",2167"subreddit": "CoCalc",2168"title": "Learn Fundamental Plasma Parameters: From Debye Shielding to Wave Propagation with Interactive Python",2169"body": "## What You'll Learn\n\nWe present a comprehensive computational analysis of fundamental plasma parameters spanning laboratory, space, and fusion plasmas. Starting from the Debye length and plasma frequency, we derive characteristic length and time scales that govern collective behavior. We compute the plasma parameter N_D to verify the plasma approximation, analyze Coulomb collision frequencies and Spitzer conductivity, and examine magnetized plasma dynamics through Larmor radii and cyclotron frequencies. Wave propagation characteristics including Langmuir, ion acoustic, and Alfvén waves are calculated across parameter regimes spanning electron densities from 10^6 to 10^26 m^-3 and temperatures from 0.1 eV to 10 keV. 
Our results demonstrate that the dimensionless ratios of these fundamental scales determine whether plasmas are collisional or collisionless, magnetized or unmagnetized, and which wave modes can propagate.\n\n\n**Key equations you'll work with:**\n- N_D\n- 10^6\n- 10^26\n\n\n## What You'll See\n\n**Visualization:** temperature and density profiles\n\n- See plasma confinement in parameter space\n- Observe Debye length and plasma frequency\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/plasma-physics/plasma_parameters.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",2170"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/plasma-physics/plasma_parameters.tex",2171"category": "templates",2172"date": "2026-03-27",2173"time": "09:00"2174},2175{2176"source": "enzyme_kinetics",2177"content_type": "template",2178"subreddit": "CoCalc",2179"title": "Learn Enzyme Kinetics: Michaelis-Menten Analysis and Inhibition Studies with Interactive Python",2180"body": "## What You'll Learn\n\nThis laboratory report presents a comprehensive analysis of enzyme kinetics using the Michaelis-Menten framework and its linearizations. We examine the kinetic parameters K_m and V_max for a model enzyme system, analyze three types of reversible inhibition (competitive, uncompetitive, and mixed), and compare parameter estimation methods including Lineweaver-Burk, Eadie-Hofstee, and Hanes-Woolf plots. 
Computational analysis demonstrates the determination of inhibition constants and the diagnostic patterns that distinguish inhibition mechanisms.\n\n\n**Key equations you'll work with:**\n- K_m\n- V_max\n- v₀\n\n\n## What You'll See\n\n**Visualization:** Michaelis-Menten curves and Lineweaver-Burk plots\n\n- See reaction velocity vs substrate concentration\n- Observe Km and Vmax from kinetic data\n\n## Make It Yours\n\nChange population parameters, add new species, or simulate your biological system.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/biology/enzyme_kinetics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",2181"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/biology/enzyme_kinetics.tex",2182"category": "templates",2183"date": "2026-03-27",2184"time": "09:00"2185},2186{2187"source": "convolution",2188"content_type": "template",2189"subreddit": "CoCalc",2190"title": "Learn Convolution and Linear Time-Invariant Systems with Interactive Python",2191"body": "## What You'll Learn\n\nA hands-on exploration of Convolution and Linear Time-Invariant Systems.\n\n\n**Key equations you'll work with:**\n- h[n]\n- y[n] = ∑ₖ x[k] h[n-k]\n\n\n## What You'll See\n\n**Visualization:** input/output signals and impulse responses\n\n- See convolution operation step by step\n- Observe linear time-invariant system behavior\n\n## Make It Yours\n\nAdjust filter parameters, apply to your signals, or chain multiple transformations.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/signal-processing/convolution.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",2192"link": 
"https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/signal-processing/convolution.tex",2193"category": "templates",2194"date": "2026-03-28",2195"time": "09:00"2196},2197{2198"source": "logistic_growth",2199"content_type": "template",2200"subreddit": "CoCalc",2201"title": "Learn Logistic Growth Models: Density Dependence and Population Regulation with Interactive Python",2202"body": "## What You'll Learn\n\nThis study presents a comprehensive analysis of logistic population growth models and their extensions. We examine the classic logistic equation, the Allee effect (positive density dependence at low populations), interspecific competition, and sustainable harvesting strategies. Computational analysis demonstrates population dynamics under various parameter regimes and identifies optimal management strategies for harvested populations.\n\n\n**Key equations you'll work with:**\n- dN/dt = rN(1 - N/K)\n- r\n- K\n\n\n## What You'll See\n\n**Visualization:** S-curves showing carrying capacity approach\n\n- See population growth saturate at equilibrium\n- Observe density-dependent growth limitation\n\n## Make It Yours\n\nChange population parameters, add new species, or simulate your biological system.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/biology/logistic_growth.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",2203"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/biology/logistic_growth.tex",2204"category": "templates",2205"date": "2026-03-28",2206"time": "09:00"2207},2208{2209"source": "physics_simulation",2210"content_type": "template",2211"subreddit": "CoCalc",2212"title": "Learn Physics Simulation Rigid Body Dynamics with Interactive Python",2213"body": "## What You'll Learn\n\nReal-time physics simulation is fundamental to game development, enabling 
realistic object motion, collision responses, and environmental interactions. This report implements core game physics algorithms including Verlet integration for stable particle dynamics, impulse-based collision resolution, spring-damper systems, and projectile motion with aerodynamic drag. These techniques form the computational backbone of modern physics engines used in interactive entertainment and simulation software.\n\n\n**Key equations you'll work with:**\n- e = 0.85\n- e² = 0.72\n- x₁^min < x₂^max\n\n\n## What You'll See\n\n**Visualization:** collision detection and rigid body dynamics\n\n- See objects interact with realistic physics\n- Observe conservation laws in simulations\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/game-development/physics_simulation.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",2214"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/game-development/physics_simulation.tex",2215"category": "templates",2216"date": "2026-03-29",2217"time": "09:00"2218},2219{2220"source": "mri_signal",2221"content_type": "template",2222"subreddit": "CoCalc",2223"title": "Learn Magnetic Resonance Imaging: Signal Formation, Contrast Mechanisms, and K-Space with Interactive Python",2224"body": "## What You'll Learn\n\nA comprehensive analysis of MRI signal formation and image reconstruction. We implement the Bloch equations, analyze T1 and T2 contrast mechanisms, demonstrate k-space encoding, compare pulse sequences, and evaluate image artifacts. 
All simulations use PythonTeX for reproducibility.\n\n\n**Key equations you'll work with:**\n- T₁\n- T₂\n- T₂^*\n\n\n## What You'll See\n\n**Visualization:** relaxation curves and k-space data\n\n- See T1/T2 contrast mechanisms\n- Observe sequence timing effects on contrast\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/medical-physics/mri_signal.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",2225"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/medical-physics/mri_signal.tex",2226"category": "templates",2227"date": "2026-03-29",2228"time": "09:00"2229},2230{2231"source": "digital_filter",2232"content_type": "template",2233"subreddit": "CoCalc",2234"title": "Learn Digital Filter Design: FIR and IIR Filters with Interactive Python",2235"body": "## What You'll Learn\n\nA hands-on exploration of Digital Filter Design: FIR and IIR Filters.\n\n\n**Key equations you'll work with:**\n- N\n- w[n]\n- ε\n\n\n## What You'll See\n\n**Visualization:** frequency response and pole-zero plots\n\n- See filter shaping signal spectrum\n- Observe passband and stopband behavior\n\n## Make It Yours\n\nAdjust filter parameters, apply to your signals, or chain multiple transformations.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/signal-processing/digital_filter.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",2236"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/signal-processing/digital_filter.tex",2237"category": "templates",2238"date": "2026-03-30",2239"time": "09:00"2240},2241{2242"source": 
"survival_analysis_kaplan_meier",2243"content_type": "template",2244"subreddit": "CoCalc",2245"title": "I built a Kaplan-Meier survival estimator from scratch in Python - here's what I learned about time-to-event analysis",2246"body": "**What is Survival Analysis?**\n\nSurvival analysis is a branch of statistics that deals with \"time until something happens\" - death, machine failure, customer churn, you name it. Unlike regular statistics, it has to handle a tricky problem: censoring.\n\n**What's censoring?**\n\nImagine you're studying how long patients survive after a treatment. Some patients are still alive when your study ends. You don't know when they'll die - you just know they survived at least until the end of the study. That's \"right-censoring,\" and it's everywhere in real-world data.\n\n**The Kaplan-Meier Estimator**\n\nThe Kaplan-Meier (KM) estimator solves this elegantly. The formula looks like this:\n\nŜ(t) = ∏(1 - dᵢ/nᵢ)\n\nIn plain English: at each time point where an event occurs, multiply the current survival probability by (1 - events/at-risk). Censored patients drop out of the \"at risk\" pool but don't count as events.\n\n**What I built:**\n\n- Full KM estimator class with fit() method\n- Confidence intervals using Greenwood's formula: Var(Ŝ) = Ŝ² × ∑(dᵢ/(nᵢ(nᵢ-dᵢ)))\n- Log-rank test to compare two survival curves\n- Visualization with step functions and CI bands\n\n**Key insight:**\n\nThe beauty of KM is that it's non-parametric - no assumptions about the underlying distribution. 
The step function perfectly represents what we actually know: survival probability only changes when events happen.\n\n**Try it yourself:**\n\nView and run the full notebook here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/survival_analysis_kaplan_meier.ipynb\n\nLibraries used: NumPy, Pandas, Matplotlib, SciPy (just for chi-squared p-values)",2247"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/survival_analysis_kaplan_meier.ipynb",2248"category": "general",2249"date": "2026-03-30",2250"time": "09:00"2251},2252{2253"source": "kinematics",2254"content_type": "template",2255"subreddit": "CoCalc",2256"title": "Learn Robot Kinematics: Forward and Inverse Analysis with Interactive Python",2257"body": "## What You'll Learn\n\nThis document presents a comprehensive analysis of robot kinematics using the Denavit-Hartenberg (DH) convention. We explore forward kinematics for a 3-DOF planar manipulator and 6-DOF articulated arm, implement inverse kinematics solutions using both geometric and numerical methods, compute the Jacobian matrix for velocity analysis, and visualize the robot workspace. 
The analysis demonstrates the mathematical foundations essential for robot motion planning and control.\n\n\n**Key equations you'll work with:**\n- ^(i-1)T_i\n- θᵢ\n\n\n## What You'll See\n\n**Visualization:** joint configurations and workspace boundaries\n\n- See robot arm reaching positions\n- Observe inverse kinematics solutions\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/robotics/kinematics.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",2258"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/robotics/kinematics.tex",2259"category": "templates",2260"date": "2026-03-31",2261"time": "09:00"2262},2263{2264"source": "diffraction_grating",2265"content_type": "template",2266"subreddit": "CoCalc",2267"title": "Learn Diffraction Grating Spectroscopy: Principles and Applications with Interactive Python",2268"body": "## What You'll Learn\n\nA hands-on exploration of Diffraction Grating Spectroscopy: Principles and Applications.\n\n\n**Key equations you'll work with:**\n- d\n- θᵢ\n- θ_m\n\n\n## What You'll See\n\n**Visualization:** diffraction patterns and spectral orders\n\n- See light separating by wavelength\n- Observe resolving power and dispersion\n\n## Make It Yours\n\nModify parameters, extend the analysis, or adapt it to your specific research needs.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/optics/diffraction_grating.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",2269"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/optics/diffraction_grating.tex",2270"category": "templates",2271"date": 
"2026-03-31",2272"time": "09:00"2273},2274{2275"source": "stellar_evolution",2276"content_type": "template",2277"subreddit": "CoCalc",2278"title": "Learn Stellar Evolution: From Main Sequence to Stellar Remnants A Comprehensive Analysis of the HR Diagram and Nuclear Burning Stages with Interactive Python",2279"body": "## What You'll Learn\n\nThis comprehensive analysis explores stellar structure and evolution through the Hertzsprung-Russell diagram. We derive the fundamental stellar relations including the mass-luminosity relation, main sequence lifetime, and Stefan-Boltzmann law. The analysis covers all major evolutionary phases from pre-main sequence contraction through hydrogen and helium burning to white dwarf, neutron star, and black hole endpoints. We generate synthetic stellar populations to visualize the main sequence, red giant branch, horizontal branch, and white dwarf cooling sequence, and explore the physics governing each evolutionary stage.\n\n\n**Key equations you'll work with:**\n- L\n- T_eff\n- σ = 5.67 × 10^-8\n\n\n## What You'll See\n\n**Visualization:** Hertzsprung-Russell diagram with evolutionary tracks\n\n- See stars evolving from main sequence to red giant\n- Observe mass-dependent stellar lifetimes\n\n## Make It Yours\n\nChange stellar parameters, simulate different objects, or model your observation.\n\n---\n\n**Template link:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/templates/astronomy/stellar_evolution.tex\n\n*Open-source computational LaTeX template - use freely for research and education.*",2280"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/latex-templates/te",2281"category": "templates",2282"date": "2026-04-01",2283"time": "09:00"2284},2285{2286"source": "finite_difference_method",2287"content_type": "notebook",2288"subreddit": "CoCalc",2289"title": "How to Solve Differential Equations with Python: Finite Difference Method Explained",2290"body": "I created a Jupyter notebook demonstrating the Finite Difference Method (FDM) - a fundamental numerical technique for solving differential equations that can't be solved analytically.\n\n**The Basic Idea (ELI5):**\n\nImagine you want to find a curve, but you only know rules about how the curve bends. Calculus gives us derivatives to describe this bending, but computers can't do calculus directly.\n\nFDM's trick: approximate the \"bendiness\" (derivative) by looking at nearby points. 
If you know y at x-h, x, and x+h, you can estimate y'' as:\n\ny'' ≈ (y(x+h) - 2y(x) + y(x-h)) / h²\n\n**What the notebook covers:**\n\n1. Taylor series derivation of forward, backward, and central differences\n2. Setting up the tridiagonal matrix system Ay = b\n3. Solving y'' - y = -x with boundary conditions y(0) = y(1) = 0\n4. Convergence analysis showing O(h²) error reduction\n\n**Key results:**\n\n- With 50 interior grid points: max error ≈ 10⁻⁵\n- Doubling grid points → error drops by factor of 4\n- Convergence rate measured at 2.0 (confirming second-order accuracy)\n\n**What I learned:**\n\nThe beauty of FDM is how it converts calculus into linear algebra. The resulting tridiagonal matrix is sparse and well-conditioned, making it efficient to solve even for large grids.\n\n**View the full notebook here:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/finite_difference_method.ipynb\n\nLibraries used: NumPy, SciPy, Matplotlib",2291"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/finite_difference_method.ipynb",2292"category": "general",2293"date": "2026-04-01",2294"time": "09:00"2295},2296{2297"source": "gamblers_ruin_problem",2298"content_type": "notebook",2299"subreddit": "CoCalc",2300"title": "I implemented the Gambler's Ruin Problem - here's why the house ALWAYS wins (with Python code)",2301"body": "**The Problem**\n\nImagine you have $10 and want to reach $20 by making fair 50/50 bets. What's your probability of going broke first?\n\n**The Surprising Answer**\n\nEven in a perfectly fair game, your probability of ruin is 1 - i/N. With $10 out of $20 total, that's 50%!\n\n**The Math (in plain terms)**\n\nFor unfair games where you win with probability p:\n- Let r = (1-p)/p\n- P(ruin) = (rⁱ - rᴺ)/(1 - rᴺ)\n\n**What I Learned**\n\n1. **Fair doesn't mean safe**: In a fair game, the expected number of rounds is i × (N - i). 
You WILL eventually hit an absorbing state.\n\n2. **The house edge is brutal**: At p = 0.49 (a tiny 2% disadvantage), starting with $10 of $20, your ruin probability jumps from 50% to ~60%.\n\n3. **Bankroll management matters**: Starting with more capital relative to your goal dramatically reduces ruin risk.\n\n**The Code**\n\nI ran 5000 Monte Carlo simulations for each scenario and compared them against the closed-form solutions. They match beautifully!\n\nKey functions:\n- `probability_of_ruin(i, N, p)` - analytical solution\n- `simulate_gamblers_ruin(i, N, p)` - Monte Carlo validation\n\n**Try it yourself!**\n\nView and run the full notebook here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/gamblers_ruin_problem.ipynb",2302"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/gamblers_ruin_problem.ipynb",2303"category": "general",2304"date": "2026-04-02",2305"time": "09:00"2306},2307{2308"source": "algebraic_structures",2309"content_type": "notebook",2310"subreddit": "SageMath",2311"title": "I built a Python toolkit to explore Groups, Rings, and Fields - visualizing abstract algebra with Cayley tables",2312"body": "Hey everyone! I created an interactive Jupyter notebook exploring the three fundamental algebraic structures that underpin modern mathematics.\n\n**What are these structures?**\n\n- **Groups (G, *)** - A set with one operation satisfying: closure, associativity, identity element, and inverses. Example: integers mod n under addition (Zn)\n\n- **Rings (R, +, x)** - Two operations where (R, +) is an abelian group, multiplication is associative, and distributivity holds. Example: integers Z\n\n- **Fields (F, +, x)** - A ring where every nonzero element has a multiplicative inverse. Example: integers mod p for prime p (Fp)\n\n**What I implemented:**\n\n1. `CyclicGroup` class - computes Cayley tables, verifies all four group axioms programmatically\n2. 
`FiniteField` class - uses Fermat's little theorem (a^(-1) = a^(p-2) mod p) for multiplicative inverses\n3. `PolynomialRing` class - arithmetic over finite fields\n4. `SymmetricGroup` class - permutation composition, parity, and alternating groups\n\n**Cool discoveries:**\n\n- Cayley tables are always Latin squares - each element appears exactly once per row/column. This is a visual proof that groups have unique solutions!\n- Verified Lagrange's theorem: subgroup orders always divide group order\n- Group homomorphism from Z6 to Z2 has kernel {0, 2, 4}\n\nThe visualization shows Cayley tables for Z5, Z6, and both operations in F5. The patterns are hypnotic!\n\n**View the full interactive notebook here:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/algebraic_structures.ipynb\n\nHappy to answer questions about the math or implementation!",2313"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/algebraic_structures.ipynb",2314"category": "general",2315"date": "2026-04-02",2316"time": "09:00"2317},2318{2319"source": "riemann_zeta_function_zeros",2320"content_type": "notebook",2321"subreddit": "CoCalc",2322"title": "I built a Python tool to hunt for zeros of the Riemann zeta function - here's what I learned",2323"body": "The Riemann zeta function is defined as:\n\nζ(s) = 1 + 1/2^s + 1/3^s + 1/4^s + ...\n\nWhere s is a complex number. This innocent-looking sum connects to prime numbers through Euler's product formula:\n\nζ(s) = ∏(1/(1 - p^(-s))) over all primes p\n\n**The Riemann Hypothesis** (unsolved since 1859!) says all \"interesting\" zeros of this function lie on the line where Re(s) = 1/2. 
These zeros encode information about how primes are distributed.\n\n**What I built:**\n- Numerical ζ(s) computation using the Dirichlet eta function (converges better in the critical strip)\n- The Riemann-Siegel Z(t) function which is real-valued on the critical line\n- Zero-finding via sign changes + Brent's method refinement\n\n**Key results:**\n- Found zeros at t ≈ 14.13, 21.02, 25.01, 30.42, 32.94...\n- ALL computed zeros lie exactly on Re(s) = 1/2\n- The spacing between zeros follows the GUE distribution from random matrix theory\n\n**Mind-blowing fact:** The same statistics that describe energy level spacings in quantum systems describe the gaps between prime-related zeros. Nobody fully understands why.\n\nThe full interactive notebook with visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/riemann_zeta_function_zeros.ipynb",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/riemann_zeta_function_zeros.ipynb",
"category": "general",
"date": "2026-04-03",
"time": "09:00"
},
{
"source": "monte_carlo_option_pricing",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "Monte Carlo Simulation for Option Pricing - Python Implementation with Variance Reduction",
"body": "I built a Monte Carlo option pricing model in Python and wanted to share what I learned!\n\n**The Problem:** How do you price a financial option (the right to buy/sell a stock at a fixed price)?\n\n**ELI5 Version:** Imagine you want to know the fair price of a lottery ticket. You could:\n1. Simulate the lottery 100,000 times\n2. Calculate your winnings each time\n3. Average them all\n4. 
That's roughly what the ticket is worth!\n\nThat's Monte Carlo simulation in a nutshell.\n\n**The Math (simplified):**\n- Stock prices follow random walks (geometric Brownian motion)\n- S_T = S₀ · exp[(r - σ²/2)T + σ√T · Z], where Z ~ Normal(0,1)\n- Option payoff = max(S_T - Strike, 0) for calls\n- Price = discounted average of all simulated payoffs\n\n**Results:**\n- 100,000 simulations\n- Monte Carlo price: 8.0215\n- Black-Scholes (exact): 8.0214\n- Error: ~0.001%!\n\n**Key Insight:** Standard error decreases as 1/√N, so you need 100× more simulations for 10× better precision. That's why variance reduction matters!\n\n**Bonus - Antithetic Variates:**\nInstead of just using random number Z, also use -Z. This creates negative correlation between paired estimates, reducing variance by ~50% for free!\n\nView the full notebook with all code and visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/monte_carlo_option_pricing.ipynb",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/monte_carlo_option_pricing.ipynb",
"category": "general",
"date": "2026-04-03",
"time": "09:00"
},
{
"source": "chebyshev_polynomials",
"content_type": "notebook",
"subreddit": "SageMath",
"title": "Visualizing why Chebyshev nodes fix Runge's phenomenon - Python implementation",
"body": "If you've ever done polynomial interpolation and wondered why it works great in the middle but goes haywire at the edges, you've encountered Runge's phenomenon.\n\n**The Problem:**\nTake the innocent-looking function f(x) = 1/(1+25x²). Try to interpolate it with 11 evenly-spaced points. 
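The failure (and the Chebyshev fix) is easy to reproduce in a few lines of NumPy — a quick sketch, not the notebook's code:

```python
import numpy as np

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)        # Runge's function
xx = np.linspace(-1, 1, 2001)                  # dense grid for measuring the error

x_eq = np.linspace(-1, 1, 11)                  # 11 equally spaced nodes
k = np.arange(1, 12)
x_ch = np.cos((2 * k - 1) * np.pi / 22)        # 11 Chebyshev nodes

def max_err(nodes):
    p = np.polyfit(nodes, f(nodes), len(nodes) - 1)   # degree-10 interpolant
    return np.max(np.abs(np.polyval(p, xx) - f(xx)))

print(max_err(x_eq))                           # ~1.9: wild oscillation near the edges
print(max_err(x_ch))                           # ~0.11: tame everywhere
```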
The result near x = ±1 oscillates wildly and completely misses the actual function.\n\n**The Solution: Chebyshev Nodes**\n\nInstead of equidistant points, use:\nxₖ = cos((2k-1)π/(2n))\n\nThese points cluster near the boundaries, which is exactly where polynomial interpolants tend to misbehave.\n\n**Why it works:**\nChebyshev polynomials Tₙ(x) = cos(n·arccos(x)) have the \"minimax\" property - they minimize the maximum error. Their zeros (the nodes) are optimally distributed for interpolation.\n\n**Key properties I verified:**\n- Tₙ oscillates exactly n times between -1 and 1\n- Orthogonal with weight 1/√(1-x²)\n- Satisfy recurrence: Tₙ₊₁ = 2x·Tₙ - Tₙ₋₁\n\n**Python implementation:**\nUsed scipy.special.eval_chebyt and eval_chebyu. The orthogonality integrals check out numerically:\n- ∫T_m·T_n/√(1-x²)dx = 0 for m≠n\n- = π for m=n=0\n- = π/2 for m=n≠0\n\nThe visualization comparing equidistant vs Chebyshev interpolation is striking.\n\n**Full notebook with code and plots:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/chebyshev_polynomials.ipynb",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/chebyshev_polynomials.ipynb",
"category": "general",
"date": "2026-04-04",
"time": "09:00"
},
{
"source": "gaussian_curvature",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I computed Gaussian Curvature for spheres, saddles, and tori in Python - here's why every map of Earth is wrong",
"body": "**ELI5 version:** Imagine wrapping a gift. You can wrap a cylinder (like a tube) with flat paper - just roll it up. But try wrapping a ball perfectly with flat paper and you'll get creases or have to tear it. 
That's Gaussian curvature at work!\n\n**The math:** Gaussian curvature K = κ₁ · κ₂, where κ₁ and κ₂ are the maximum and minimum \"bendiness\" at any point.\n\n- Sphere: K = 1/R² (always positive, bends the same way in all directions)\n- Saddle: K < 0 (bends up in one direction, down in another - think Pringles)\n- Plane: K = 0 (no bending)\n\n**Why it matters:** Gauss proved the \"Theorema Egregium\" (Remarkable Theorem) in 1827: you cannot change a surface's Gaussian curvature without stretching or tearing. A sphere has K > 0, a plane has K = 0, so there's no perfect flat map of Earth. Every projection (Mercator, Robinson, etc.) introduces distortion.\n\n**What I built:** Used NumPy to compute K numerically using the first and second fundamental forms:\n\nK = (LN - M²) / (EG - F²)\n\nVisualized three surfaces:\n1. Sphere - constant positive curvature\n2. Hyperbolic paraboloid (saddle) - negative curvature everywhere\n3. Torus - K varies! Positive on outer rim, negative on inner rim, zero at top/bottom\n\n**Cool applications:**\n- General Relativity uses this concept for spacetime curvature\n- Computer graphics uses curvature for mesh analysis\n- Material science studies stress in curved shells\n\nView the interactive notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/gaussian_curvature.ipynb",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/gaussian_curvature.ipynb",
"category": "general",
"date": "2026-04-04",
"time": "09:00"
},
{
"source": "complex_integration",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I implemented numerical verification of complex integration theorems (Cauchy, Residues) in Python",
"body": "**TL;DR:** Built a Python notebook that numerically verifies the major theorems of complex integration and shows how to evaluate \"impossible\" real integrals.\n\n---\n\n**What is 
this?**\n\nComplex integration extends calculus to functions of complex numbers. This leads to incredibly powerful results:\n\n**Cauchy's Integral Theorem:** If f(z) is analytic (differentiable everywhere in a region), then integrating around any closed loop gives zero:\n\n∮f(z)dz = 0\n\n**Cauchy's Integral Formula:** You can compute f(z₀) at any interior point using only boundary values:\n\nf(z₀) = (1/2πi) ∮ f(z)/(z - z₀) dz\n\n**Residue Theorem:** For functions with poles (singularities), the contour integral equals 2πi times the sum of residues:\n\n∮f(z)dz = 2πi · Σ Res(f, zₖ)\n\n---\n\n**The Cool Application**\n\nThe real integral ∫₋∞^∞ dx/(x²+1) is the classic warm-up: you can check it with arctan, but complex analysis reproduces it without ever finding an antiderivative:\n\n1. Extend to f(z) = 1/(z²+1)\n2. Note poles at z = i and z = -i\n3. Use semicircular contour in upper half-plane (encloses only z = i)\n4. Residue at z = i is 1/(2i)\n5. Result: 2πi · (1/2i) = π\n\nThe notebook verifies this numerically - direct integration and residue theorem both give π!\n\n---\n\n**Implementation Details**\n\n- Used parametric contour integration with trapezoidal rule\n- Verified all three major theorems numerically\n- Errors on order of 10⁻¹⁵ (machine precision)\n- Visualizations show contours, poles, and function magnitudes\n\n**View the interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/complex_integration.ipynb\n\n---\n\n**What I Learned**\n\nComplex analysis isn't just abstract math - it's a computational tool. 
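For instance, the π result above needs nothing more than a parametric trapezoidal rule — a sketch (R = 50 and the point counts are arbitrary choices, not the notebook's):

```python
import numpy as np

def ctrapz(y, t):
    # trapezoidal rule for complex samples y(t)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))

f = lambda z: 1.0 / (z**2 + 1.0)
R = 50.0                                       # any R > 1 encloses only the pole at z = i

x = np.linspace(-R, R, 200_001)                # straight piece along the real axis
seg = ctrapz(f(x + 0j), x)

th = np.linspace(0.0, np.pi, 20_001)           # semicircular arc in the upper half-plane
z = R * np.exp(1j * th)
arc = ctrapz(f(z) * 1j * z, th)                # dz = i R e^{i th} dth = i z dth

total = seg + arc
print(abs(total - np.pi))                      # residue theorem: total = 2πi · (1/2i) = π
```

Because the contour is closed, the result equals π for any R > 1; only the trapezoidal discretization error remains.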
Problems that seem impossible in real calculus become elegant with residues.\n\nWould love feedback on the implementation or suggestions for other applications!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/complex_integration.ipynb",
"category": "general",
"date": "2026-04-05",
"time": "09:00"
},
{
"source": "character_theory",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "Built a Character Table Calculator for Finite Groups in Python - Here's How Characters Reveal Group Structure",
"body": "I just finished implementing character theory for the symmetric group S₃ and wanted to share what I learned!\n\n**What is character theory?**\n\nA character χ of a group representation ρ is simply the trace of the matrix: χ(g) = Tr(ρ(g)). This single number encodes essential information about how the group acts.\n\n**Why is this cool?**\n\n1. Characters are *class functions* - they're constant on conjugacy classes (elements related by conjugation)\n2. Irreducible characters form an orthonormal basis with respect to the inner product ⟨χᵢ, χⱼ⟩ = (1/|G|) Σ χᵢ(g)χⱼ(g)*\n3. 
The number of irreducible representations equals the number of conjugacy classes\n\n**Results for S₃:**\n\nS₃ has 6 elements and 3 conjugacy classes:\n- Identity: {e}\n- Transpositions: {(12), (13), (23)}\n- 3-cycles: {(123), (132)}\n\nThe character table:\n\n| | e | (12) | (123) |\n|--|---|------|-------|\n| χ_trivial | 1 | 1 | 1 |\n| χ_sign | 1 | -1 | 1 |\n| χ_standard | 2 | 0 | -1 |\n\nThe degrees satisfy: 1² + 1² + 2² = 6 = |S₃|\n\n**Applications:**\n- Molecular symmetry in chemistry\n- Particle physics (classifying bosons/fermions)\n- Generalized Fourier analysis on groups\n\nCheck out the full interactive notebook with code and visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/character_theory.ipynb",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/character_theory.ipynb",
"category": "general",
"date": "2026-04-05",
"time": "09:00"
},
{
"source": "naive_bayes_classifier",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "Built a Gaussian Naive Bayes Classifier from Scratch - Here's What I Learned",
"body": "I implemented a Naive Bayes classifier without using scikit-learn to really understand the probabilistic foundations. Here's the breakdown:\n\n**The Core Idea (ELI5):**\nImagine you're trying to guess if an email is spam. Instead of looking at all words together (computationally expensive), Naive Bayes assumes each word gives independent evidence. It's \"naive\" because words aren't truly independent, but this simplification works surprisingly well!\n\n**The Math:**\n- Bayes' theorem: P(Class|Features) ∝ P(Features|Class) × P(Class)\n- Naive assumption: P(Features|Class) = P(feature₁|Class) × P(feature₂|Class) × ...\n- For Gaussian NB: each feature follows a normal distribution per class\n\n**Key Implementation Details:**\n1. 
Use log-probabilities to prevent underflow (multiplying many small numbers → zero)\n2. Add small variance (1e-9) for numerical stability\n3. Apply softmax for normalized probability outputs\n\n**What I Learned:**\n- Training is just computing means and variances per class: O(nd) complexity\n- Despite violating independence assumption (my features had ~0.3 correlation), accuracy remained high\n- The decision boundary is linear in the log-odds space\n\n**Results:**\n- Clean separation of synthetic Gaussian classes\n- Interpretable parameters (you can see what each class \"looks like\")\n- Fast training and prediction\n\nInteractive notebook with full code and visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/naive_bayes_classifier.ipynb",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/naive_bayes_classifier.ipynb",
"category": "general",
"date": "2026-04-06",
"time": "09:00"
},
{
"source": "sparse_matrix_operations",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I benchmarked sparse vs dense matrix operations in Python - here's what I learned about when sparse is worth it",
"body": "Hey everyone! I just put together a Jupyter notebook exploring sparse matrix operations and wanted to share some findings that surprised me.\n\n**What are sparse matrices?**\n\nImagine a 5000×5000 matrix where only 1% of elements are non-zero. That's 250,000 values in a sea of 25 million zeros. 
Sparse formats like CSR (Compressed Sparse Row) only store the non-zeros plus some index arrays.\n\n**The benchmarks**\n\nI tested matrix-vector multiplication across sizes from 100 to 5000 at 1% density:\n\n| Size | Sparse (ms) | Dense (ms) | Speedup |\n|------|-------------|------------|---------|\n| 1000 | 0.05 | 1.2 | 24x |\n| 3000 | 0.14 | 11.5 | 82x |\n| 5000 | 0.23 | 32.1 | 140x |\n\nMemory usage tells an even better story - a 5000×5000 sparse matrix at 1% density uses about 1MB vs 190MB for dense.\n\n**When to go sparse**\n\nThe crossover point depends on your operations, but generally:\n- >95% zeros → definitely use sparse\n- 80-95% zeros → benchmark your specific use case\n- <80% zeros → probably stick with dense\n\n**Solver comparison**\n\nFor solving Ax = b on a 2500×2500 Poisson system:\n- Direct solver (spsolve): ~10ms\n- Conjugate Gradient: ~15ms\n- GMRES: ~20ms\n- BiCGSTAB: ~18ms\n\nAll achieved relative errors around 10⁻¹⁴.\n\n**View the full notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/sparse_matrix_operations.ipynb\n\nThe notebook includes visualizations of sparsity patterns, memory scaling, and solver performance. Happy to answer questions!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/sparse_matrix_operations.ipynb",
"category": "general",
"date": "2026-04-06",
"time": "09:00"
},
{
"source": "random_forest_classification",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "Built Random Forest from Scratch in Python - Here's What I Learned",
"body": "I implemented a complete Random Forest classifier using only NumPy to really understand how ensemble methods work. Here's the breakdown:\n\n**What is Random Forest?**\n\nThink of it like asking 100 different experts to vote on a decision. 
Each expert (decision tree) sees a slightly different version of the data and focuses on different features. The final answer is whatever most experts agree on.\n\n**The Key Ideas:**\n\n1. **Bootstrap Aggregating (Bagging)**: Each tree trains on a random sample WITH replacement. About 63% of data points appear in each sample - the rest become \"out-of-bag\" samples for free validation.\n\n2. **Random Feature Selection**: At each split, only consider sqrt(total features) randomly chosen features. This decorrelates the trees.\n\n3. **Gini Impurity**: Measures how \"mixed\" the classes are at a node. Formula: G = 1 - (p₀² + p₁²). Pure node = 0, maximum impurity = 0.5 for binary classification.\n\n**Why It Works (The Math):**\n\nEnsemble variance = ρσ² + ((1-ρ)/B)·σ²\n\nWhere ρ is the pairwise correlation between trees, σ² is the individual tree variance, and B is the number of trees.\n\nBy making trees less correlated (random features), we reduce ρ, which reduces overall variance. 
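The variance formula can be checked directly by simulating correlated "tree errors" — a toy sketch (Gaussian errors with pairwise correlation rho = 0.3 are an assumption for illustration, not the notebook's data):

```python
import numpy as np

rng = np.random.default_rng(0)
B, rho, trials = 100, 0.3, 20_000              # trees, pairwise correlation, repetitions

shared = rng.standard_normal((trials, 1))      # error component common to every tree
own = rng.standard_normal((trials, B))         # error component private to each tree
trees = np.sqrt(rho) * shared + np.sqrt(1 - rho) * own   # unit variance, correlation rho

measured = trees.mean(axis=1).var()            # variance of the ensemble average
predicted = rho + (1 - rho) / B                # the formula, with sigma^2 = 1
print(measured, predicted)                     # both ~0.307
```

Note how the (1-ρ)/B term is already tiny at B = 100: past that point, only lowering ρ helps.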
More trees (larger B) also helps, but with diminishing returns.\n\n**My Results:**\n\n- Single Decision Tree: ~91% accuracy\n- Random Forest (100 trees): ~96% accuracy\n- OOB Score closely matched test accuracy\n\nThe decision boundary plot really shows the difference - the forest produces much smoother boundaries than a single tree.\n\n**View the full interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/random_forest_classification.ipynb\n\nHappy to answer questions about the implementation!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/random_forest_classification.ipynb",
"category": "general",
"date": "2026-04-07",
"time": "09:00"
},
{
"source": "policy_gradient_methods",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "Implemented REINFORCE Policy Gradient Algorithm from Scratch - Here's How It Works",
"body": "I built the REINFORCE algorithm from scratch using only NumPy to understand policy gradients better. Here's what I learned:\n\n**What are policy gradients?**\n\nUnlike Q-learning (which learns action values), policy gradient methods directly optimize the policy π_θ(a|s) - the probability of taking action a in state s.\n\n**The core insight:**\n\nThe policy gradient theorem tells us:\n∇J(θ) = E[∑ ∇log π(a|s) · G_t]\n\nIn plain English: increase the probability of actions that led to high returns, decrease those that didn't.\n\n**Variance reduction:**\n\nRaw REINFORCE has high variance. The fix? Subtract a baseline b from returns:\n∇J(θ) = E[∑ ∇log π(a|s) · (G_t - b)]\n\nThis doesn't add bias (since E[∇log π · b] = 0) but dramatically reduces variance.\n\n**What I built:**\n\n- Custom CartPole physics simulation\n- Linear policy network with softmax\n- REINFORCE with moving average baseline\n\n**Results:**\n\nAfter 1000 episodes, the agent consistently balances the pole for 195+ steps. 
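Stripped down to a two-armed bandit (a hypothetical toy, not the notebook's CartPole setup), the whole update rule fits in a dozen lines:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                            # one logit per action
baseline, alpha = 0.0, 0.1

def policy(theta):
    p = np.exp(theta - theta.max())
    return p / p.sum()                         # softmax pi(a)

for episode in range(3000):
    p = policy(theta)
    a = rng.choice(2, p=p)
    G = 1.0 if a == 0 else 0.0                 # action 0 is the rewarding one
    baseline += 0.05 * (G - baseline)          # moving-average baseline
    grad_log = -p
    grad_log[a] += 1.0                         # grad of log softmax at the chosen action
    theta = theta + alpha * (G - baseline) * grad_log   # REINFORCE update

print(policy(theta))                           # probability mass concentrates on action 0
```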
The learned policy is intuitive - when the pole tilts right (θ > 0), push right to catch it.\n\n**Key takeaways:**\n\n1. Policy gradients can learn stochastic policies (useful for exploration)\n2. They handle continuous actions naturally\n3. But they're sample-inefficient compared to value methods\n\nThis is the foundation for modern algorithms like PPO and SAC.\n\n**Interactive Notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/policy_gradient_methods.ipynb\n\nHappy to answer questions about the implementation!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/policy_gradient_methods.ipynb",
"category": "general",
"date": "2026-04-07",
"time": "09:00"
},
{
"source": "ito_calculus_basics",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I built a Python simulation to understand why stochastic integrals are \"broken\" (Itô calculus)",
"body": "**The Problem**\n\nIn regular calculus, we learn that ∫x dx = x²/2. Simple, right?\n\nBut when x is replaced by Brownian motion W_t (the mathematical model of random noise), something weird happens:\n\n∫W_t dW_t = ½W_T² - **½T**\n\nWhere does that extra -½T come from?!\n\n**The Answer: Quadratic Variation**\n\nBrownian motion is so \"rough\" that it accumulates quadratic variation equal to T over the time interval [0,T]. This affects how we compute integrals. The Itô integral evaluates the integrand at the left endpoint of each subinterval, while the Stratonovich integral uses midpoints.\n\nThe difference between them is exactly T/2 - half the quadratic variation.\n\n**Why This Matters**\n\nThis is the foundation of mathematical finance. 
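The left-point vs midpoint distinction is easy to check numerically — a minimal sketch (T = 1, path and step counts chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
paths, steps, T = 5000, 1000, 1.0
dt = T / steps

dW = np.sqrt(dt) * rng.standard_normal((paths, steps))
W = np.concatenate([np.zeros((paths, 1)), np.cumsum(dW, axis=1)], axis=1)  # W_0 = 0

ito = np.sum(W[:, :-1] * dW, axis=1)                        # left-point rule
strat = np.sum(0.5 * (W[:, :-1] + W[:, 1:]) * dW, axis=1)   # midpoint (average) rule
QV = np.sum(dW**2, axis=1)                                  # quadratic variation, -> T

print(QV.mean())                               # ~1.0 = T
print((strat - ito).mean())                    # ~0.5 = T/2, the correction term
# discrete identity (exact, by telescoping): ito = W_T²/2 - QV/2
```

The telescoping identity holds path by path even before taking limits, which is exactly where the -½T in the formula above comes from.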
The Black-Scholes formula for option pricing uses Itô calculus, and that mysterious -σ²/2 drift correction comes directly from Itô's lemma.\n\n**What I Learned**\n\nI simulated 5000 Brownian motion paths and computed both integrals numerically:\n\n- Itô integral mean: ≈ 0 (matches theory: E[½W_T²] - ½T = 0)\n- Stratonovich integral mean: ≈ 0.5 (the ordinary-calculus answer E[½W_T²] = T/2)\n- Itô isometry: E[(∫f dW)²] = ∫f² dt verified for multiple test functions\n\nThe code uses NumPy for vectorized simulation and Matplotlib for visualization. No external dependencies beyond the scientific Python stack.\n\n**Try It Yourself**\n\nView and run the full notebook here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/ito_calculus_basics.ipynb\n\nKey takeaway: Stochastic calculus isn't \"broken\" - it's accounting for the infinite roughness of random paths that ordinary calculus can't handle.",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/ito_calculus_basics.ipynb",
"category": "general",
"date": "2026-04-08",
"time": "09:00"
},
{
"source": "yield_curve_bootstrapping",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I built a yield curve bootstrapping algorithm in Python - here's how it works",
"body": "**What is yield curve bootstrapping?**\n\nIt's a fundamental technique in fixed income that constructs a zero-coupon yield curve from market prices. Think of it as reverse-engineering the \"true\" interest rates from observable bond prices.\n\n**Why does this matter?**\n\nThe zero curve is essential for:\n- Pricing any fixed income security\n- Calculating forward rates (what the market expects future rates to be)\n- Managing interest rate risk\n\n**The Algorithm (ELI5):**\n\n1. **Start simple**: Take short-term instruments like T-bills. These are already zero-coupon, so the rate is just: r = ln(Face Value / Price) / Time\n\n2. 
**Build iteratively**: For longer-dated coupon bonds, you already know the discount factors for earlier payment dates. Solve for the unknown discount factor at maturity.\n\n3. **Extract the rate**: r(t) = -ln(d(t)) / t\n\n**Key Insights:**\n\n- Zero rates > par yields when the curve slopes upward\n- Forward rates are more volatile than spot rates\n- The choice of interpolation method affects forward rates significantly\n\n**Validation:**\n\nRepriced all 10 input instruments with errors < 0.01 basis points. The curve perfectly fits the market data by construction.\n\n**Code highlights:**\n- ~150 lines of Python\n- Uses numpy, scipy (cubic splines), matplotlib\n- Semi-annual coupon payments assumed\n\nView the full interactive notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/yield_curve_bootstrapping.ipynb",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/yield_curve_bootstrapping.ipynb",
"category": "general",
"date": "2026-04-08",
"time": "09:00"
},
{
"source": "multigrid_methods_poisson_equation",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I built a multigrid PDE solver from scratch in Python - here's how it achieves O(N) optimal complexity",
"body": "Ever tried to solve a large system of linear equations with basic iterative methods like Jacobi? You'll wait forever. Here's why multigrid methods are a game-changer.\n\n**The Problem:**\nThe Poisson equation ∇²u = f shows up everywhere: electrostatics, heat transfer, fluid flow. When you discretize it on a grid, you get a huge sparse linear system.\n\n**Why Simple Methods Fail:**\nIterative methods like Jacobi are great at eliminating \"wiggly\" high-frequency errors but terrible at smooth, low-frequency ones. The smoother the error, the more iterations you need.\n\n**The Multigrid Insight:**\nA smooth error on a fine grid looks wiggly on a coarser grid! 
So multigrid:\n\n1. Smooths the solution on the fine grid (kills high-freq errors)\n2. Computes the residual and restricts it to a coarser grid\n3. Recursively solves for the error correction\n4. Interpolates the correction back and applies it\n5. Smooths again\n\n**Results:**\n- 66×66 grid: Multigrid converges in ~15 V-cycles\n- Same problem with Jacobi: needs thousands of iterations\n- Convergence rate ρ ≈ 0.1 is independent of grid size!\n\nThe full notebook with visualizations and code is available here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/multigrid_methods_poisson_equation.ipynb\n\nWhat's your experience with numerical PDE solvers?",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/multigrid_methods_poisson_equation.ipynb",
"category": "general",
"date": "2026-04-09",
"time": "09:00"
},
{
"source": "bifurcation_diagram",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I created a bifurcation diagram showing how chaos emerges from the simplest nonlinear equation",
"body": "Hey everyone! I've been exploring chaos theory through Python and wanted to share this classic visualization.\n\n**The Setup**\n\nThe logistic map is deceptively simple:\n\nxₙ₊₁ = r · xₙ · (1 - xₙ)\n\nThat's it. One equation. But when you plot what happens to x as you vary the parameter r from 2.5 to 4, you get this incredible structure.\n\n**What the Diagram Shows**\n\n- **r < 3:** The system settles to a single stable value (fixed point at x* = (r-1)/r)\n- **r = 3:** First bifurcation! The system oscillates between 2 values\n- **r ≈ 3.45:** Doubles again to period-4\n- **r ≈ 3.57:** Chaos begins after infinite period-doublings\n\n**The Cool Parts**\n\n1. **Feigenbaum Constant:** The ratio of successive bifurcation intervals converges to δ ≈ 4.669201... This number is UNIVERSAL—it appears in completely different chaotic systems!\n\n2. 
**Periodic Windows:** Even in chaos, there are regions of order. The period-3 window near r ≈ 3.83 is special because \"period three implies chaos\" (Li-Yorke theorem).\n\n3. **Self-Similarity:** Zoom into any chaotic region and you'll see the same period-doubling pattern. It's fractal!\n\n**ELI5 Version**\n\nImagine a population of rabbits. r controls how fast they breed. Too slow (low r) = population stabilizes. Medium r = population oscillates between years. High r = population goes haywire, bouncing unpredictably—but following deterministic rules.\n\n**What I Learned**\n\nThis project really drove home how \"deterministic\" doesn't mean \"predictable.\" The logistic map is completely determined by its equation, yet at high r values, tiny differences in initial conditions lead to completely different outcomes.\n\nInteractive notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/bifurcation_diagram.ipynb\n\nHappy to answer questions about the implementation!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/bifurcation_diagram.ipynb",
"category": "general",
"date": "2026-04-09",
"time": "09:00"
},
{
"source": "laplace_transform",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "Exploring the Laplace Transform with Python: From Theory to Visualization",
"body": "The Laplace transform is one of the most powerful tools in engineering mathematics. It converts functions of time f(t) into functions of complex frequency F(s):\n\nL{f(t)} = F(s) = ∫₀^∞ f(t)·e^(-st) dt\n\n**Why is this useful?**\n\nIt transforms differential equations into algebraic equations. Instead of solving:\n\ny'' + 2ζωₙy' + ωₙ²y = ωₙ²u(t)\n\nYou can work with the transfer function:\n\nH(s) = ωₙ² / (s² + 2ζωₙs + ωₙ²)\n\n**What I explored:**\n\n1. **Numerical Laplace transforms** - Computing transforms via numerical integration\n2. 
**Transform pairs** - Visualizing exponentials, sine waves, and damped oscillations\n3. **Step responses** - How damping ratio ζ affects system behavior (underdamped oscillates, overdamped is sluggish)\n4. **Pole-zero analysis** - Why poles in the left half-plane mean stability\n5. **Bode plots** - Frequency response in magnitude (dB) and phase\n6. **Convolution theorem** - Time-domain convolution = frequency-domain multiplication\n\n**Key insight:** A system is stable if and only if ALL poles have negative real parts (left side of s-plane).\n\nBuilt with NumPy, SciPy, and Matplotlib. The notebook demonstrates how these abstract mathematical concepts connect to real engineering applications in control systems, signal processing, and circuit analysis.\n\n**View the interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/laplace_transform.ipynb",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/laplace_transform.ipynb",
"category": "general",
"date": "2026-04-10",
"time": "09:00"
},
{
"source": "prime_number_sieve",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "Implemented the Sieve of Eratosthenes and visualized the Prime Number Theorem",
"body": "I built a Python implementation of the Sieve of Eratosthenes and used it to explore prime number distribution. Sharing what I learned!\n\n**The Algorithm (ELI5):**\nImagine you have numbers 2 to 100 written down. Circle 2 (it's prime), then cross out all multiples of 2. Circle the next uncircled number (3), cross out its multiples. Repeat until you reach √n. All circled numbers are prime!\n\n**Why start marking from p²?**\nWhen eliminating multiples of prime p, we start from p² instead of 2p. Why? Because smaller multiples like 2p, 3p, etc. were already crossed out by smaller primes. 
Nice optimization!\n\n**Time Complexity:**\nO(n log log n) - this comes from the sum over primes: ∑(n/p) ≈ n·ln(ln(n))\n\n**Prime Number Theorem:**\nThe number of primes up to x is approximately x/ln(x). But an even better approximation is the logarithmic integral:\n\nLi(x) = ∫₂ˣ dt/ln(t)\n\nMy results at n=10,000:\n- Actual count: 1,229 primes\n- x/ln(x) estimate: 1,086 (11.7% error)\n- Li(x) estimate: 1,246 (only 1.4% error!)\n\n**Cool findings:**\n- Average prime gap near n is ≈ ln(n), confirming theory\n- Found 205 twin prime pairs (consecutive primes differing by 2)\n- The Ulam spiral visualization shows diagonal patterns - still not fully understood!\n\nThe code uses NumPy for efficiency and matplotlib for visualization. Happy to answer questions!\n\n**View the full notebook with code and interactive plots:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/prime_number_sieve.ipynb",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/prime_number_sieve.ipynb",
"category": "general",
"date": "2026-04-10",
"time": "09:00"
},
{
"source": "taylor_series_expansion",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "Visualizing Taylor Series Convergence in Python - How Your Calculator Actually Computes sin(x)",
"body": "Ever wondered how calculators evaluate transcendental functions like eˣ or sin(x)? They use Taylor series—polynomial approximations that converge to the exact function value.\n\n**What is a Taylor Series?**\n\nFor any smooth function f(x) at point a:\n\nf(x) = f(a) + f'(a)(x-a) + f''(a)/2! · (x-a)² + f'''(a)/3! · (x-a)³ + ...\n\nOr more compactly: f(x) = Σ f⁽ⁿ⁾(a)/n! · (x-a)ⁿ for n = 0 to ∞\n\n**Key Examples:**\n\n• eˣ = 1 + x + x²/2! + x³/3! + ...\n• sin(x) = x - x³/3! + x⁵/5! - ...\n• cos(x) = 1 - x²/2! + x⁴/4! - ...\n\n**What I Learned:**\n\n1. 
Error decreases exponentially with more terms (for functions with infinite radius of convergence)\n2. ln(x) expanded around a=1 only converges for 0 < x ≤ 2\n3. Factorials in denominators make convergence surprisingly fast\n\n**Error Analysis:**\n\nAt x=1, approximating e with just 11 terms gives error ≈ 3×10⁻⁸ (the tail is bounded by roughly 1/11!). The factorial growth in denominators dominates the polynomial growth in numerators.\n\n**Interactive Notebook:**\n\nYou can view and run the full notebook with visualizations here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/taylor_series_expansion.ipynb\n\nThe code implements taylor_exp(), taylor_sin(), taylor_cos(), and taylor_ln() functions using NumPy and visualizes convergence with Matplotlib.",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/taylor_series_expansion.ipynb",
"category": "general",
"date": "2026-04-11",
"time": "09:00"
},
{
"source": "bezier_curve_drawing",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I built a complete Bézier curve implementation from scratch - here's how the math works",
"body": "I just finished a deep dive into Bézier curves and wanted to share what I learned!\n\n**What are Bézier curves?**\n\nThey're parametric curves used everywhere - fonts, vector graphics, CAD, animation. Named after Pierre Bézier who developed them for car body design at Renault.\n\n**The core math (ELI5 version):**\n\nImagine you have a few \"control points\" - the curve smoothly flows from the first to the last, being \"pulled\" toward the middle ones. The formula uses Bernstein polynomials:\n\nB(t) = Σ C(n,i)·tⁱ·(1-t)ⁿ⁻ⁱ·Pᵢ\n\nWhere t goes from 0 to 1, and C(n,i) is the binomial coefficient.\n\n**Cool properties I discovered:**\n\n1. **Endpoint interpolation** - curve passes through P₀ and Pₙ exactly\n2. **Convex hull containment** - curve never leaves the \"boundary\" of control points\n3. 
**Tangent behavior** - slope at endpoints matches direction to adjacent control point\n4. **Affine invariance** - rotate/scale control points, curve transforms identically\n\n**De Casteljau's algorithm:**\n\nInstead of computing polynomials directly, you can recursively interpolate between points:\n\nPᵢ⁽ʳ⁾ = (1-t)·Pᵢ⁽ʳ⁻¹⁾ + t·Pᵢ₊₁⁽ʳ⁻¹⁾\n\nThis is more numerically stable and shows the geometric construction beautifully.\n\n**Rational Bézier curves:**\n\nBy adding weights to control points, you can draw exact conic sections. With weight 1/√2 on the middle control point of a quadratic curve, you get a perfect circular arc!\n\n**What I built:**\n\n- Bernstein polynomial evaluation\n- De Casteljau's algorithm\n- Rational Bézier curves\n- Composite splines with C¹ continuity\n- Fun shapes (heart, decorative patterns)\n\nCheck out the full notebook with code and visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/bezier_curve_drawing.ipynb\n\nHappy to answer questions about the implementation!\n\n---",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/bezier_curve_drawing.ipynb",
"category": "general",
"date": "2026-04-11",
"time": "09:00"
},
{
"source": "martingales",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I simulated martingales in Python to understand why you can't beat a fair casino",
"body": "Ever heard of the \"martingale betting strategy\" where you double your bet after each loss? Mathematically, it's doomed to fail — and the reason is the martingale property itself.\n\n**What's a martingale?**\n\nA martingale is a sequence of random variables where your best prediction of the next value, given everything you know, is just the current value:\n\nE[Xₙ₊₁ | current info] = Xₙ\n\nThink of it as a \"fair game\" — on average, you neither gain nor lose.\n\n**What I simulated:**\n\n1. 
**Simple random walk** — flip a coin, go up 1 or down 1. After 1000 paths of 500 steps each, the mean stays pinned near zero.\n\n2. **Exponential martingale** — M_n = exp(θ·S_n - n·log(cosh(θ))). Despite wild individual paths, the expected value stays at 1.\n\n3. **Optional Stopping Theorem** — stopped the walk at boundaries ±10. With 10,000 simulations, E[S_τ] ≈ 0 (theory says exactly 0).\n\n**Why it matters:**\n\n- Gambling: No strategy beats a fair game (Doob's theorem)\n- Finance: Discounted stock prices are martingales under risk-neutral measure → foundation of Black-Scholes\n- Statistics: Sequential testing and confidence sequences\n\nThe code uses NumPy and Matplotlib, runs in seconds, and the plots show the theory beautifully.\n\n**View the full interactive notebook here:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/martingales.ipynb",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/martingales.ipynb",
"category": "general",
"date": "2026-04-12",
"time": "09:00"
},
{
"source": "quantum_zeno_effect_measurement",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I simulated the Quantum Zeno Effect in Python - watching a quantum system can freeze it in place",
"body": "The Quantum Zeno Effect is one of the wildest things in quantum mechanics: if you measure a quantum system frequently enough, you can stop it from evolving. It's named after Zeno of Elea and his paradoxes about motion.\n\n**ELI5 version:** Imagine a pot of water trying to boil. Every time you check on it (\"a watched pot never boils\"), you reset it to cold. Check often enough, and it never gets hot. 
That's essentially what happens at the quantum level, except it's real physics, not just a saying.\n\n**The math (simplified):**\n- A quantum system oscillates between states with probability P(t) = cos²(Ωt/2)\n- With n measurements over time T, survival probability becomes P_n = [cos²(ΩT/2n)]^n\n- As n → ∞, P_n → 1 (the system is \"frozen\")\n\n**What I learned:**\n- The effect relies on quadratic short-time decay (not exponential)\n- Decay suppression scales as 1/n with measurement count\n- Monte Carlo simulation matches analytical predictions beautifully\n- There's also an Anti-Zeno Effect where measurements can accelerate decay!\n\nThe notebook includes derivations from the Schrödinger equation, Monte Carlo simulations, and visualizations showing how measurement frequency affects quantum evolution.\n\n**View and interact with the full notebook here:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/quantum_zeno_effect_measurement.ipynb",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/quantum_zeno_effect_measurement.ipynb",
"category": "physics",
"date": "2026-04-12",
"time": "09:00"
},
{
"source": "koch_snowflake_fractal",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I implemented the Koch Snowflake fractal in Python - here's the math behind a shape with infinite perimeter but finite area",
"body": "I built a visualization of the Koch snowflake fractal and wanted to share both the code approach and the fascinating mathematics.\n\n**The Construction**\n\nYou start with an equilateral triangle, then for each edge:\n1. Divide it into three equal parts\n2. Build an equilateral triangle on the middle third (pointing outward)\n3. Remove the base\n4. 
Repeat on all new edges\n\n**The Paradox**\n\nHere's what's wild:\n- **Perimeter** grows as (4/3)ⁿ per iteration → infinity\n- **Area** converges to exactly 8/5 of the original triangle\n\nSo you have infinite boundary enclosing finite space.\n\n**The Fractal Dimension**\n\nThe Hausdorff dimension is D = ln(4)/ln(3) ≈ 1.2619\n\nThis non-integer dimension tells you it's \"more than a line but less than a plane\" - it fills space more than 1D but doesn't cover 2D.\n\n**Python Implementation**\n\nI used complex numbers for the geometry - multiplying by e^(iπ/3) gives you the 60° rotation needed for equilateral triangles. Much cleaner than trig functions.\n\nThe recursive function builds each Koch curve segment, then three of them form the snowflake.\n\n**Interactive Notebook**\n\nYou can view and run the full code here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/koch_snowflake_fractal.ipynb\n\n**What I Learned**\n\n- Complex number arithmetic elegantly handles 2D rotations\n- Self-similarity means the same pattern at every zoom level\n- Simple recursive rules can generate infinite complexity\n\nHappy to answer questions about the implementation or math!\n\n---",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/koch_snowflake_fractal.ipynb",
"category": "general",
"date": "2026-04-13",
"time": "09:00"
},
{
"source": "text_embeddings",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "[Educational] Word Embeddings from Scratch - Complete Skip-gram Implementation in NumPy",
"body": "I created an educational notebook that implements Word2Vec (Skip-gram with negative sampling) from scratch using only NumPy.\n\n**What you'll learn:**\n\n1. **Text Preprocessing**\n - Tokenization and vocabulary building\n - Word-to-index mapping\n - Generating (center, context) training pairs\n\n2. 
**Skip-gram Model**\n - Architecture: center word predicts context words\n - Negative sampling for efficient training\n - Gradient descent update rules\n - Loss tracking over epochs\n\n3. **Visualization**\n - PCA for dimensionality reduction (from scratch)\n - t-SNE implementation for clustering (simplified)\n - Color-coded by word categories\n\n4. **Applications**\n - Cosine similarity for finding related words\n - Word similarity heatmaps\n - Vector arithmetic for analogies (A - B + C = ?)\n\nThe notebook uses a small science/technology corpus for fast training while demonstrating all concepts.\n\n4 visualizations included:\n- Training loss curve\n- PCA and t-SNE embedding plots\n- Word similarity matrix\n- Analogy visualization in embedding space\n\nPerfect for understanding how LLMs and NLP models represent language internally.\n\nInteractive notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/text_embeddings/text_embeddings.ipynb\n\nSuggested subreddits: r/MachineLearning, r/learnmachinelearning, r/LanguageTechnology, r/learnpython",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/text_embeddings/text_embeddings.ipynb",
"category": "general",
"date": "2026-04-13",
"time": "09:00"
},
{
"source": "quantum_entanglement_bell_states",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I built a quantum entanglement simulator in Python - here's how Bell states prove Einstein wrong about \"spooky action\"",
"body": "I've been fascinated by quantum mechanics and decided to implement a Bell state simulator from scratch using just NumPy and SciPy. Here's what I learned:\n\n**ELI5: What's quantum entanglement?**\n\nImagine you have two magic coins. You flip them, and they always land the same way - both heads or both tails - even if one coin is on Earth and the other is on Mars. That's entanglement. 
Einstein thought this was ridiculous and called it \"spooky action at a distance.\"\n\n**What are Bell states?**\n\nBell states are the \"most entangled\" two-qubit states possible. There are four of them:\n\n- |Φ+⟩ = (1/√2)(|00⟩ + |11⟩) - both qubits always match\n- |Φ-⟩ = (1/√2)(|00⟩ - |11⟩) - both qubits match (with phase)\n- |Ψ+⟩ = (1/√2)(|01⟩ + |10⟩) - qubits are opposite\n- |Ψ-⟩ = (1/√2)(|01⟩ - |10⟩) - qubits are opposite (with phase)\n\n**The CHSH inequality - proving quantum weirdness**\n\nIn 1964, physicist John Bell devised a test: if the universe follows \"local realism\" (Einstein's view), then a certain value S must be ≤ 2. But quantum mechanics predicts S can reach 2√2 ≈ 2.83.\n\nMy simulation confirmed it! Using optimal measurement angles (0°, 90°, 45°, 135°), the Bell state |Φ+⟩ achieves S ≈ 2.83, violating the classical bound.\n\n**What I implemented:**\n- Quantum gate operations (Hadamard, CNOT, Pauli matrices)\n- Bell state construction from |00⟩ input\n- Measurement simulation with statistical sampling\n- CHSH parameter calculation and violation demonstration\n\n**Why it matters:**\n\nThis isn't just academic - Bell inequality violations are the foundation of quantum cryptography. 
The E91 protocol uses entanglement to create unbreakable encryption keys.\n\nCheck out the full interactive notebook with visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/quantum_entanglement_bell_states.ipynb\n\nHappy to answer questions about the implementation!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/quantum_entanglement_bell_states.ipynb",
"category": "physics",
"date": "2026-04-14",
"time": "09:00"
},
{
"source": "hypothesis_testing",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I built an interactive Python notebook explaining hypothesis testing with visualizations and Monte Carlo simulations",
"body": "**What is this?**\n\nA comprehensive Jupyter notebook that teaches statistical hypothesis testing from first principles, with full Python implementations and visualizations.\n\n**ELI5: What is hypothesis testing?**\n\nImagine you're testing whether a factory produces widgets at the correct weight (100g). You grab 30 widgets and find the average is 102g. Is the factory broken, or did you just grab 30 slightly heavy ones by chance?\n\nHypothesis testing gives you a mathematical framework to answer this. You calculate a \"t-statistic\" that measures how far your sample is from expected, accounting for sample size and variability. Then you ask: \"If the factory IS working correctly, what's the probability I'd see results this extreme?\" That probability is your p-value.\n\n**What the notebook covers:**\n\n1. **One-sample t-test** - Testing if a sample mean differs from a hypothesized value\n2. **Two-sample Welch's t-test** - Comparing means between two groups\n3. **Confidence intervals** - The range where the true mean likely falls\n4. **Power analysis** - How sample size affects your ability to detect real effects\n5. 
**Effect sizes (Cohen's d)** - Measuring practical significance, not just statistical significance\n\n**The cool visualizations:**\n\n- t-distribution with rejection regions (the red zones where you reject H₀)\n- Power curves showing how detecting an effect depends on sample size\n- Monte Carlo simulation of 10,000 hypothesis tests showing p-value distributions under null (uniform!) vs alternative (skewed toward 0)\n\n**Key formulas implemented:**\n\n- t-statistic: t = (x̄ - μ₀) / (s/√n)\n- Standard error: SE = s/√n\n- Confidence interval: x̄ ± t(α/2) × SE\n- Cohen's d: d = (x̄₁ - x̄₂) / s_pooled\n\n**What I learned:**\n\nThe most eye-opening part was visualizing the p-value distributions. Under the null hypothesis, p-values are uniformly distributed (flat histogram). Under the alternative, they pile up near zero. This really drove home why we reject when p < α.\n\nAlso, power analysis is crucial. With an effect size of 2, n=30 was not enough: about 50 samples are needed to achieve 80% power. Many studies are underpowered!\n\n**View and run the notebook:**\n\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/hypothesis_testing.ipynb\n\nAll code is self-contained using numpy, scipy, and matplotlib. 
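As a minimal sketch of the one-sample t-test formulas listed above (the widget numbers are illustrative, not the notebook's actual data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=102, scale=5, size=30)  # 30 widgets, true mean 102g

mu0 = 100.0                                      # hypothesized mean
n = sample.size
se = sample.std(ddof=1) / np.sqrt(n)             # standard error SE = s/sqrt(n)
t_stat = (sample.mean() - mu0) / se              # t = (x_bar - mu_0) / SE
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)  # two-sided p-value

# Cross-check against scipy's built-in test
res = stats.ttest_1samp(sample, mu0)
assert np.isclose(t_stat, res.statistic)
assert np.isclose(p_value, res.pvalue)
```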
Feel free to modify parameters and see how results change!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/hypothesis_testing.ipynb",
"category": "general",
"date": "2026-04-14",
"time": "09:00"
},
{
"source": "zeta_function_visualization",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I visualized the Riemann Zeta Function in Python - here's what the non-trivial zeros look like",
"body": "I've always been fascinated by the Riemann Hypothesis, so I decided to implement and visualize ζ(s) from scratch in Python.\n\n**What is the zeta function?**\n\nFor Re(s) > 1, it's defined as:\n\nζ(s) = ∑(1/nˢ) for n = 1 to ∞\n\nBut here's the cool part—Euler discovered it connects directly to prime numbers:\n\nζ(s) = ∏(1/(1 - p⁻ˢ)) over all primes p\n\n**The Riemann Hypothesis**\n\nWhen you analytically continue ζ(s) to the complex plane, it has zeros at negative even integers (-2, -4, -6...). These are \"trivial.\"\n\nThe million-dollar question: Do ALL non-trivial zeros have Re(s) = 1/2?\n\n**What I implemented:**\n\n1. Complex zeta using Dirichlet series + reflection formula\n2. Domain coloring to visualize phase and magnitude\n3. 3D surface plot of |ζ(s)|\n4. 
Magnitude along the critical line showing zeros\n\n**Key findings from visualization:**\n\n- ζ(2) = π²/6 ≈ 1.6449 (Basel problem)\n- ζ(-1) = -1/12 (yes, that famous sum)\n- Non-trivial zeros at t ≈ 14.135, 21.022, 25.011, 30.425...\n- All observed zeros lie exactly on Re(s) = 1/2\n\nThe domain coloring plot is particularly beautiful—zeros appear as points where all colors converge.\n\n**View the full interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/zeta_function_visualization.ipynb\n\nLibraries used: NumPy, SciPy, Matplotlib\n\nWould love feedback on the implementation or suggestions for other mathematical functions to visualize!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/zeta_function_visualization.ipynb",
"category": "general",
"date": "2026-04-15",
"time": "09:00"
},
{
"source": "decision_tree",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "Built a Decision Tree Classifier from Scratch - Here's What I Learned About the Math",
"body": "I implemented a decision tree classifier using only NumPy to really understand how they work. Here's the breakdown:\n\n**The Core Idea**\n\nDecision trees recursively partition your feature space into rectangular regions. At each node, they find the split that best separates the classes.\n\n**Gini Impurity**\n\nThe splitting criterion I used: G = 1 - Σpₖ²\n\nWhere pₖ is the proportion of class k in the node. Gini = 0 means pure node (all one class), Gini = 0.5 for binary means maximum impurity.\n\n**What I Learned**\n\n1. **Axis-aligned boundaries**: Trees can only split perpendicular to feature axes. This creates those characteristic \"staircase\" patterns. Diagonal decision boundaries require many splits.\n\n2. 
**Bias-variance tradeoff is real**:\n - Depth=2: Underfits badly, can't capture the pattern\n - Depth=5: Sweet spot, good generalization\n - Depth=15: Overfits, creates tiny regions around individual points\n\n3. **Why ensembles work**: Single trees are unstable. Small data changes → completely different trees. Random Forests average out this variance.\n\n**Results**\n\n- 300 samples, 3 classes\n- Test accuracy: 93%+ at optimal depth\n- Training accuracy approaches 100% as depth increases (overfitting signal)\n\nThe implementation is ~100 lines of Python. The predict function just walks the tree following if-then rules - that's the interpretability advantage.\n\n**Interactive Notebook:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/decision_tree.ipynb",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/decision_tree.ipynb",
"category": "general",
"date": "2026-04-15",
"time": "09:00"
},
{
"source": "simpson_rule_integration",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I implemented Simpson's Rule and finally understand why it's so much better than the trapezoid rule",
"body": "I've been working through numerical integration methods and wanted to share what clicked for me about Simpson's Rule.\n\n**The basic idea:** Instead of connecting points with straight lines (trapezoid rule), Simpson's Rule fits parabolas through every three consecutive points. Sounds like a small change, but the results are dramatic.\n\n**Why it matters:**\n- Trapezoid rule: error scales as O(h²)\n- Simpson's rule: error scales as O(h⁴)\n\nThis means if you halve your step size:\n- Trapezoid error drops by 4×\n- Simpson's error drops by 16×\n\n**The formula:**\n∫f(x)dx ≈ (h/3)[f₀ + 4f₁ + 2f₂ + 4f₃ + 2f₄ + ... 
+ 4fₙ₋₁ + fₙ]\n\nThe weights (1, 4, 2, 4, 2, ..., 4, 1) come from the parabolic approximation.\n\n**Cool discovery:** Simpson's Rule is *exact* for any polynomial of degree 3 or less. I tested it on f(x) = x³ - 2x² + x and got machine-precision accuracy with just 3 points.\n\n**Comparison test on ∫e^(-x²)dx from 0 to 1:**\n- n=32: Trapezoid error = 4.94e-06, Simpson error = 1.35e-10\n- That's almost 5 orders of magnitude better!\n\nI also implemented an adaptive version that recursively subdivides intervals where error is too high—useful for oscillatory functions.\n\nCheck out the full notebook with visualizations here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/simpson_rule_integration.ipynb\n\nHappy to answer questions about the implementation!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/simpson_rule_integration.ipynb",
"category": "general",
"date": "2026-04-16",
"time": "09:00"
},
{
"source": "gradient_boosting_xgboost",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I built Gradient Boosting and XGBoost from scratch to understand how they really work",
"body": "I've always used XGBoost as a black box, so I decided to implement it from scratch to truly understand the algorithm. Here's what I learned:\n\n**The Core Idea (ELI5):**\nImagine you're trying to hit a target with darts. Gradient boosting is like having 100 people throw darts, where each person aims specifically at correcting the average miss of everyone who threw before them. By the end, the collective aim is incredibly accurate.\n\n**The Math (simplified):**\n- Start with a simple prediction (mean of target values)\n- Calculate residuals: how much each prediction is wrong\n- Train a decision tree to predict these residuals\n- Add the tree's predictions (scaled by learning rate) to your model\n- Repeat\n\n**What XGBoost adds:**\n1. 
Regularization to prevent overfitting: Ω(f) = γT + ½λΣw²\n2. Uses both first AND second derivatives (gradient + Hessian)\n3. Clever split-finding with gain formula\n\n**Key insight:** The learning rate controls how much we trust each tree. Small values (0.01-0.1) = slow but steady improvement. Large values (>0.3) = fast but risk overshooting.\n\n**What I built:**\n- GradientBoostingRegressorFromScratch: Basic gradient boosting\n- XGBoostStyleRegressor: With L2 regularization\n- GradientBoostingClassifierFromScratch: Using logistic loss\n\nTest MSE went from terrible to excellent as trees were added. The visualizations really helped me see how residuals shrink at each stage.\n\n**Interactive Notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/gradient_boosting_xgboost.ipynb\n\nWould love to hear if others have done similar from-scratch implementations!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/gradient_boosting_xgboost.ipynb",
"category": "general",
"date": "2026-04-16",
"time": "09:00"
},
{
"source": "voronoi_diagram_construction",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "Built Voronoi diagrams in Python - here's the math and code explained",
"body": "Just finished a notebook exploring Voronoi diagram construction and wanted to share what I learned!\n\n**What's a Voronoi diagram?**\n\nImagine dropping 20 coffee shops randomly in a city. A Voronoi diagram divides the city so that every location belongs to its nearest coffee shop. Mathematically, each cell V(pᵢ) contains all points x where the distance ‖x - pᵢ‖ is less than or equal to the distance to any other generator point.\n\n**Cool properties I discovered:**\n\n1. Every Voronoi cell is a convex polygon\n2. Cell boundaries lie on perpendicular bisectors between neighboring points\n3. 
Voronoi vertices are equidistant from at least 3 generator points\n4. There's a beautiful duality with Delaunay triangulations - they're essentially \"flipped\" versions of each other\n\n**Implementation:**\n\nUsed `scipy.spatial.Voronoi`, which is backed by the Qhull library (the classic alternative, Fortune's sweep line algorithm, runs in O(n log n)). The tricky part was handling infinite regions at the boundaries - had to write a custom function to clip them to finite polygons.\n\n**Results:**\n\n- Tested with random points, clustered distributions, and perturbed grids\n- Average cell had about 5-6 edges\n- Verified the nearest-neighbor property with 5000 random test points: 100% accuracy\n\n**Real-world applications:**\n\n- Spatial analysis and nearest neighbor queries\n- Computer graphics (texture synthesis, mesh generation)\n- Crystallography (Wigner-Seitz cells)\n- Robotics path planning\n\nCheck out the full interactive notebook here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/voronoi_diagram_construction.ipynb\n\nHappy to answer questions about the implementation!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/voronoi_diagram_construction.ipynb",
"category": "general",
"date": "2026-04-17",
"time": "09:00"
},
{
"source": "backpropagation_from_scratch",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I built a neural network from scratch to understand backpropagation - here's how it works",
"body": "I always found backpropagation confusing until I implemented it myself. Here's an ELI5 breakdown of what's actually happening:\n\n**The Goal:** Find how much each weight contributes to the error so we can adjust it.\n\n**The Problem:** In a multi-layer network, changing one weight affects everything downstream. How do we trace that influence?\n\n**The Solution - Chain Rule:** Backpropagation breaks this into steps:\n\n1. 
Compute output error (easy: prediction minus truth)\n2. For each earlier layer, ask: \"How much did you contribute to the next layer's error?\"\n3. Multiply: (next layer's weights)ᵀ × (next layer's error) × (your activation derivative)\n\n**The Math (in plain terms):**\n\n- Forward pass: z = Wa + b, then a = activation(z)\n- Backward pass: δ⁽ˡ⁾ = (Wᵀδ⁽ˡ⁺¹⁾) ⊙ σ'(z⁽ˡ⁾)\n- Gradients: ∂L/∂W = δ × aᵀ (outer product)\n- Update: W = W - learning_rate × gradient\n\n**What I learned:**\n\n- Weight initialization matters (He for ReLU, Xavier for tanh/sigmoid)\n- Learning rate too high = divergence, too low = slow training\n- Deeper networks = smaller gradients in early layers (vanishing gradient problem)\n\nThe notebook trains on a spiral dataset - one of the hardest 2D classification problems because it requires nonlinear boundaries. The network goes from random guessing to 100% accuracy.\n\n**View the full notebook with code:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/backpropagation_from_scratch.ipynb\n\n---",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/backpropagation_from_scratch.ipynb",
"category": "general",
"date": "2026-04-17",
"time": "09:00"
},
{
"source": "transfer_learning_fine_tuning",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "[OC] Implemented Transfer Learning from Scratch in NumPy - Comparing 4 Fine-Tuning Strategies",
"body": "I built a complete transfer learning simulation using only NumPy to understand what's really happening under the hood. No TensorFlow or PyTorch - just pure math.\n\n**The Setup:**\n- Source domain: 1000 labeled samples\n- Target domain: Only 100 samples (10x less data)\n- Task: 5-class classification with domain shift between source and target\n\n**What I Compared:**\n\n1. **Training from scratch** - No transfer, random initialization\n2. 
**Feature extraction** - Freeze the pre-trained backbone, only train a new classifier head\n3. **Full fine-tuning** - Update all layers with a smaller learning rate\n4. **Discriminative fine-tuning** - Use different learning rates per layer (smaller for early layers)\n\n**Key Findings:**\n\nAll transfer learning methods outperformed training from scratch. The intuition is that neural networks learn hierarchical representations:\n- Early layers capture general features (useful across domains)\n- Later layers capture task-specific features (need adaptation)\n\nDiscriminative fine-tuning uses the formula: η_l = η_base · γ^(L-l) where γ < 1, so earlier layers get smaller updates.\n\n**The Code:**\n\nThe notebook includes a from-scratch neural network implementation with:\n- Xavier initialization\n- ReLU activation\n- Softmax + cross-entropy loss\n- Mini-batch gradient descent\n- Support for freezing layers and layer-wise learning rates\n\n**View the full interactive notebook:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/transfer_learning_fine_tuning.ipynb\n\nWould love to hear your thoughts! Has anyone experimented with other fine-tuning strategies like gradual unfreezing?",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/transfer_learning_fine_tuning.ipynb",
"category": "general",
"date": "2026-04-18",
"time": "09:00"
},
{
"source": "eigenvalue_problems_power_method",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I implemented the Power Method for finding eigenvalues in Python—here's what I learned about convergence rates",
"body": "If you've taken linear algebra, you know eigenvalues are everywhere—from Google's PageRank to quantum mechanics. But how do you actually compute them?\n\n**The Problem**\n\nGiven a matrix A, find scalar λ and vector v such that Av = λv. 
Solving the characteristic polynomial works in theory but is numerically unstable for large matrices.\n\n**The Power Method**\n\nThis iterative approach is surprisingly simple:\n\n1. Start with random vector v\n2. Multiply: w = Av\n3. Normalize: v = w/‖w‖\n4. Estimate eigenvalue: λ = vᵀAv\n5. Repeat until convergence\n\n**What I Learned**\n\nThe convergence rate is |λ₂/λ₁|ᵏ, where λ₁ is the dominant eigenvalue and λ₂ is the second largest.\n\nI tested this with different ratios:\n- |λ₂/λ₁| = 0.1 → converges in ~10 iterations\n- |λ₂/λ₁| = 0.5 → ~20 iterations\n- |λ₂/λ₁| = 0.99 → 100+ iterations\n\nWhen eigenvalues are close together, convergence is painfully slow!\n\n**Limitations**\n- Only finds the dominant eigenvalue\n- Fails for complex eigenvalues\n- Slow when eigenvalues aren't well-separated\n\n**Extensions**\n\nFor other eigenvalues, use:\n- Inverse Power Method: apply to A⁻¹ for smallest eigenvalue\n- Shifted Inverse Iteration: find eigenvalue closest to any value σ\n\nThe notebook includes visualization of convergence behavior and eigenvector component evolution.\n\n**View the full interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/eigenvalue_problems_power_method.ipynb\n\nHappy to answer questions about the implementation!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/eigenvalue_problems_power_method.ipynb",
"category": "general",
"date": "2026-04-18",
"time": "09:00"
},
{
"source": "qr_decomposition",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "Implemented QR Decomposition from Scratch - Classical vs Modified Gram-Schmidt Comparison",
"body": "Hey everyone! 
I just finished a notebook exploring QR decomposition and wanted to share what I learned.\n\n**What is QR Decomposition?**\n\nIt breaks down a matrix A into two parts: A = QR\n- Q is an orthogonal matrix (its transpose equals its inverse: QᵀQ = I)\n- R is upper triangular\n\n**The Gram-Schmidt Process**\n\nThe classic way to compute QR is Gram-Schmidt orthogonalization. You take each column of A and subtract its projections onto previous orthonormal vectors, then normalize.\n\nBut here's the catch - there are TWO versions:\n\n1. **Classical GS**: Compute all projections from the original vector\n2. **Modified GS**: Update the vector after each projection subtraction\n\n**Why Does This Matter?**\n\nI tested both on Hilbert matrices (famous for being numerically awful). The results were eye-opening:\n\n- For a 12x12 Hilbert matrix with condition number ~10¹⁶\n- Classical GS: loss of orthogonality around 10⁰ (basically garbage)\n- Modified GS: loss around 10⁻⁵ (much better!)\n- NumPy (Householder): loss around 10⁻¹⁵ (machine precision)\n\n**Practical Application**\n\nQR decomposition is great for solving least squares problems. Instead of computing (AᵀA)⁻¹Aᵀb (which squares the condition number), you solve x = R⁻¹Qᵀb via back-substitution.\n\nI fitted a cubic polynomial to noisy data and got results matching NumPy's lstsq.\n\n**Key Takeaway**\n\nAlways use Modified Gram-Schmidt over Classical. 
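A minimal sketch of the modified Gram-Schmidt loop described above (the `mgs_qr` helper name is mine, not from the notebook):

```python
import numpy as np

def mgs_qr(A):
    """Modified Gram-Schmidt QR: subtract each projection from the
    working vector immediately, instead of projecting the original column."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    V = A.copy()
    for j in range(n):
        R[j, j] = np.linalg.norm(V[:, j])
        Q[:, j] = V[:, j] / R[j, j]
        for k in range(j + 1, n):
            R[j, k] = Q[:, j] @ V[:, k]   # coefficient against the *updated* vector
            V[:, k] -= R[j, k] * Q[:, j]  # remove that component right away
    return Q, R

A = np.random.default_rng(0).normal(size=(5, 3))
Q, R = mgs_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(3)))  # True True
```

The only difference from classical GS is that the inner loop updates V in place; that one change is what keeps the computed Q nearly orthogonal on ill-conditioned input.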
But for production, use library functions - they typically use Householder reflections which are numerically superior.\n\n**View the full notebook with code and visualizations:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/qr_decomposition.ipynb\n\n---",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/qr_decomposition.ipynb",
"category": "general",
"date": "2026-04-19",
"time": "09:00"
},
{
"source": "symmetric_polynomials",
"content_type": "notebook",
"subreddit": "SageMath",
"title": "Implementing Symmetric Polynomials in Python - Verifying Newton's Identities and Vieta's Formulas",
"body": "I created a Jupyter notebook exploring symmetric polynomials - polynomials that remain unchanged when you permute their variables.\n\n**What are symmetric polynomials?**\n\nA polynomial f(x₁, x₂, ..., xₙ) is symmetric if swapping any variables doesn't change it. For example, x+y+z or xy+xz+yz.\n\n**The elementary symmetric polynomials** are the building blocks:\n- e₀ = 1\n- e₁ = x₁ + x₂ + ... + xₙ (sum of variables)\n- e₂ = sum of all pairs multiplied together\n- eₙ = x₁·x₂·...·xₙ (product of all)\n\n**Why do they matter?**\n\n1. **Vieta's Formulas**: The coefficients of a polynomial ARE the elementary symmetric polynomials of its roots! If r₁, r₂, r₃ are roots of x³ + ax² + bx + c = 0, then:\n - a = -(r₁+r₂+r₃) = -e₁\n - b = r₁r₂+r₁r₃+r₂r₃ = e₂\n - c = -r₁r₂r₃ = -e₃\n\n2. **Fundamental Theorem**: Every symmetric polynomial can be written as a polynomial in e₁, e₂, ..., eₙ\n\n3. 
**Newton's Identities**: Connect elementary symmetric polynomials to power sums (x₁ᵏ + x₂ᵏ + ...)\n\nThe notebook includes Python implementations, numerical verification of all identities, and visualizations.\n\nView and run the notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/symmetric_polynomials_out.ipynb", "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/symmetric_polynomials_out.ipynb", "category": "general", "date": "2026-04-19", "time": "09:00"}, {"source": "coupled_oscillators", "content_type": "notebook", "subreddit": "CoCalc", "title": "I simulated coupled oscillators in Python and finally understood the beat phenomenon", "body": "Ever seen two pendulums connected by a spring \"take turns\" swinging? That's the beat phenomenon, and I finally grokked it by coding it myself.\n\n**The Setup**\n\nTwo identical pendulums connected by a weak spring. When you pull one back and release, something counterintuitive happens: the first pendulum gradually stops while the second one starts swinging. Then the energy flows back. It's like they're passing a ball back and forth.\n\n**The Math (ELI5)**\n\nThe trick is finding \"normal modes\"—special ways the system can move where both pendulums oscillate at a single frequency:\n\n- **Symmetric mode**: Both swing together (spring stays relaxed)\n- **Antisymmetric mode**: They swing opposite (spring gets stretched)\n\nAny motion is just these two modes superimposed. The beat comes from their frequency difference.\n\n**What I Learned**\n\n1. Normal mode analysis is incredibly powerful for simplifying coupled systems\n2. Energy conservation is maintained throughout (good sanity check!)\n3. The phase space plot creates beautiful Lissajous-like patterns\n\n**The Code**\n\nUsed numpy and scipy's odeint to solve the ODEs, matplotlib for visualization. 
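A rough stand-in for that setup (masses on springs instead of pendulums, `solve_ivp` rather than `odeint`; all parameter values here are assumed, not the notebook's):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two identical unit masses on springs of stiffness k0, coupled by a weak spring kc
m, k0, kc = 1.0, 1.0, 0.05   # weak coupling -> slow beats

def rhs(t, y):
    x1, v1, x2, v2 = y
    a1 = (-k0 * x1 + kc * (x2 - x1)) / m
    a2 = (-k0 * x2 + kc * (x1 - x2)) / m
    return [v1, a1, v2, a2]

# Pull the first oscillator back and release: this excites BOTH normal modes equally
sol = solve_ivp(rhs, (0, 200), [1.0, 0.0, 0.0, 0.0], rtol=1e-10, atol=1e-10)

# Normal-mode frequencies: symmetric sqrt(k0/m), antisymmetric sqrt((k0 + 2kc)/m)
w_sym, w_anti = np.sqrt(k0 / m), np.sqrt((k0 + 2 * kc) / m)

# Total energy (kinetic + both spring potentials) should stay constant
x1, v1, x2, v2 = sol.y
E = 0.5 * m * (v1**2 + v2**2) + 0.5 * k0 * (x1**2 + x2**2) + 0.5 * kc * (x2 - x1)**2
```

The beat period is 2π divided by the mode-frequency difference, so the weaker the coupling, the slower the energy sloshes between the two oscillators.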
The notebook shows displacement, energy transfer, phase space, and normal mode decomposition.\n\n**Interactive Notebook**\n\nYou can view and run the full notebook here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/coupled_oscillators.ipynb\n\nThis same physics appears in molecular vibrations, crystal lattices, and even quantum systems!\n\n---", "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/coupled_oscillators.ipynb", "category": "general", "date": "2026-04-20", "time": "09:00"}, {"source": "multigrid_method", "content_type": "notebook", "subreddit": "CoCalc", "title": "I implemented a multigrid solver in Python - here's why it's 50x faster than basic iteration", "body": "I've been learning numerical methods and just finished implementing a multigrid solver for the 2D Poisson equation. The performance difference compared to classical methods blew my mind, so I wanted to share.\n\n**The Problem**\n\nWhen you discretize an elliptic PDE (like the Poisson equation -∇²u = f) on a grid, you get a huge sparse linear system. Classical iterative methods like Gauss-Seidel work, but they're painfully slow for large grids.\n\nWhy? They're great at smoothing out high-frequency (oscillatory) errors, but terrible at eliminating low-frequency (smooth) errors. The damping factor for low modes is nearly 1, so those errors barely shrink each iteration.\n\n**The Multigrid Insight**\n\nHere's the brilliant idea: what looks like a smooth, low-frequency error on a fine grid appears as an oscillatory, high-frequency error on a coarser grid!\n\nSo the algorithm does this (V-cycle):\n1. Smooth the error on the fine grid (pre-smoothing)\n2. Compute the residual r = f - Au\n3. Restrict to a coarser grid\n4. Solve/smooth on the coarse grid (recursively)\n5. Interpolate the correction back to fine grid\n6. 
Smooth again (post-smoothing)\n\n**Results**\n\nOn a 64×64 grid:\n- Multigrid: ~10 iterations to converge\n- Gauss-Seidel: 500+ iterations and still not there\n\nThe best part? The convergence rate is independent of grid size. Whether you have 16×16 or 128×128 points, multigrid converges in roughly the same number of iterations. That's O(N) total complexity!\n\n**What I Learned**\n\n- Grid transfer operators matter: full-weighting restriction and bilinear interpolation preserve accuracy\n- Red-black Gauss-Seidel ordering is more efficient than lexicographic\n- The coarse grid doesn't need to be solved exactly - a few smoothing sweeps work fine\n\nExplore the interactive notebook with full implementation:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/multigrid_method.ipynb", "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/multigrid_method.ipynb", "category": "general", "date": "2026-04-20", "time": "09:00"}, {"source": "lu_matrix_factorization", "content_type": "notebook", "subreddit": "CoCalc", "title": "I implemented LU Matrix Factorization from scratch - here's why it's so powerful for solving linear systems", "body": "I put together a Jupyter notebook exploring LU matrix factorization - one of the fundamental algorithms in numerical linear algebra.\n\n**ELI5 Version:** Imagine you have a complicated math problem Ax = b. Instead of solving it directly (expensive!), you split matrix A into two simpler pieces: L (lower triangular - only has values on and below the diagonal) and U (upper triangular - only on and above). These triangular matrices are easy to solve through substitution.\n\n**What the notebook covers:**\n\n1. **Doolittle's Algorithm** - Implemented from scratch to compute L and U\n2. **Partial Pivoting** - PA = LU for numerical stability with tiny pivots\n3. 
**Forward & Backward Substitution** - How to solve Ly = b and Ux = y\n4. **Determinant Computation** - det(A) = ± product of diag(U), with the sign set by the number of row swaps\n5. **Matrix Inversion** - Solve Ax = eᵢ for each basis vector\n6. **Performance Analysis** - Compared against NumPy/SciPy\n\n**Key benchmark:** Solving 100 systems with a 200×200 matrix:\n- Direct solve each time: baseline\n- LU (factor once): significant speedup!\n\nThe O(n³) factorization is a one-time cost. Every subsequent solve is only O(n²).\n\n**View and run the notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/lu_matrix_factorization.ipynb\n\n---", "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/lu_matrix_factorization.ipynb", "category": "general", "date": "2026-04-21", "time": "09:00"}, {"source": "singular_value_decomposition_svd", "content_type": "notebook", "subreddit": "CoCalc", "title": "I built an interactive notebook exploring Singular Value Decomposition (SVD) - here's what I learned", "body": "SVD is one of those topics that sounds intimidating but is actually beautifully intuitive once you see it in action.\n\n**The core idea:** Any matrix A can be decomposed as A = UΣVᵀ, which breaks a linear transformation into three simple operations:\n1. Rotate/reflect the input (Vᵀ)\n2. Scale along coordinate axes (Σ)\n3. Rotate/reflect to output (U)\n\n**What surprised me:**\n\nThe singular values σᵢ decay incredibly fast. I created a noisy rank-5 matrix and found that just 5 singular values captured ~90% of the total \"energy.\" This is why SVD is so powerful for compression—you can throw away most components and barely lose information.\n\n**The Eckart-Young-Mirsky theorem** proves this rank-k approximation is mathematically optimal. No other method can do better (in Frobenius norm).\n\n**Practical demo:**\n\nI compressed a synthetic image using SVD. 
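The rank-k truncation itself is only a few lines (matrix sizes and noise level here are assumed for illustration, not taken from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
# Noisy rank-5 matrix: exact rank-5 part plus small Gaussian noise
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80)) \
    + 0.01 * rng.standard_normal((100, 80))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def rank_k(k):
    """Best rank-k approximation in Frobenius norm (Eckart-Young-Mirsky)."""
    return (U[:, :k] * s[:k]) @ Vt[:k]

# The truncation error equals the norm of the discarded singular-value tail
k = 5
err = np.linalg.norm(A - rank_k(k))
tail = np.sqrt(np.sum(s[k:] ** 2))
```

That `err == tail` identity is exactly why the singular-value spectrum tells you in advance how lossy a given compression level will be.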
Results:\n- Rank 1: 4% storage (barely recognizable)\n- Rank 5: 11% storage (main features visible)\n- Rank 10: 21% storage (good quality)\n- Rank 20: 41% storage (nearly indistinguishable)\n\nThe notebook includes visualizations of the singular value spectrum, cumulative energy curves, and approximation errors.\n\n**View the full notebook here:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/singular_value_decomposition_svd.ipynb\n\nHappy to answer questions about the implementation!", "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/singular_value_decomposition_svd.ipynb", "category": "general", "date": "2026-04-21", "time": "09:00"}, {"source": "geometric_brownian_motion", "content_type": "notebook", "subreddit": "CoCalc", "title": "Simulating Stock Prices with Geometric Brownian Motion in Python - Complete Tutorial", "body": "**What is GBM?**\n\nGeometric Brownian Motion is the mathematical model behind the famous Black-Scholes options pricing formula. It describes how stock prices evolve over time with two components:\n\n- **Drift (μ)**: The expected return rate (like 5% annual growth)\n- **Volatility (σ)**: How much prices fluctuate (like 20% annual volatility)\n\n**The Math (simplified)**\n\nThe price S at time t follows:\n\nS(t) = S₀ × exp[(μ - σ²/2)t + σW(t)]\n\nWhere W(t) is a Wiener process (random walk). The key insight is that log-returns are normally distributed, so prices are log-normally distributed.\n\n**What I Built**\n\nI simulated 1000 price paths over 252 trading days (1 year) using:\n1. Exact analytical solution\n2. 
Euler-Maruyama numerical method\n\n**Results**\n\n- Starting price: 100\n- Expected final price: 105.13 (from 100 × e^(0.05×1))\n- Simulated mean: ~105 (both methods)\n- Kolmogorov-Smirnov test confirms log-normality\n\n**Key Takeaway**\n\nGBM is elegant but limited - it assumes constant volatility, which real markets violate (see: 2008, 2020). Extensions like stochastic volatility models address this.\n\n**View the full notebook with code:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/geometric_brownian_motion.ipynb", "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/geometric_brownian_motion.ipynb", "category": "general", "date": "2026-04-22", "time": "09:00"}, {"source": "brownian_bridge", "content_type": "notebook", "subreddit": "CoCalc", "title": "Simulating Brownian Bridges in Python - When Random Walks Must Return Home", "body": "Ever heard of a Brownian bridge? It's one of those elegant mathematical objects that sounds fancy but has a simple intuition.\n\n**ELI5:** Imagine you're a drunk person doing a random walk, but you MUST end up exactly where you started. How does that constraint change your journey? That's the Brownian bridge.\n\n**The Math (in plain terms):**\n- Regular Brownian motion W(t) can wander anywhere\n- A Brownian bridge B(t) is conditioned to return to zero: B(0) = B(T) = 0\n- Formula: B(t) = W(t) - (t/T) × W(T)\n\n**Interesting Properties:**\n1. The mean is always zero\n2. Variance = t(T-t)/T - it's a parabola! Maximum uncertainty at the midpoint\n3. 
The process is Gaussian at every time point\n\n**What I Built:**\n- Simulated 10,000 sample paths\n- Verified boundary conditions (exact to machine precision)\n- Confirmed variance formula (<1% error)\n- Ran Shapiro-Wilk normality test (passed)\n\n**Why It Matters:**\n- Statistics: The Kolmogorov-Smirnov test uses Brownian bridge properties\n- Finance: Variance reduction in Monte Carlo pricing\n- Physics: Modeling constrained polymer chains\n\nThe code uses numpy for simulation and scipy for statistical tests. Each path is generated by first simulating standard Brownian motion, then applying the bridge transformation.\n\n**View the full interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/brownian_bridge.ipynb", "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/brownian_bridge.ipynb", "category": "general", "date": "2026-04-22", "time": "09:00"}, {"source": "sequence_alignment", "content_type": "notebook", "subreddit": "CoCalc", "title": "DNA Sequence Alignment from Scratch: Needleman-Wunsch and Smith-Waterman in Python", "body": "I built a notebook implementing the two fundamental algorithms of bioinformatics - Needleman-Wunsch (global alignment) and Smith-Waterman (local alignment).\n\n**What's covered:**\n\n1. **Scoring Systems** - Match rewards, mismatch penalties, and gap costs. How these parameters affect alignment quality.\n\n2. **Needleman-Wunsch Algorithm** - The classic dynamic programming approach for global alignment. Builds the scoring matrix, then traces back to find the optimal alignment.\n\n3. **Smith-Waterman Algorithm** - Modified for local alignment. Key difference: negative scores reset to 0, allowing the algorithm to find the best matching subsequences anywhere in the sequences.\n\n4. 
**Visualizations** - Heatmaps of the DP matrices showing how scores propagate, plus dot matrices for quick homology detection.\n\n5. **Real DNA Example** - Aligning original vs mutated sequences to calculate sequence identity and find conserved regions.\n\n**Why this matters:**\nThese algorithms are the foundation of BLAST and other tools that made the genomics revolution possible. Understanding how they work helps you interpret results from bioinformatics pipelines.\n\n**Complexity:** Both are O(mn) in time and space, where m and n are sequence lengths.\n\nTry it: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/sequence_alignment/sequence_alignment.ipynb", "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/sequence_alignment/sequence_alignment.ipynb", "category": "general", "date": "2026-04-23", "time": "09:00"}, {"source": "trapezoidal_rule_integration", "content_type": "notebook", "subreddit": "CoCalc", "title": "Understanding the Trapezoidal Rule - A Visual Guide to Numerical Integration", "body": "**What is the trapezoidal rule?**\n\nIt's a method to approximate definite integrals when you can't solve them analytically. Instead of finding the exact area under a curve, you approximate it using trapezoids.\n\n**How it works:**\n\nDivide your interval [a, b] into n equal parts. At each point, evaluate your function. Connect consecutive points with straight lines (forming trapezoids) and sum their areas.\n\nThe formula is:\nTₙ = h × [f(a)/2 + f(x₁) + f(x₂) + ... + f(xₙ₋₁) + f(b)/2]\n\nwhere h = (b-a)/n is the width of each trapezoid.\n\n**Why is it useful?**\n\nThe error is O(n⁻²), meaning if you double the number of trapezoids, your error decreases by a factor of 4. 
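That factor-of-4 claim is easy to check directly (a sketch; the test integrand and n values are arbitrary choices):

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

# Doubling n should cut the error by ~4 (O(n^-2) convergence)
exact = 2.0  # integral of sin over [0, pi]
err_n  = abs(trapezoid(np.sin, 0, np.pi, 64) - exact)
err_2n = abs(trapezoid(np.sin, 0, np.pi, 128) - exact)
ratio = err_n / err_2n   # approaches 4 for smooth integrands
```

Repeating this over a range of n and plotting on log-log axes gives the straight slope-(-2) lines described below in the results.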
With n=1024, I achieved errors below 10⁻⁸ for smooth functions.\n\n**What I tested:**\n- ∫₀¹ x² dx = 1/3 (polynomial)\n- ∫₀^π sin(x) dx = 2 (trigonometric)\n- ∫₀¹ e⁻ˣ² dx ≈ 0.7468 (Gaussian)\n- ∫₀¹ 1/(1+x²) dx = π/4 (rational)\n\nAll showed beautiful O(n⁻²) convergence on the log-log plots.\n\n**The code:**\n\nIt's just ~10 lines of NumPy. The built-in `np.trapezoid()` does the same thing. My implementation matched it to machine precision.\n\n**Limitations:**\n\nSimpson's rule has O(n⁻⁴) convergence for smooth functions, so it's more efficient. But the trapezoidal rule is simpler to understand and implement.\n\nView the full notebook with code and visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/trapezoidal_rule_integration.ipynb", "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/trapezoidal_rule_integration.ipynb", "category": "general", "date": "2026-04-23", "time": "09:00"}, {"source": "value_at_risk_simulation", "content_type": "notebook", "subreddit": "CoCalc", "title": "Built a Value at Risk (VaR) Monte Carlo Simulation in Python - Comparing 3 Risk Calculation Methods", "body": "I created a Jupyter notebook exploring Value at Risk (VaR), the standard risk metric used by banks and investment firms to measure potential portfolio losses.\n\n**What is VaR?**\n\nVaR answers: \"What's the maximum amount I could lose over one day, with 95% (or 99%) confidence?\"\n\nFor example, if your 95% daily VaR is 19,000, that means on 95% of days, your losses won't exceed 19K. But on that remaining 5% of days... it could be worse.\n\n**The Three Methods I Implemented:**\n\n1. **Parametric VaR** - Assumes returns follow a normal (Gaussian) distribution. Simple formula: VaR = -(μ + z × σ), where z is the standard normal quantile.\n\n2. **Historical Simulation** - No assumptions! 
Just take the 5th percentile (for 95% VaR) of your actual historical returns.\n\n3. **Monte Carlo Simulation** - Generate thousands of random scenarios based on estimated mean and volatility, then find the percentile.\n\n**Results for a 1M Portfolio (20% annual volatility):**\n\n| Confidence | Parametric | Historical | Monte Carlo |\n|------------|------------|------------|-------------|\n| 95% | 19,214 | 19,041 | ~19,200 |\n| 99% | 28,073 | 29,118 | ~28,500 |\n\n**The Big Insight: VaR's Limitation**\n\nVaR only tells you the threshold - not how bad things get beyond it. That's why I also calculated Expected Shortfall (ES), which averages the losses in the worst 5% (or 1%) of cases.\n\nAt 99% confidence, ES is about 15% higher than VaR, showing there's significant tail risk VaR doesn't capture. This is why regulators moved to ES after 2008.\n\n**Tech Stack:** NumPy, SciPy, Pandas, Matplotlib\n\n**View the full interactive notebook here:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/value_at_risk_simulation.ipynb\n\nHappy to answer questions about the implementation or the math!", "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/value_at_risk_simulation.ipynb", "category": "general", "date": "2026-04-24", "time": "09:00"}, {"source": "duration_and_convexity", "content_type": "notebook", "subreddit": "CoCalc", "title": "Built a Python Bond Analytics Tool - Duration, Convexity, and Immunization", "body": "**What I Built**\n\nA comprehensive Jupyter notebook implementing duration and convexity calculations for fixed-income bonds. These are fundamental risk measures that tell you how sensitive a bond's price is to changes in interest rates.\n\n**ELI5: What are Duration and Convexity?**\n\nImagine you're holding a bond. 
If interest rates go up, your bond becomes less attractive (why hold a 5% bond when new bonds pay 6%?), so its price drops. But by how much?\n\n- **Duration** answers: \"For each 1% increase in yield, my bond price drops by approximately X%\"\n- **Convexity** refines this: \"Actually, the relationship is curved, not linear, so here's a correction factor\"\n\nThink of duration as the slope of a curve, and convexity as how much that curve bends.\n\n**Key Formulas (in plain terms)**\n\n- Bond Price = Sum of all discounted cash flows\n- Macaulay Duration = Weighted average time to receive cash flows\n- Modified Duration = Macaulay Duration / (1 + yield per period)\n- Price Change ≈ -Modified Duration × Yield Change + ½ × Convexity × (Yield Change)²\n\n**What the Code Does**\n\n1. `Bond` class with methods for price, yield_to_maturity, duration, and convexity\n2. Comparison of duration-only vs duration+convexity approximations\n3. Visualization of price-yield curves and approximation errors\n4. Portfolio duration/convexity as weighted averages\n5. Duration immunization strategy example\n\n**Example Output**\n\nFor a 1000 face value, 6% coupon, 10-year bond at 5% yield:\n- Price: 1,077.95\n- Macaulay Duration: 7.99 years\n- Modified Duration: 7.79\n- Convexity: 73.34\n\n**Key Takeaway**\n\nDuration alone underestimates gains and overestimates losses. 
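The formulas above fit in a few lines; this sketch uses annual compounding, so its figures will differ slightly from the example output above (which appears to use a different convention):

```python
import numpy as np

def bond_metrics(face, coupon_rate, years, y):
    """Price, modified duration, convexity for an annual-pay bond (assumed convention)."""
    t = np.arange(1, years + 1)
    cf = np.full(years, face * coupon_rate)
    cf[-1] += face                          # final coupon plus principal
    disc = (1 + y) ** -t.astype(float)
    price = np.sum(cf * disc)
    macaulay = np.sum(t * cf * disc) / price            # weighted average time
    mod_dur = macaulay / (1 + y)
    convexity = np.sum(t * (t + 1) * cf * disc) / (price * (1 + y) ** 2)
    return price, mod_dur, convexity

P, D, C = bond_metrics(1000, 0.06, 10, 0.05)
dy = 0.01
actual = bond_metrics(1000, 0.06, 10, 0.05 + dy)[0] - P
dur_only = -D * P * dy                       # linear (duration-only) estimate
with_conv = dur_only + 0.5 * C * P * dy**2   # add the curvature correction
```

Comparing `dur_only`, `with_conv`, and `actual` for a few shifts makes the "duration overestimates losses, convexity corrects it" point concrete.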
Convexity captures why bonds with higher convexity are more valuable - they gain more when rates fall than they lose when rates rise.\n\n**View the Interactive Notebook**\n\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/duration_and_convexity.ipynb\n\nHappy to answer questions about the implementation!", "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/duration_and_convexity.ipynb", "category": "general", "date": "2026-04-24", "time": "09:00"}, {"source": "elliptic_curves_cryptography", "content_type": "notebook", "subreddit": "SageMath", "title": "I implemented Elliptic Curve Cryptography from scratch in Python - here's what I learned about why Bitcoin uses 256-bit keys", "body": "**TL;DR:** Built a complete ECC implementation including point addition, scalar multiplication, and Diffie-Hellman key exchange. The math is beautiful and the security comes from a simple geometric operation being computationally irreversible.\n\n**What are Elliptic Curves?**\n\nAn elliptic curve is defined by: y² = x³ + ax + b\n\nNothing to do with ellipses! The name comes from elliptic integrals. The cool part: points on this curve form a mathematical group.\n\n**The Group Operation**\n\nTo \"add\" two points P and Q:\n1. Draw a line through them\n2. Find where it intersects the curve (always a third point)\n3. Reflect that point over the x-axis\n\nThat's it. 
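Translated into modular arithmetic over F_p, the chord-and-tangent rule looks like this (the curve, prime, and base point below are toy values for illustration only, not secure parameters and not the notebook's code):

```python
def ec_add(P, Q, a, p):
    """Add points on y^2 = x^3 + ax + b over F_p (None = point at infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                       # P + (-P) = infinity
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p           # chord slope
    x3 = (m * m - x1 - x2) % p
    return x3, (m * (x1 - x3) - y1) % p                   # reflect over x-axis

def scalar_mult(k, P, a, p):
    """Double-and-add: kP in O(log k) group operations."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

# Toy demo: y^2 = x^3 + 2x + 3 over F_97, with base point (3, 6)
a, p = 2, 97
P0 = (3, 6)
Q2 = scalar_mult(2, P0, a, p)
```

The modular inverse via `pow(x, -1, p)` requires Python 3.8+; real implementations also use projective coordinates and constant-time ladders, which this sketch deliberately skips.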
This geometric operation, when done in a finite field (mod p), creates a one-way function.\n\n**Why It's Secure**\n\nGiven points P and Q where Q = kP (meaning you added P to itself k times), finding k is the Elliptic Curve Discrete Logarithm Problem (ECDLP).\n\nBest known attack: O(√n) - meaning 256-bit keys give 128-bit security.\n\n**Key Size Comparison:**\n- 256-bit ECC = 3072-bit RSA\n- 384-bit ECC = 7680-bit RSA\n\nSame security, much smaller keys = faster operations and less bandwidth.\n\n**What I Built:**\n- Point addition and doubling over reals and finite fields\n- Scalar multiplication using double-and-add\n- Full ECDH key exchange demonstration\n- Visualization of curve geometry\n\n**Explore the full notebook with interactive code:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/elliptic_curves_cryptography.ipynb\n\nHappy to answer questions about the implementation!", "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/elliptic_curves_cryptography.ipynb", "category": "cryptography", "date": "2026-04-25", "time": "09:00"}, {"source": "qr_algorithm_eigenvalues", "content_type": "notebook", "subreddit": "CoCalc", "title": "I implemented the QR Algorithm for eigenvalues from scratch - here's what I learned about this foundational numerical method", "body": "Just built an implementation of the QR algorithm, which is the backbone of how NumPy/LAPACK actually compute eigenvalues. Wanted to share what I learned!\n\n**The Core Idea (ELI5)**\n\nFinding eigenvalues means solving det(A - λI) = 0, but that polynomial approach is numerically unstable. Instead, the QR algorithm iteratively transforms the matrix:\n\n1. Decompose: A = QR (Q is orthogonal, R is upper triangular)\n2. Reverse multiply: A_new = RQ\n3. 
Repeat until A becomes triangular\n\nThe eigenvalues magically appear on the diagonal!\n\n**Why It Works**\n\nEach iteration is a similarity transformation: Aₖ₊₁ = RₖQₖ = QₖᵀAₖQₖ. Similar matrices have identical eigenvalues, but the off-diagonal elements shrink toward zero.\n\n**The Speed Hack: Shifts**\n\nBasic QR converges linearly (slow). By subtracting a \"shift\" μ before decomposition:\n- A - μI = QR\n- A_new = RQ + μI\n\nUsing the Rayleigh quotient shift (μ = bottom-right element) gives CUBIC convergence for symmetric matrices. My tests showed ~100 iterations → ~10 iterations.\n\n**Key Takeaway**\n\nThis algorithm from 1961 is still fundamental today. Modern implementations add Hessenberg preprocessing and implicit QR steps, but the core idea remains the same.\n\n**Interactive Notebook:** View and run the full implementation with visualizations here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/qr_algorithm_eigenvalues.ipynb", "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/qr_algorithm_eigenvalues.ipynb", "category": "general", "date": "2026-04-25", "time": "09:00"}, {"source": "birthday_paradox", "content_type": "notebook", "subreddit": "CoCalc", "title": "I built a Birthday Paradox simulator in Python - here's why only 23 people are needed for a 50% collision probability", "body": "I've always found the Birthday Paradox counterintuitive, so I decided to implement it properly and really understand why it works.\n\n**The Problem:** In a group of n people, what's the probability at least two share a birthday?\n\n**The Counterintuitive Answer:** Only 23 people needed for >50% probability!\n\n**Why Our Intuition Fails:**\nWe think about the chance someone shares OUR birthday (1/365). But we should think about ALL pairs. 
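The pairwise intuition is easy to sanity-check numerically (a sketch; the trial count is an arbitrary choice):

```python
import numpy as np

def p_match_exact(n, days=365):
    """P(at least one shared birthday) via the complement product."""
    k = np.arange(n)
    return 1.0 - np.prod((days - k) / days)

def p_match_simulated(n, trials=20_000, days=365, seed=0):
    """Monte Carlo estimate: fraction of random groups with a collision."""
    rng = np.random.default_rng(seed)
    bdays = rng.integers(0, days, size=(trials, n))
    # a group has a collision iff it has fewer unique birthdays than people
    hits = sum(len(np.unique(row)) < n for row in bdays)
    return hits / trials

exact = p_match_exact(23)                 # ≈ 0.507
approx = 1 - np.exp(-23 * 22 / 730)       # exponential approximation
```

Both the simulation and the exponential shortcut land within a couple of percentage points of the exact product.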
With 23 people, there are C(23,2) = 253 unique pairs to compare.\n\n**The Math:**\nThe probability all n people have different birthdays:\nP_different = (365/365) × (364/365) × (363/365) × ... × (365-n+1)/365\n\nSo probability of at least one match:\nP(n) = 1 - ∏(365-k)/365 for k from 0 to n-1\n\n**My Implementation:**\n- Exact analytical calculation\n- Exponential approximation: P(n) ≈ 1 - e^(-n(n-1)/730)\n- Monte Carlo simulation (10,000 trials per group size)\n\n**Key Results:**\n- P(10) = 11.7%\n- P(23) = 50.7%\n- P(50) = 97.0%\n- P(70) = 99.9%\n\nThe approximation has <1.5% error - surprisingly accurate!\n\n**Real-World Applications:**\n- Hash table collision analysis\n- Cryptographic birthday attacks (O(√N) instead of O(N))\n- Any system where you need to estimate collision probability\n\n**View the full notebook with interactive code:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/birthday_paradox.ipynb\n\nHappy to answer questions about the implementation!", "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/birthday_paradox.ipynb", "category": "general", "date": "2026-04-26", "time": "09:00"}, {"source": "galois_field_arithmetic", "content_type": "notebook", "subreddit": "CoCalc", "title": "I built a Galois Field arithmetic implementation in Python - here's what I learned about the math behind AES and error correction", "body": "I've been curious about how encryption and error-correcting codes actually work at the mathematical level, so I built a Python implementation of Galois field (finite field) arithmetic from scratch.\n\n**ELI5 version:** A Galois field is like doing math with a limited set of numbers where you can still add, subtract, multiply, and divide (except by zero). The cool part? 
They only exist when the number of elements is a prime or prime power.\n\n**What the notebook covers:**\n\n- **Prime fields GF(p):** Regular integers mod a prime. Uses the Extended Euclidean Algorithm to find multiplicative inverses\n- **Extension fields GF(2^n):** Binary polynomials! Addition becomes XOR, which is why computers love these\n- **Primitive elements:** Special elements whose powers generate every non-zero element in the field\n- **Cayley tables:** Visual representations showing how addition and multiplication work\n\n**Why this matters:**\n\n- AES encryption uses GF(2^8) with the irreducible polynomial x^8 + x^4 + x^3 + x + 1\n- QR codes use Reed-Solomon error correction over GF(2^8)\n- Elliptic curve crypto uses curves over these fields\n\nThe implementation includes both GF(7) as a prime field example and GF(2^4) = GF(16) for extension fields. I also demonstrated simple error detection by computing syndromes.\n\n**Key mathematical insight:** For any finite field of order q, the multiplicative group (all non-zero elements) is cyclic of order q-1. This means there's always a \"generator\" element whose powers give you every other non-zero element.\n\nCheck out the interactive notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/galois_field_arithmetic.ipynb\n\nHappy to answer questions about the implementation or the math!", "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/galois_field_arithmetic.ipynb", "category": "general", "date": "2026-04-26", "time": "09:00"}, {"source": "chua_circuit", "content_type": "notebook", "subreddit": "CoCalc", "title": "Simulating Chaos: The Chua Circuit Double Scroll Attractor in Python", "body": "**What is the Chua Circuit?**\n\nThe Chua circuit, designed in 1983, is the simplest electronic circuit that exhibits chaotic behavior. 
It uses just 5 components: 2 capacitors, 1 inductor, 1 resistor, and a special nonlinear element called Chua's diode.\n\n**Why does it matter?**\n\nIt's become the \"fruit fly\" of chaos research - simple enough to study, yet rich enough to show all the hallmarks of chaotic systems:\n- Strange attractors\n- Sensitive dependence on initial conditions\n- Period-doubling route to chaos\n\n**The Math (simplified)**\n\nThe system is described by three coupled differential equations. In dimensionless form:\n\ndx/dτ = α[y - x - f(x)]\ndy/dτ = x - y + z\ndz/dτ = -βy\n\nThe nonlinearity f(x) is a piecewise-linear function that acts like a negative resistance in certain voltage ranges.\n\n**What I learned:**\n\n1. **Sensitivity is real** - Changing initial x from 0.7 to 0.7001 (0.01% difference) leads to trajectories that diverge by 1000x after just τ = 50\n\n2. **Bifurcation diagrams are beautiful** - As you vary α from 8 to 16, you can see the system go from periodic orbits → period doubling → full chaos\n\n3. **Lyapunov exponents quantify chaos** - A positive largest Lyapunov exponent (≈ 0.3 for standard parameters) mathematically confirms exponential divergence\n\n**Code highlights:**\n\nUsed `scipy.integrate.solve_ivp` with RK45 method and tight tolerances (rtol=10⁻⁸) to accurately capture the chaotic dynamics. 
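A minimal `solve_ivp` setup for the dimensionless system looks like this (the parameter values below are the commonly cited textbook set for the double scroll, assumed rather than taken from the notebook):

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 9.0, 14.286          # classic double-scroll parameters (assumed)
m0, m1 = -8.0 / 7.0, -5.0 / 7.0    # slopes of the piecewise-linear Chua diode

def chua_diode(x):
    """Piecewise-linear nonlinearity in vectorized |x+1| - |x-1| form."""
    return m1 * x + 0.5 * (m0 - m1) * (np.abs(x + 1) - np.abs(x - 1))

def rhs(t, s):
    x, y, z = s
    return [alpha * (y - x - chua_diode(x)), x - y + z, -beta * y]

sol = solve_ivp(rhs, (0, 100), [0.7, 0.0, 0.0], rtol=1e-8, atol=1e-10)
```

Plotting `sol.y[0]` against `sol.y[2]` shows the trajectory jumping between the two scrolls; re-running with a slightly perturbed initial `x` illustrates the sensitivity described above.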
The piecewise Chua diode function uses absolute values for a vectorized implementation.\n\n**View the full notebook with interactive code:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/chua_circuit.ipynb", "link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/chua_circuit.ipynb", "category": "general", "date": "2026-04-27", "time": "09:00"}, {"source": "sierpinski_triangle_fractal", "content_type": "notebook", "subreddit": "CoCalc", "title": "I built the Sierpinski Triangle 3 different ways in Python - here's what I learned about fractals", "body": "Hey everyone! I just finished a deep dive into the Sierpinski Triangle and wanted to share what I learned.\n\n**What is it?**\n\nThe Sierpinski Triangle is a fractal - a shape that looks the same at every scale. Imagine an equilateral triangle where you keep removing the middle piece, forever. What's left has some mind-bending properties:\n\n- **Zero area** (you keep removing pieces until nothing's left)\n- **Infinite perimeter** (the boundary gets longer with each iteration)\n- **Fractal dimension ≈ 1.585** (it's \"rougher\" than a line but \"thinner\" than a plane)\n\n**Three Ways to Build It:**\n\n1. **Recursive Method** - Literally implement the definition. Start with a triangle, find midpoints, remove the center, repeat. At depth 6, you get 3⁶ = 729 tiny triangles.\n\n2. **Chaos Game** - This one blew my mind. Pick a random starting point. Then randomly choose one of the three vertices and move HALFWAY there. Plot the point. Repeat 50,000 times. Somehow, pure randomness creates perfect fractal structure!\n\n3. **Pascal's Triangle Mod 2** - Color the odd numbers in Pascal's Triangle. You get the Sierpinski pattern! 
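The chaos game (method 2) plus a crude box-count fits in a dozen lines (a sketch; grid scales and point count are arbitrary choices, and box-counting on a finite sample only approximates the true dimension):

```python
import numpy as np

rng = np.random.default_rng(42)
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])  # triangle vertices

pts = np.empty((50_000, 2))
p = np.array([0.25, 0.25])             # arbitrary starting point
for i in range(len(pts)):
    p = (p + V[rng.integers(3)]) / 2   # jump halfway toward a random vertex
    pts[i] = p

def box_count(pts, eps):
    """Number of eps-sized grid boxes containing at least one point."""
    return len({tuple(b) for b in np.floor(pts / eps).astype(int)})

# Dimension estimate from two scales: D = log(N2/N1) / log(eps1/eps2)
n1, n2 = box_count(pts, 1 / 16), box_count(pts, 1 / 32)
D = np.log(n2 / n1) / np.log(2)        # roughly log(3)/log(2) ≈ 1.585
```

A least-squares fit over several scales, as in the notebook's box-counting approach, gives a tighter estimate than this two-point version.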
This connects combinatorics to fractal geometry.\n\n**The Cool Part:**\n\nI verified the fractal dimension using box-counting (count how many boxes of size ε contain points, see how that scales). Got D ≈ 1.585, matching the theoretical value of log(3)/log(2).\n\n**View the full notebook with code and visualizations:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/sierpinski_triangle_fractal.ipynb\n\nLibraries used: NumPy, Matplotlib\n\nHappy to answer questions about the implementation!",2863"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/sierpinski_triangle_fractal.ipynb",2864"category": "general",2865"date": "2026-04-27",2866"time": "09:00"2867},2868{2869"source": "cholesky_decomposition",2870"content_type": "notebook",2871"subreddit": "CoCalc",2872"title": "I implemented Cholesky decomposition from scratch - here's what I learned about efficient matrix factorization",2873"body": "Hey everyone! I just finished implementing the Cholesky decomposition algorithm and wanted to share what I learned.\n\n**What is Cholesky Decomposition?**\n\nIt's a way to factor a symmetric positive-definite matrix A into A = LLᵀ, where L is a lower triangular matrix. Think of it like the \"square root\" of a matrix.\n\n**Why should you care?**\n\n1. **Speed**: It requires n³/3 floating-point operations vs 2n³/3 for LU decomposition - literally twice as fast\n2. **Stability**: For positive-definite matrices, it's numerically well-behaved\n3. **Applications**: Solving linear systems, Gaussian processes in ML, generating correlated random samples\n\n**The Algorithm (ELI5)**\n\nFor each column j:\n- Compute the diagonal: Lⱼⱼ = √(Aⱼⱼ - sum of squares of previous elements in row j)\n- Compute below diagonal: subtract dot products and divide by diagonal\n\n**Cool Finding**\n\nI benchmarked it against LU decomposition for matrices from 50×50 to 1000×1000. 
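For reference, the column-by-column update described above fits in about a dozen lines (a sketch of the idea, not the notebook's exact code):

```python
import numpy as np

def cholesky(A):
    """Factor symmetric positive-definite A as L @ L.T with L lower triangular."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        d = A[j, j] - L[j, :j] @ L[j, :j]  # diagonal: subtract squares in row j
        if d <= 0:
            raise np.linalg.LinAlgError("matrix is not positive definite")
        L[j, j] = np.sqrt(d)
        # below the diagonal: subtract dot products, divide by the diagonal
        L[j+1:, j] = (A[j+1:, j] - L[j+1:, :j] @ L[j, :j]) / L[j, j]
    return L

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
A = M @ M.T + 6 * np.eye(6)  # symmetric positive definite by construction
L = cholesky(A)
```

The result can be checked directly against np.linalg.cholesky.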
Cholesky was consistently 1.5-2× faster!\n\n**Practical Application**\n\nThe notebook includes generating correlated random variables. If you have a covariance matrix Σ and want samples from N(μ, Σ), just compute L from Σ = LLᵀ, then transform standard normal samples: x = μ + Lz\n\nThe condition number relationship κ(L)² ≈ κ(A) means solving via Cholesky is as stable as the problem allows.\n\nCheck out the full interactive notebook with code and visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/cholesky_decomposition.ipynb\n\nHappy to answer questions!",2874"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/cholesky_decomposition.ipynb",2875"category": "general",2876"date": "2026-04-28",2877"time": "09:00"2878},2879{2880"source": "schwarzschild_metric",2881"content_type": "notebook",2882"subreddit": "CoCalc",2883"title": "Visualizing the Schwarzschild Metric: Black Hole Spacetime in Python",2884"body": "I created a Jupyter notebook exploring the Schwarzschild metric—Einstein's exact solution for spacetime around a non-rotating black hole.\n\n**What is it?**\n\nThe metric describes how distances and time intervals are measured near a massive object. The line element is:\n\nds² = -(1 - rₛ/r)c²dt² + (1 - rₛ/r)⁻¹dr² + r²dΩ²\n\nwhere rₛ = 2GM/c² is the Schwarzschild radius.\n\n**Key visualizations:**\n\n1. **Metric components** - Shows how gₜₜ → 0 and gᵣᵣ → ∞ at the event horizon\n2. **Time dilation** - At r = 1.5rₛ, clocks run at 58% speed compared to infinity\n3. **Flamm's paraboloid** - An embedding diagram showing spatial curvature as a funnel\n4. 
**Effective potential** - Reveals the ISCO at 3rₛ and the photon sphere at 1.5rₛ\n\n**Why it matters:**\n\n• GPS satellites correct for gravitational time dilation daily\n• Matter falling to the ISCO releases 5.7% of its rest mass as energy (powers quasars!)\n• The photon sphere explains gravitational lensing around black holes\n\n**View the interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/schwarzschild_metric.ipynb\n\nAll code uses numpy and matplotlib—fully self-contained for anyone to run and modify.","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/schwarzschild_metric.ipynb","category": "general","date": "2026-04-28","time": "09:00"},{"source": "lstm_cell_implementation","content_type": "notebook","subreddit": "CoCalc","title": "Built an LSTM Cell from Scratch - Here's How the Math Works","body": "I implemented a Long Short-Term Memory (LSTM) cell using only NumPy to understand how these networks actually work under the hood. Here's what I learned:\n\n**ELI5 Version:**\nImagine you're reading a book and need to remember important plot points while forgetting irrelevant details. LSTMs do exactly this with three \"gates\":\n\n1. **Forget Gate** - Like deciding \"I don't need to remember what the character ate for breakfast\"\n2. **Input Gate** - \"This new information about the villain is important, let me store it\"\n3. 
**Output Gate** - \"When asked about the story, here's what I'll share\"\n\n**The Key Equations (in Unicode):**\n\n- Forget: fₜ = σ(Wf·[hₜ₋₁, xₜ] + bf)\n- Input: iₜ = σ(Wi·[hₜ₋₁, xₜ] + bi)\n- Candidate: c̃ₜ = tanh(Wc·[hₜ₋₁, xₜ] + bc)\n- Cell update: cₜ = fₜ⊙cₜ₋₁ + iₜ⊙c̃ₜ\n- Output gate: oₜ = σ(Wo·[hₜ₋₁, xₜ] + bo)\n- Hidden state: hₜ = oₜ⊙tanh(cₜ)\n\nWhere σ is sigmoid, ⊙ is element-wise multiplication.\n\n**Cool Findings:**\n\n- With input dim=3 and hidden dim=8, we get 4×8×(8+3+1) = 384 parameters\n- Initializing forget gate bias to 1 (not 0) helps gradient flow\n- Cell states can grow unbounded; hidden states are bounded by tanh\n- Gate activations show beautiful patterns when processing sinusoidal inputs\n\nThe visualization shows how different hidden units specialize in tracking different aspects of the input signal.\n\n**Full interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/lstm_cell_implementation.ipynb","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/lstm_cell_implementation.ipynb","category": "general","date": "2026-04-29","time": "09:00"},{"source": "adams_bashforth_method","content_type": "notebook","subreddit": "CoCalc","title": "Implemented Adams-Bashforth Methods for Solving ODEs - Here's How They Work","body": "I created a Jupyter notebook implementing Adams-Bashforth methods (orders 1-4) and wanted to share what I learned.\n\n**What are Adams-Bashforth methods?**\n\nThey're \"multistep\" methods for solving ordinary differential equations like dy/dt = f(t,y). Unlike Runge-Kutta, which evaluates f multiple times per step, Adams-Bashforth methods are clever: they reuse function values from previous time steps.\n\n**The core idea:**\n\nTo get yₙ₊₁, integrate f from tₙ to tₙ₊₁. But instead of evaluating f at multiple points, fit a polynomial through the known values fₙ, fₙ₋₁, fₙ₋₂, etc., and integrate that polynomial.\n\n**The formulas:**\n\n- AB1 (Forward Euler): yₙ₊₁ = yₙ + h*fₙ\n- AB2: yₙ₊₁ = yₙ + (h/2)*(3*fₙ - fₙ₋₁)\n- AB3: yₙ₊₁ = yₙ + (h/12)*(23*fₙ - 16*fₙ₋₁ + 5*fₙ₋₂)\n- AB4: yₙ₊₁ = yₙ + (h/24)*(55*fₙ - 59*fₙ₋₁ + 37*fₙ₋₂ - 9*fₙ₋₃)\n\n**What I learned:**\n\n1. Higher orders = much better accuracy for the same step size\n2. Only ONE function evaluation per step (after startup)\n3. You need a \"starter\" method (I used RK4) to get the first few values\n4. Convergence study confirms O(h^k) error for k-th order method\n\nThe local truncation error follows predictable bounds:\n- AB1: O(h²)\n- AB2: O(h³)\n- AB3: O(h⁴)\n- AB4: O(h⁵)\n\n**Limitations:**\n\n- Smaller stability regions than implicit methods\n- Not great for stiff problems\n- Startup phase adds some complexity\n\nI tested on exponential decay (dy/dt = -2y, exact solution: e^(-2t)) and the convergence plots show each method hitting its theoretical accuracy.\n\n**View the full notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/adams_bashforth_method.ipynb","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/adams_bashforth_method.ipynb","category": "general","date": "2026-04-29","time": "09:00"},{"source": "electromagnetic_waves","content_type": "notebook","subreddit": "CoCalc","title": "Visualizing Electromagnetic Waves with Python - From Maxwell's Equations to the Poynting Vector","body": "I created a Jupyter notebook that visualizes electromagnetic wave propagation, and wanted to share some insights!\n\n**The Physics:**\n\nMaxwell's equations in free space lead to the wave equation:\n\n∇²E = μ₀ε₀(∂²E/∂t²)\n\nThis tells us electromagnetic waves travel at c = 1/√(μ₀ε₀) ≈ 3×10⁸ m/s - the speed of light emerges naturally from the 
permittivity and permeability of free space!\n\n**Key Findings:**\n\n1. E and B fields oscillate **in phase** - they reach maxima/minima simultaneously\n2. The fields are mutually perpendicular (E ⊥ B ⊥ k)\n3. The amplitude ratio E₀/B₀ = c always\n4. Energy is equally distributed: u_E = u_B at every instant, each averaging ¼ε₀E₀² over a cycle\n\n**The Poynting Vector:**\n\nEnergy flow is described by S = (1/μ₀)E×B\n\nThe time-averaged intensity works out to ⟨S⟩ = ½ε₀cE₀²\n\n**What I Learned:**\n\nThe simulation really drove home how the same mathematical framework describes everything from radio waves (λ > 1m) to gamma rays (λ < 0.01nm). Only the wavelength/frequency changes - the fundamental physics is identical.\n\n**View the notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/electromagnetic_waves.ipynb\n\nThe notebook includes 3D visualization of the perpendicular E and B fields, time evolution plots, and energy flux analysis. All code is self-contained with numpy and matplotlib.\n\n---","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/electromagnetic_waves.ipynb","category": "general","date": "2026-04-30","time": "09:00"},{"source": "expectation_maximization","content_type": "notebook","subreddit": "CoCalc","title": "Implemented Expectation Maximization for Gaussian Mixture Models from scratch - here's how it works","body": "I just finished implementing the EM algorithm in Python and wanted to share what I learned!\n\n**The Problem**\n\nYou have data points but no labels. You suspect they come from K different Gaussian distributions (clusters), but you don't know which point belongs to which cluster. 
How do you estimate the cluster parameters?\n\n**Why It's Hard**\n\nDirect maximum likelihood estimation fails because the log-likelihood has a sum over all possible cluster assignments inside the log:\n\nL(θ) = log Σ p(X, Z | θ)\n\nThis is intractable to optimize directly.\n\n**The EM Solution (ELI5)**\n\nEM uses a clever two-step dance:\n\n1. **E-step**: Pretend you know the parameters. Calculate the probability each point belongs to each cluster (these are called \"responsibilities\")\n\n2. **M-step**: Pretend you know the cluster assignments. Update the parameters using weighted averages\n\nRepeat until convergence. The magic is that each iteration is guaranteed to improve (or maintain) the log-likelihood!\n\n**What I Implemented**\n\n- Generated 500 points from 3 Gaussians with known parameters\n- Ran EM to recover the parameters from unlabeled data\n- Visualized the clustering and convergence\n\n**Key Takeaways**\n\n- EM converges to a local maximum (not global) - use multiple random restarts\n- Initialization matters - k-means++ style init helps\n- Need to regularize covariances to prevent singularities\n\nThe full notebook with code and math derivations is available here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/expectation_maximization.ipynb\n\nHappy to answer questions about the implementation!",2929"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/expectation_maximization.ipynb",2930"category": "general",2931"date": "2026-04-30",2932"time": "09:00"2933},2934{2935"source": "logistic_regression_classification",2936"content_type": "notebook",2937"subreddit": "CoCalc",2938"title": "I built Logistic Regression from scratch in Python - here's what I learned",2939"body": "I just finished implementing logistic regression classification without using sklearn, and wanted to share what I learned!\n\n**The Core Idea (ELI5):**\nImagine you want to predict yes/no 
outcomes (spam/not spam, pass/fail). Logistic regression takes your input features, combines them linearly (like regular regression), then squishes the result through a sigmoid function to get a probability between 0 and 1.\n\n**The Math (simplified):**\n- Sigmoid function: σ(z) = 1/(1+e⁻ᶻ)\n- Prediction: P(y=1|x) = σ(wᵀx + b)\n- We minimize cross-entropy loss using gradient descent\n\n**What I implemented:**\n- `_sigmoid()` - the activation function\n- `_compute_loss()` - binary cross-entropy\n- `fit()` - gradient descent training\n- `predict_proba()` and `predict()` - inference\n\n**Key insights:**\n1. The decision boundary is linear (wᵀx + b = 0)\n2. Weights are interpretable - each wⱼ shows how feature j affects log-odds\n3. Unlike SVMs, you get calibrated probabilities, not just class labels\n4. The loss is convex, so gradient descent finds the global minimum\n\n**Results:**\n- 97% test accuracy\n- Clean separation of classes\n- Smooth probability contours\n\nThe visualization shows decision boundaries, training loss curve, probability contours, and the sigmoid function.\n\n**View the full interactive notebook here:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/logistic_regression_classification.ipynb\n\nHappy to answer questions about the implementation!\n\n---",2940"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/logistic_regression_classification.ipynb",2941"category": "general",2942"date": "2026-05-01",2943"time": "09:00"2944},2945{2946"source": "gauss_seidel_method",2947"content_type": "notebook",2948"subreddit": "CoCalc",2949"title": "Implementing the Gauss-Seidel Method: An Iterative Solver That Beats Jacobi by 2x",2950"body": "I built an interactive notebook exploring the **Gauss-Seidel iterative method** for solving linear systems Ax = b.\n\n**ELI5 Version:**\nImagine you're trying to solve a puzzle where each piece depends on its neighbors. 
The Jacobi method says \"everyone make your best guess, then everyone update at once.\" Gauss-Seidel is smarter: \"Person 1 updates, then Person 2 uses that new info immediately, then Person 3 uses both updates...\" This \"use it as soon as you have it\" approach converges about twice as fast!\n\n**The Math:**\nDecompose your matrix A = L + D + U (lower, diagonal, upper triangular parts).\n\nFor each variable i, the update formula is:\nxᵢ⁽ᵏ⁺¹⁾ = (bᵢ - ∑ⱼ<ᵢ aᵢⱼxⱼ⁽ᵏ⁺¹⁾ - ∑ⱼ>ᵢ aᵢⱼxⱼ⁽ᵏ⁾) / aᵢᵢ\n\n**When does it converge?**\n- Strictly diagonally dominant: |aᵢᵢ| > ∑ⱼ≠ᵢ |aᵢⱼ|\n- Symmetric positive definite matrices\n- Spectral radius ρ(G) < 1 where G = -(L+D)⁻¹U\n\n**Actual Results from the Notebook:**\n- 50×50 tridiagonal Poisson system\n- Gauss-Seidel: 698 iterations\n- Jacobi: 1393 iterations (2x slower!)\n- Both converged to 10⁻¹⁰ tolerance\n- Spectral radius ratio explains it: ρ_GS ≈ ρ_J²\n\n**Why the speedup?** The notebook includes eigenvalue plots showing that Gauss-Seidel's iteration matrix has eigenvalues clustered much closer to the origin than Jacobi's. Smaller spectral radius = faster convergence.\n\n**Trade-offs:**\n- Jacobi is embarrassingly parallel (great for GPUs)\n- Gauss-Seidel uses O(n) memory with in-place updates\n\n**Check out the full notebook with 4 plots:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/gauss_seidel_method.ipynb","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/gauss_seidel_method.ipynb","category": "general","date": "2026-05-01","time": "09:00"},{"source": "levy_processes","content_type": "notebook","subreddit": "CoCalc","title": "I built a simulation of Lévy processes - here's what I learned about \"wild\" randomness","body": "**TL;DR:** Lévy processes generalize Brownian motion to allow for jumps and heavy tails. 
I simulated α-stable processes and visualized how the stability index α controls the \"wildness\" of the paths.\n\n---\n\n**What are Lévy processes?**\n\nA stochastic process with:\n1. X₀ = 0\n2. Independent increments\n3. Stationary increments\n4. Stochastic continuity\n\nBrownian motion and Poisson processes are both special cases!\n\n**The key parameter: α (stability index)**\n\n- α = 2: You get standard Brownian motion (Gaussian)\n- α = 1.5: Heavy tails, infinite variance\n- α = 1.0: Cauchy process - even the mean is undefined!\n- α = 0.5: Extremely heavy tails with dramatic jumps\n\n**Cool findings:**\n\n1. **Heavy tails follow power laws:** P(|X| > x) ~ x^(-α)\n\n2. **Self-similarity:** These processes look statistically identical at any time scale when rescaled by c^(1/α)\n\n3. **Jump structure:** Smaller α means more frequent large jumps. The Lévy measure ν(dx) ~ |x|^(-1-α) dx controls this.\n\n**The simulation method:**\n\nUsed the Chambers-Mallows-Stuck algorithm to generate stable random variables, then constructed paths via cumulative sums with proper time scaling.\n\n**Why it matters:**\n\nLévy processes are used in mathematical finance (modeling asset price jumps), physics (anomalous diffusion), and anywhere extreme events matter.\n\n**View the interactive notebook:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/levy_processes.ipynb","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/levy_processes.ipynb","category": "general","date": "2026-05-02","time": "09:00"},{"source": "orbital_mechanics_kepler","content_type": "notebook","subreddit": "CoCalc","title": "I simulated Kepler's Laws of Planetary Motion in Python - here's how it works","body": "**TL;DR:** Built a numerical simulation of planetary orbits that verifies Kepler's three laws. 
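The core of such a simulation, in units where GM = 1 (a sketch, not the notebook's exact code):

```python
import numpy as np
from scipy.integrate import solve_ivp

GM = 1.0  # work in units where GM = 1 (an assumption for this sketch)

def two_body(t, s):
    x, y, vx, vy = s
    r3 = (x**2 + y**2) ** 1.5
    return [vx, vy, -GM * x / r3, -GM * y / r3]

a, e = 1.0, 0.6
r_peri = a * (1 - e)                          # perihelion distance
v_peri = np.sqrt(GM * (2 / r_peri - 1 / a))   # vis-viva speed at perihelion
T = 2 * np.pi * np.sqrt(a**3 / GM)            # Kepler's third law
sol = solve_ivp(two_body, (0, T), [r_peri, 0.0, 0.0, v_peri],
                method="DOP853", rtol=1e-10, atol=1e-12)
```

Integrating for exactly one period should bring the planet back to its starting point, and repeating over several values of a reproduces the T² ∝ a³ law.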
The math is beautiful and the code is surprisingly simple.\n\n---\n\n**What are Kepler's Laws?**\n\nBack in the early 1600s, Johannes Kepler figured out three rules that govern how planets orbit the Sun:\n\n1. **Ellipse Law:** Orbits are ellipses with the Sun at one focus (not the center!)\n2. **Equal Areas:** A line from planet to Sun sweeps equal areas in equal times - so planets speed up when closer to the Sun\n3. **Period Law:** T² = (4π²/GM) × a³ - orbital period squared is proportional to orbital size cubed\n\n**The Simulation**\n\nI used scipy's solve_ivp with the DOP853 integrator (8th order Runge-Kutta) to solve the equations of motion:\n\n• ẍ = -GM·x/r³\n• ÿ = -GM·y/r³\n\nFor an orbit with eccentricity e=0.6, the planet's distance from the Sun varies from 0.4 AU at perihelion to 1.6 AU at aphelion.\n\n**Key Results:**\n\n• Angular momentum conserved to 10⁻¹⁰ precision\n• Total energy conserved to same precision\n• T²/a³ ratio constant across different orbits (tested with a = 0.4 to 2.0 AU)\n• Vis-viva equation verified: v² = GM(2/r - 1/a)\n\n**Why This Matters**\n\nThese same equations are used for:\n• Spacecraft mission planning\n• Exoplanet detection\n• Satellite orbit prediction\n• Binary star analysis\n\nThe visualization shows colored sectors of equal area (demonstrating Second Law) and a log-log plot confirming the T² ∝ a³ relationship.\n\n**View the full notebook with code:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/orbital_mechanics_kepler.ipynb\n\n---\n\nQuestions welcome! 
Happy to explain any part of the physics or code.",2973"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/orbital_mechanics_kepler.ipynb",2974"category": "aerospace",2975"date": "2026-05-02",2976"time": "09:00"2977},2978{2979"source": "gmres_algorithm",2980"content_type": "notebook",2981"subreddit": "CoCalc",2982"title": "Implemented GMRES from scratch - here's what I learned about solving non-symmetric linear systems",2983"body": "I just finished implementing the GMRES (Generalized Minimal Residual) algorithm in Python, and wanted to share some insights.\n\n**What is GMRES?**\n\nWhen you have a system Ax = b where A isn't symmetric (so you can't use Conjugate Gradient), GMRES is your friend. It's an iterative method that finds the solution minimizing ||b - Ax||₂.\n\n**The core idea (ELI5):**\n\nImagine you're searching for treasure in a field. Instead of searching everywhere, GMRES cleverly expands its search area step by step using something called a Krylov subspace: K_k = span{r₀, Ar₀, A²r₀, ...}\n\nAt each step, it guarantees finding the best possible approximation in its current search space.\n\n**Key components I implemented:**\n\n1. **Arnoldi iteration** - Builds an orthonormal basis using modified Gram-Schmidt\n2. **Givens rotations** - Efficiently solves the least squares problem\n3. **Convergence monitoring** - Residual decreases monotonically (guaranteed!)\n\n**What I learned:**\n\n- Eigenvalue distribution matters hugely. 
When I tested on convection-diffusion problems (-εu'' + cu' = f), increasing convection c spread eigenvalues into the complex plane and slowed convergence\n- Eigenvalue clustering away from zero = faster convergence\n- Memory grows as O(nk) for k iterations - that's why restarted GMRES exists\n\nThe math is beautiful: the convergence bound involves finding the optimal polynomial p(λ) with p(0)=1 that minimizes max|p(λ)| over all eigenvalues.\n\n**Interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/gmres_algorithm.ipynb\n\nHappy to answer questions about the implementation!","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/gmres_algorithm.ipynb","category": "general","date": "2026-05-03","time": "09:00"},{"source": "hidden_markov_model_viterbi","content_type": "notebook","subreddit": "CoCalc","title": "I implemented Hidden Markov Models with Viterbi decoding from scratch - here's what I learned","body": "Ever wondered how speech recognition, DNA sequence analysis, or predictive text work under the hood? Many use Hidden Markov Models (HMMs).\n\n**ELI5 Version:**\nImagine you have a friend who texts you what they did today: \"Walked\", \"Shopped\", or \"Cleaned\". You can't see the weather where they are, but you know:\n- They're more likely to walk when it's sunny\n- They're more likely to clean when it's rainy\n\nGiven a week of their activities, can you guess what the weather was each day? 
That's exactly what HMMs solve!\n\n**The Math (simplified):**\nAn HMM has:\n- Hidden states (Sunny/Rainy) that follow a Markov chain\n- Observable outputs (Walk/Shop/Clean) that depend on the hidden state\n- Transition probabilities: P(tomorrow's weather | today's weather)\n- Emission probabilities: P(activity | weather)\n\n**The Viterbi Algorithm:**\nInstead of greedily picking the most likely state at each step, Viterbi uses dynamic programming to find the globally optimal sequence. The key recurrence:\n\nδₜ(j) = max[δₜ₋₁(i) · aᵢⱼ] · bⱼ(xₜ)\n\nwhere δₜ(j) is the probability of the best path ending in state j at time t.\n\n**What I Built:**\n- Complete Python implementation with NumPy\n- Log-probability handling to prevent numerical underflow\n- Weather inference demo achieving 70%+ accuracy\n- Visualization of the Viterbi trellis and state comparisons\n\n**Key Takeaways:**\n1. Use log probabilities for long sequences (avoid underflow)\n2. Viterbi gives the globally optimal path, not locally optimal\n3. O(N²T) complexity makes it practical for real applications\n\nView the full interactive notebook here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/hidden_markov_model_viterbi.ipynb\n\nHappy to answer questions about the implementation!",2995"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/hidden_markov_model_viterbi.ipynb",2996"category": "general",2997"date": "2026-05-03",2998"time": "09:00"2999},3000{3001"source": "tensor_algebras",3002"content_type": "notebook",3003"subreddit": "SageMath",3004"title": "I built a computational notebook exploring tensor algebras with NumPy - here's what I learned",3005"body": "If you've ever wondered what tensors really are beyond \"multidimensional arrays,\" this might help.\n\n**What's a tensor?**\n\nA tensor of type (r, s) is a multilinear map. 
Think of it this way:\n- Scalars are rank-0 tensors\n- Vectors are rank-1 tensors\n- Matrices are rank-2 tensors\n- And it keeps going...\n\nThe key property is how they *transform* under basis changes. A tensor's components follow specific transformation laws involving the change-of-basis matrix and its inverse.\n\n**What I implemented:**\n\n1. **Tensor products** using `np.tensordot` - verified bilinearity properties\n2. **Contraction** with `np.einsum` - the generalization of matrix trace\n3. **Transformation laws** - showing how vectors/tensors change under rotation\n4. **Metric tensors** - how to raise/lower indices, fundamental for geometry\n5. **Symmetric/antisymmetric decomposition** - every tensor splits into these parts\n6. **Levi-Civita symbol** - computed cross products using εᵢⱼₖ\n\n**Cool visualization:**\n\nThe plot shows how different metric tensors turn the \"unit circle\" into ellipses. The Euclidean metric gives you the standard circle, but oblique coordinates stretch it. 
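Two of those pieces, contraction and the Levi-Civita cross product, look like this in NumPy (a small sketch, not the notebook's full code):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# Tensor (outer) product: a rank-2 tensor from two rank-1 tensors
T = np.tensordot(u, v, axes=0)

# Full contraction of T over both indices; for an outer product this is u·v
contracted = np.einsum("ii->", T)

# Levi-Civita symbol gives the cross product: (u × v)ᵢ = εᵢⱼₖ uⱼ vₖ
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[k, j, i] = -1.0  # odd permutations
cross = np.einsum("ijk,j,k->i", eps, u, v)
```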
Also verified that trace and determinant are true invariants under similarity transforms.\n\n**Why it matters:**\n\n- General relativity uses the Riemann curvature tensor R^μ_νρσ\n- Machine learning uses tensor decompositions (CP, Tucker)\n- Quantum states of multiple particles live in tensor product spaces\n\n**View the full interactive notebook:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/tensor_algebras.ipynb\n\nHappy to answer questions about the implementation!","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/tensor_algebras.ipynb","category": "general","date": "2026-05-04","time": "09:00"},{"source": "cellular_automata_game_of_life","content_type": "notebook","subreddit": "CoCalc","title": "Built a Conway's Game of Life simulator - here's what I learned about emergent complexity","body": "I just finished a Python implementation of Conway's Game of Life and wanted to share some insights about this classic cellular automaton.\n\n**What is the Game of Life?**\n\nIt's a \"zero-player game\" where you set up initial conditions and watch patterns evolve. 
Each cell on a 2D grid is either alive (1) or dead (0), and the next state depends only on its 8 neighbors:\n\n- **Survival:** A live cell with 2 or 3 neighbors stays alive\n- **Birth:** A dead cell with exactly 3 neighbors becomes alive\n- **Death:** All other cells die or stay dead\n\nThat's it—4 simple rules.\n\n**What emerges from simplicity**\n\nThe fascinating part is what happens when you run it:\n\n- **Still lifes** (like the \"block\") never change\n- **Oscillators** (like the \"pulsar\") repeat every few generations\n- **Spaceships** (like the \"glider\") travel across the grid\n- **Methuselahs** (like the \"R-pentomino\") start small and explode chaotically\n\nFrom my simulation: the R-pentomino starts with just 5 cells but grows to 116 cells over 100 generations, with wild population swings along the way.\n\n**Implementation trick**\n\nInstead of checking each cell's 8 neighbors with loops, I used scipy's `convolve2d` with a 3x3 kernel (all 1s except center 0). Much faster and cleaner!\n\n**Why it matters**\n\nThe Game of Life is:\n- Turing complete (can simulate any computation)\n- A model for studying emergence in complex systems\n- Used in research on self-organization and artificial life\n\nIf you want to explore the full notebook with code and visualizations:\n\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/cellular_automata_game_of_life.ipynb\n\nHappy to answer questions about the implementation!",3017"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/cellular_automata_game_of_life.ipynb",3018"category": "general",3019"date": "2026-05-04",3020"time": "09:00"3021},3022{3023"source": "elliptic_integrals",3024"content_type": "notebook",3025"subreddit": "CoCalc",3026"title": "Exploring Elliptic Integrals with Python - From Pendulums to Ellipses",3027"body": "Ever wondered why some integrals can't be solved with elementary functions? 
Elliptic integrals are a perfect example - they arise naturally but require special treatment.\n\n**What are they?**\n\nThe complete elliptic integral of the first kind is:\n\nK(k) = ∫₀^(π/2) dθ/√(1 - k²sin²θ)\n\nThis integral cannot be expressed using basic functions like sin, cos, exp, or log. That's what makes it \"special.\"\n\n**Why should you care?**\n\n1. **Pendulum physics**: The exact period of a pendulum is T = 4√(L/g)·K(sin(θ₀/2)). The small-angle approximation T = 2π√(L/g) fails badly at large amplitudes - at 179°, the true period is ~3.5x longer!\n\n2. **Ellipse geometry**: The circumference of an ellipse is C = 4a·E(e), where E is the elliptic integral of the second kind and e is eccentricity.\n\n3. **Legendre's relation**: A beautiful identity connecting K and E: E(k)K(k') + E(k')K(k) - K(k)K(k') = π/2, where k' = √(1-k²)\n\n**Python implementation:**\n\nSciPy makes this easy:\n```python\nfrom scipy import special\nK = special.ellipk(m) # Note: uses m = k²\nE = special.ellipe(m)\n```\n\nThe notebook includes visualizations of how these functions behave and numerical verification of the identities.\n\n**View the full notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/elliptic_integrals.ipynb\n\n---","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/elliptic_integrals.ipynb","category": "general","date": "2026-05-05","time": "09:00"},{"source": "double_pendulum_chaos","content_type": "notebook","subreddit": "CoCalc","title": "I simulated a double pendulum to visualize chaos theory - here's what I learned about determinism vs predictability","body": "---\n\n**What is this?**\n\nI built a Python simulation of a double pendulum—two pendulums connected end-to-end—to explore chaos theory. 
Despite being governed by completely deterministic equations (derived from Lagrangian mechanics), this system exhibits extreme sensitivity to initial conditions.\n\n**The ELI5 version:**\n\nImagine you have two identical double pendulums. You start them at almost exactly the same position—differing by just 0.001 radians (about 0.06 degrees). Within seconds, they're doing completely different things. One might be swinging left while the other swings right.\n\nThis isn't randomness—both follow the exact same physics rules. It's chaos.\n\n**What I learned:**\n\n1. **Chaos ≠ Randomness**: Total mechanical energy is conserved perfectly (within ~0.004% relative error). The system follows deterministic equations, but outcomes are unpredictable because tiny measurement errors get amplified exponentially.\n\n2. **Exponential divergence**: Plotting trajectory separation on a log scale shows linear growth—the signature of exponential divergence and positive Lyapunov exponents.\n\n3. **Phase space structure**: The phase portrait shows the system exploring a complex region without ever repeating—bounded but aperiodic motion.\n\n4. **Numerical precision matters**: I used scipy's odeint and verified energy conservation to validate results.\n\n**The math (simplified):**\n\nThe kinetic energy involves terms like:\n- T = ½(m₁+m₂)L₁²ω₁² + ½m₂L₂²ω₂² + m₂L₁L₂ω₁ω₂cos(θ₁-θ₂)\n\nWhere θ are angles and ω are angular velocities. The coupled nonlinear nature of these equations is what creates chaos.\n\n**Why it matters:**\n\nThis is why weather forecasting has fundamental limits. 
The atmosphere is a chaotic system—small measurement errors grow exponentially, making predictions beyond ~2 weeks essentially impossible regardless of computing power.\n\n**View the full notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/double_pendulum_chaos.ipynb\n\nLibraries used: NumPy, SciPy, Matplotlib",3039"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/double_pendulum_chaos.ipynb",3040"category": "mathematics",3041"date": "2026-05-05",3042"time": "09:00"3043},3044{3045"source": "garch_volatility_model",3046"content_type": "notebook",3047"subreddit": "CoCalc",3048"title": "I built a GARCH(1,1) volatility model from scratch in Python - here's what I learned about why market turbulence comes in waves",3049"body": "I've been studying financial econometrics and wanted to really understand GARCH models, so I implemented one from first principles using only NumPy and SciPy.\n\n**What is GARCH?**\n\nGARCH (Generalized Autoregressive Conditional Heteroskedasticity) models time-varying volatility. The key observation is that volatility clusters—big market moves tend to follow big market moves.\n\n**The Math (in Unicode since Reddit doesn't render LaTeX):**\n\nThe GARCH(1,1) variance equation:\nσ²ₜ = ω + αε²ₜ₋₁ + βσ²ₜ₋₁\n\nWhere:\n- ω > 0: baseline variance\n- α ≥ 0: ARCH effect (how yesterday's shock affects today's variance)\n- β ≥ 0: GARCH effect (persistence of variance)\n- α + β < 1: stationarity condition\n\n**What I implemented:**\n\n1. Simulation function to generate synthetic GARCH returns\n2. Maximum likelihood estimation to recover parameters\n3. 
Volatility forecasting that reverts to long-run average\n\n**Key results:**\n\n- My estimated parameters matched the true simulation values closely\n- Persistence (α+β) ≈ 0.95 means volatility shocks are very persistent\n- The unconditional variance is ω/(1-α-β)\n- Returns show excess kurtosis (fat tails) even with normal innovations\n\n**The \"aha\" moment:**\n\nThe news impact curve! It shows σ²ₜ as a function of εₜ₋₁. It's a parabola—both positive AND negative shocks increase volatility equally. This is why the basic GARCH is symmetric (unlike EGARCH or GJR-GARCH which capture leverage effects).\n\n**View the full notebook with interactive code:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/garch_volatility_model.ipynb\n\nHappy to answer questions about the implementation!\n\n---",3050"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/garch_volatility_model.ipynb",3051"category": "general",3052"date": "2026-05-06",3053"time": "09:00"3054},3055{3056"source": "data_pipeline_etl",3057"content_type": "notebook",3058"subreddit": "CoCalc",3059"title": "[Educational] Build an ETL Pipeline from Scratch in Python - Complete Notebook with Visualizations",3060"body": "I created an educational Jupyter notebook that implements a complete ETL (Extract, Transform, Load) pipeline from scratch using Python.\n\n**What you'll learn:**\n\n1. **Extract phase**: Reading from multiple data sources (CSV, JSON, simulated APIs) and handling different formats\n\n2. **Transform phase**:\n - Handling missing values with intelligent imputation\n - Fixing invalid data entries\n - Enriching data by joining reference tables\n - Creating derived features (revenue, profit margins, time dimensions)\n\n3. **Load phase**: Creating aggregated summary tables for analytics\n\n4. 
**Quality tracking**: Monitoring data quality metrics at each pipeline stage\n\nThe notebook includes 4 visualizations showing:\n- Pipeline architecture diagram\n- Before/after data quality metrics\n- Business analytics dashboard\n- Pipeline performance metrics\n\nAll code is well-commented and uses only standard libraries (pandas, numpy, matplotlib).\n\nInteractive notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/data_pipeline_etl/data_pipeline_etl.ipynb\n\nFeedback welcome!",3061"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/data_pipeline_etl/data_pipeline_etl.ipynb",3062"category": "general",3063"date": "2026-05-06",3064"time": "09:00"3065},3066{3067"source": "population_genetics_hardy_weinberg",3068"content_type": "notebook",3069"subreddit": "CoCalc",3070"title": "[OC] Simulating Hardy-Weinberg Equilibrium and Genetic Drift in Python",3071"body": "I built a Python simulation to explore one of the foundational concepts in population genetics: the Hardy-Weinberg principle.\n\n**ELI5 Version:**\nImagine a population of organisms with two versions of a gene (alleles). The Hardy-Weinberg equation tells us what percentage of the population will have each combination IF nothing is changing the gene frequencies (no natural selection, everyone mates randomly, etc.).\n\nThe equation is: p² + 2pq + q² = 1\n\nWhere:\n- p = frequency of allele A\n- q = frequency of allele a (and p + q = 1)\n- p² = frequency of AA individuals\n- 2pq = frequency of Aa individuals\n- q² = frequency of aa individuals\n\n**What I learned:**\n\n1. **Genetic drift is powerful in small populations** - With N=50, allele frequencies swung wildly and sometimes hit fixation (0 or 1). With N=5000, frequencies barely budged from the starting point.\n\n2. 
**Maximum heterozygosity occurs at p = 0.5** - The \"Hardy-Weinberg parabola\" shows that genetic diversity (heterozygotes) peaks when both alleles are equally common.\n\n3. **Chi-squared testing** - Used χ² = Σ(O-E)²/E to test whether observed genotype counts match Hardy-Weinberg expectations.\n\nLibraries used: NumPy, SciPy, Matplotlib\n\nView and run the full notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/population_genetics_hardy_weinberg.ipynb",3072"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/population_genetics_hardy_weinberg.ipynb",3073"category": "general",3074"date": "2026-05-07",3075"time": "09:00"3076},3077{3078"source": "interference_simulation",3079"content_type": "notebook",3080"subreddit": "CoCalc",3081"title": "I built a wave interference simulation in Python - here's how the physics works",3082"body": "I've been working on understanding wave interference better, so I coded up a simulation of two-source interference in Python. Wanted to share both the physics and the implementation.\n\n**The Core Idea**\n\nWhen two waves occupy the same space, they add together (superposition principle). If they're in phase, you get constructive interference (bright spots). If they're out of phase, destructive interference (dark spots).\n\n**The Math (in plain terms)**\n\n- Two sources emit waves: ψ₁ = A·cos(k·r₁) and ψ₂ = A·cos(k·r₂)\n- Wave number k = 2π/λ\n- Total wave: ψ_total = ψ₁ + ψ₂\n- Intensity: I = (ψ_total)²\n\nThe path difference Δr = r₂ - r₁ determines what happens:\n- Δr = mλ → constructive (bright)\n- Δr = (m + ½)λ → destructive (dark)\n\n**What I Learned**\n\n1. The lines of constant intensity form hyperbolas with the sources as foci\n2. In the far field, these become the evenly-spaced fringes from Young's experiment\n3. 
Fringe spacing follows Δy = λL/d (verified numerically with < 1% error)\n\n**Implementation**\n\nUsed NumPy meshgrid for the observation plane (500×500), calculated distances to each source, summed the wave amplitudes, and squared for intensity. Matplotlib handles the visualization.\n\nThe code also uses scipy.signal.find_peaks to measure actual fringe spacing and compare to theory.\n\n**View the full notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/interference_simulation.ipynb\n\nHappy to answer questions about the physics or the code!",3083"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/interference_simulation.ipynb",3084"category": "general",3085"date": "2026-05-07",3086"time": "09:00"3087},3088{3089"source": "path_integral_monte_carlo",3090"content_type": "notebook",3091"subreddit": "CoCalc",3092"title": "I implemented Path Integral Monte Carlo in Python to simulate quantum particles - here's how it works",3093"body": "Ever heard that quantum mechanics is impossible to simulate classically? Well, Path Integral Monte Carlo (PIMC) is one clever workaround!\n\n**The Big Idea**\n\nRichard Feynman showed that quantum statistical mechanics can be rewritten as a classical problem. The trick: a single quantum particle at temperature T becomes equivalent to a classical \"ring polymer\" - a chain of M connected beads forming a closed loop.\n\n**How it works (ELI5):**\n\n1. Break imaginary time into M slices (like frames in a movie)\n2. Connect particle positions at each time slice with \"springs\"\n3. Sample configurations using standard Metropolis Monte Carlo\n4. 
The quantum partition function Z = Tr[e^(-βH)] becomes a classical integral!\n\n**What I learned:**\n\n- At low temperature: quantum effects dominate (zero-point energy E₀ = ℏω/2)\n- At high temperature: classical behavior emerges (E → kT)\n- The ring polymer \"spreads out\" at low T, showing quantum delocalization\n- Trotter error scales as O(β²/M²), so more time slices = better accuracy\n\n**Key equations in Unicode:**\n\n- Action: S = Σᵢ [m(xᵢ - xᵢ₊₁)²/2τ + τV(xᵢ)]\n- Acceptance: min(1, exp(-ΔS/ℏ))\n\nThe results match exact analytical solutions for the harmonic oscillator within error bars. Pretty satisfying when theory meets simulation!\n\n**Explore the notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/path_integral_monte_carlo.ipynb\n\nHappy to answer questions about the implementation!",3094"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/path_integral_monte_carlo.ipynb",3095"category": "general",3096"date": "2026-05-08",3097"time": "09:00"3098},3099{3100"source": "transformer_architecture_attention",3101"content_type": "notebook",3102"subreddit": "CoCalc",3103"title": "I implemented the Transformer attention mechanism from scratch - here's what I learned about the math behind ChatGPT",3104"body": "I built the core attention mechanism from the \"Attention Is All You Need\" paper using just NumPy. Here's an ELI5 breakdown:\n\n**What is attention?**\n\nThink of it like a smart search. You have:\n- Queries (Q): What you're looking for\n- Keys (K): Labels on information\n- Values (V): The actual information\n\nAttention computes: softmax(QK^T/√d_k)V\n\nThis gives you a weighted combination of Values, where the weights depend on how well Queries match Keys.\n\n**Why the √d_k scaling?**\n\nThis was my \"aha\" moment. Without it, the dot products can get huge (variance = d_k). Large values push softmax outputs toward 0 or 1, causing vanishing gradients. 
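You can verify that variance claim directly with random vectors (a quick sketch of mine; d_k = 64 is an arbitrary choice, not from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
d_k = 64  # head dimension (arbitrary demo value)
q = rng.standard_normal((100_000, d_k))
k = rng.standard_normal((100_000, d_k))
dots = (q * k).sum(axis=1)                   # raw dot products q·k
var_raw = dots.var()                         # ≈ d_k
var_scaled = (dots / np.sqrt(d_k)).var()     # ≈ 1 after the sqrt(d_k) scaling
print(var_raw, var_scaled)
```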
Dividing by √d_k keeps variance at 1.\n\n**Multi-head attention**\n\nInstead of one attention operation, we run several in parallel with different learned projections. This lets the model attend to different types of relationships simultaneously.\n\n**Causal masking**\n\nFor autoregressive models (like GPT), we mask future positions so the model can't \"cheat\" by looking ahead. It's just a lower-triangular matrix applied before softmax.\n\n**The quadratic problem**\n\nAttention is O(n²) in sequence length - this is why context windows are limited. My timing tests confirmed the quadratic scaling perfectly.\n\nInteractive notebook with visualizations: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/transformer_architecture_attention.ipynb",3105"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/transformer_architecture_attention.ipynb",3106"category": "general",3107"date": "2026-05-08",3108"time": "09:00"3109},3110{3111"source": "ornstein_uhlenbeck_process",3112"content_type": "notebook",3113"subreddit": "CoCalc",3114"title": "I simulated the Ornstein-Uhlenbeck process and verified it statistically - here's what I learned about mean-reverting randomness",3115"body": "**What is it?**\n\nThe Ornstein-Uhlenbeck (OU) process is like Brownian motion with a rubber band attached. Regular Brownian motion wanders off forever, but OU always gets pulled back toward a central value.\n\n**The equation (in plain terms):**\n\ndX = θ(μ - X)dt + σdW\n\n- θ (theta): How strong the rubber band is - higher means faster snap-back\n- μ (mu): The \"home\" value it wants to return to\n- σ (sigma): How much random noise there is\n- dW: The random Brownian kicks\n\n**ELI5 version:** Imagine a drunk person walking home. Regular Brownian motion = drunk person in an infinite field. 
OU process = drunk person in a valley - they stumble around randomly, but gravity always pulls them back toward the bottom.\n\n**What I verified:**\n\n1. The mean converges exponentially to μ with half-life = ln(2)/θ\n2. Variance grows from 0 to σ²/(2θ) as the stationary limit\n3. Terminal distribution passes Kolmogorov-Smirnov normality test\n4. Autocovariance decays exponentially as theory predicts\n\n**Why it matters:**\n\n- Physics: Models velocity of particles experiencing friction\n- Finance: The Vasicek model for interest rates is exactly this\n- Biology: Population sizes fluctuating around carrying capacity\n\n**View the interactive notebook:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/ornstein_uhlenbeck_process.ipynb\n\nCode uses numpy, scipy, and matplotlib. The \"exact discretization\" method gives perfect samples because we know the transition distribution analytically!",3116"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/ornstein_uhlenbeck_process.ipynb",3117"category": "general",3118"date": "2026-05-09",3119"time": "09:00"3120},3121{3122"source": "lie_groups_so3",3123"content_type": "notebook",3124"subreddit": "CoCalc",3125"title": "Implemented the Lie Group SO(3) in Python - The Math Behind 3D Rotations",3126"body": "**TL;DR:** Built a Python implementation of SO(3), the mathematical structure describing all 3D rotations. Includes the exponential map, logarithm, and geodesic interpolation (SLERP).\n\n**What is SO(3)?**\n\nImagine you have a rigid object in 3D space. Any way you can rotate it (without flipping it inside-out) corresponds to an element of SO(3). 
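Side note on the Ornstein-Uhlenbeck post above: the "exact discretization" it mentions fits in a few lines (my own sketch with arbitrary demo parameters, not the notebook's code):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, mu, sigma, dt = 1.0, 0.0, 0.5, 0.01   # arbitrary demo parameters
n_steps, n_paths = 1000, 20_000              # T = 10, i.e. ~14 half-lives
x = np.full(n_paths, 3.0)                    # start far from mu
decay = np.exp(-theta * dt)
noise_sd = sigma * np.sqrt((1 - np.exp(-2 * theta * dt)) / (2 * theta))
for _ in range(n_steps):
    # Exact transition: X_{t+dt} | X_t is Gaussian, so there is no
    # discretization error regardless of dt.
    x = mu + (x - mu) * decay + noise_sd * rng.standard_normal(n_paths)
print(x.mean(), x.var())  # ≈ mu and ≈ sigma**2 / (2*theta)
```

This is why the post can call the samples "perfect": unlike Euler-Maruyama, the update draws from the true transition distribution.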
Mathematically, it's the set of 3x3 matrices R where:\n\n- R^T R = I (orthogonal - preserves lengths and angles)\n- det(R) = 1 (no reflections allowed)\n\n**Why should you care?**\n\nSO(3) appears everywhere:\n- Robotics (end-effector orientations)\n- Computer graphics (camera rotations, animation)\n- Physics (rigid body dynamics, quantum mechanics)\n- Aerospace (satellite attitude control)\n\n**Cool discovery: Rotations don't commute!**\n\nIf you rotate around X-axis then Y-axis, you get a different result than Y then X. My code calculates the \"commutator angle\" between these - about 24 degrees for pi/4 and pi/3 rotations!\n\n**The Rodrigues Formula**\n\nThe exponential map connects the Lie algebra (infinitesimal rotations) to actual rotation matrices:\n\nexp([omega]_x) = I + (sin(theta)/theta)[omega]_x + ((1-cos(theta))/theta^2)[omega]_x^2\n\nThis formula is numerically stable and efficient.\n\n**What I implemented:**\n- hat/vee maps (vector <-> skew-symmetric matrix)\n- Exponential map (Rodrigues' formula)\n- Logarithm map (inverse of exp)\n- SLERP (geodesic interpolation between rotations)\n\nView and run the notebook here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/lie_groups_so3.ipynb\n\nHappy to answer questions about Lie groups or the implementation!",3127"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/lie_groups_so3.ipynb",3128"category": "general",3129"date": "2026-05-09",3130"time": "09:00"3131},3132{3133"source": "secant_method",3134"content_type": "notebook",3135"subreddit": "CoCalc",3136"title": "Implemented the Secant Method in Python - Here's why its convergence rate is the Golden Ratio",3137"body": "I just built a Python implementation of the Secant Method for root finding, and the math behind it is fascinating!\n\n**The Problem:** You want to solve f(x) = 0 but computing derivatives is expensive or impossible.\n\n**The Solution:** Instead of 
Newton's method (which needs f'(x)), approximate the derivative using two points:\n\nf'(xₙ) ≈ (f(xₙ) - f(xₙ₋₁)) / (xₙ - xₙ₋₁)\n\nThis gives us the iteration formula:\nxₙ₊₁ = xₙ - f(xₙ) · (xₙ - xₙ₋₁) / (f(xₙ) - f(xₙ₋₁))\n\n**Why it works:** Geometrically, you're drawing a line through two points on the curve and finding where it crosses zero. That's your next guess!\n\n**The cool part:** The convergence order is φ = (1 + √5)/2 ≈ 1.618 - literally the golden ratio! This means:\n- Faster than bisection (order 1)\n- Slower than Newton (order 2)\n- But only ONE function evaluation per iteration\n\n**Results:**\n- Finding √2: Converged in 6 iterations to 15 decimal places\n- Finding root of x - cos(x) = 0: Also 6 iterations\n\nThe visualization shows how secant lines progressively approach the root. Pretty elegant!\n\n**View the full notebook with code and plots:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/secant_method.ipynb",3138"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/secant_method.ipynb",3139"category": "general",3140"date": "2026-05-10",3141"time": "09:00"3142},3143{3144"source": "projectile_motion_air_resistance",3145"content_type": "notebook",3146"subreddit": "CoCalc",3147"title": "I built a Python simulation comparing projectile motion with and without air resistance - here's what I learned",3148"body": "I created a Jupyter notebook to explore how air resistance affects projectile motion, and the results really helped me understand why real-world physics differs so much from idealized textbook problems.\n\n**The Problem:**\nTextbooks usually ignore air resistance, giving us the simple parabolic trajectory. But real projectiles experience quadratic drag proportional to v². 
I wanted to see exactly how much this matters.\n\n**What I Built:**\n- Used scipy.integrate.solve_ivp with RK45 method to solve the coupled differential equations\n- Modeled drag as F = -b|v|v (quadratic in velocity)\n- Parameterized drag using terminal velocity (for a baseball, about 35 m/s)\n\n**Key Findings:**\n\n1. **Range reduced by ~40%** - A 50 m/s projectile at 45° travels much shorter with drag\n2. **Asymmetric trajectory** - The descent is steeper than ascent because gravity and drag oppose each other differently going up vs down\n3. **Optimal angle shifts below 45°** - Lower angles mean less time in the air = less cumulative drag\n4. **Speed is capped by terminal velocity** - You can't exceed v_t during descent\n\n**ELI5 Version:**\nImagine throwing a ball through honey vs air. The honey slows it down more when it's moving fast. Air does the same thing, just less dramatically. This \"slowing down\" force grows with the square of speed, so fast-moving objects get hit much harder by drag.\n\n**The Code:**\nEverything is self-contained using NumPy, SciPy, and Matplotlib. 
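That ODE setup can be sketched roughly like this (my own minimal version: the post's 50 m/s launch at 45° and 35 m/s terminal velocity; the event handling is my addition, not necessarily how the notebook does it):

```python
import numpy as np
from scipy.integrate import solve_ivp

g, v_t = 9.81, 35.0          # terminal velocity parameterizes the drag
k = g / v_t**2               # quadratic drag coefficient per unit mass
v0, angle = 50.0, np.radians(45)

def rhs(t, s, drag):
    # State s = [x, y, vx, vy]; drag=0 recovers the vacuum trajectory.
    x, y, vx, vy = s
    speed = np.hypot(vx, vy)
    return [vx, vy, -drag * k * speed * vx, -g - drag * k * speed * vy]

def hit_ground(t, s, drag):
    return s[1]                      # fires when y crosses zero
hit_ground.terminal = True
hit_ground.direction = -1            # only on the way down

def flight_range(drag):
    s0 = [0.0, 1e-9, v0 * np.cos(angle), v0 * np.sin(angle)]
    sol = solve_ivp(rhs, (0, 60), s0, args=(drag,), events=hit_ground,
                    max_step=0.01)
    return sol.y_events[0][0][0]     # x coordinate at impact

r_vac, r_drag = flight_range(0.0), flight_range(1.0)
print(r_vac, r_drag)                 # drag shortens the range substantially
```

The vacuum case should land near the analytic range v₀²·sin(2θ)/g ≈ 255 m, which is a handy sanity check on the integrator.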
The ODE system tracks position (x, y) and velocity (vₓ, vᵧ) as a 4-element state vector.\n\nView and run the full notebook here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/projectile_motion_air_resistance.ipynb\n\nHappy to answer questions about the implementation or physics!",3149"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/projectile_motion_air_resistance.ipynb",3150"category": "general",3151"date": "2026-05-10",3152"time": "09:00"3153},3154{3155"source": "capm_model",3156"content_type": "notebook",3157"subreddit": "CoCalc",3158"title": "Built a Capital Asset Pricing Model (CAPM) Implementation in Python - Here's What I Learned",3159"body": "Just finished building a complete CAPM implementation and wanted to share what I learned!\n\n**What is CAPM?**\n\nCAPM answers: \"What return should I expect given the risk I'm taking?\" The formula is:\n\nE(Rᵢ) = Rf + βᵢ × [E(Rm) - Rf]\n\nIn plain English: Expected return = Risk-free rate + (Beta × Market risk premium)\n\n**What is Beta?**\n\nBeta (β) measures how sensitive an asset is to market movements:\n\nβ = Cov(Rᵢ, Rm) / Var(Rm)\n\n- β = 1: Moves exactly with market\n- β > 1: More volatile than market (aggressive)\n- β < 1: Less volatile than market (defensive)\n\n**What I Built:**\n\n1. Simulated market returns and 10 assets with different betas (0.5 to 1.8)\n2. Estimated betas using OLS regression (scipy.stats.linregress)\n3. Compared expected vs realized returns\n4. Visualized the Security Market Line (SML)\n5. Calculated Jensen's Alpha (abnormal returns)\n\n**Key Takeaways:**\n\n- R² tells you what proportion of return variance comes from market movements (systematic risk)\n- Assets above the SML are \"undervalued\" - they earned more than CAPM predicted\n- Portfolio beta is just the weighted average of individual betas\n- CAPM has limitations - it's a single-factor model. 
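The beta-estimation step above reduces to a single regression (a sketch with simulated data; the beta and volatility numbers here are my own, not the notebook's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 2000
r_m = rng.normal(0.0004, 0.01, n)                 # simulated daily market returns
true_beta = 1.3
r_i = true_beta * r_m + rng.normal(0, 0.005, n)   # asset = beta*market + noise

fit = stats.linregress(r_m, r_i)   # OLS slope = Cov(R_i, R_m) / Var(R_m)
print(fit.slope)                   # recovered beta, close to 1.3
print(fit.rvalue**2)               # share of variance that is systematic
```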
The Fama-French three-factor model adds size and value factors (and Carhart's extension adds momentum)\n\n**Libraries used:** numpy, pandas, matplotlib, scipy\n\nCheck out the full interactive notebook here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/capm_model.ipynb\n\nHappy to answer questions!\n\n---",3160"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/capm_model.ipynb",3161"category": "general",3162"date": "2026-05-11",3163"time": "09:00"3164},3165{3166"source": "autoencoder",3167"content_type": "notebook",3168"subreddit": "CoCalc",3169"title": "Built an Autoencoder from Scratch with NumPy - Full Backpropagation Implementation",3170"body": "I created a complete autoencoder implementation using only NumPy to understand how these networks actually learn. No frameworks, just math and code.\n\n**What's an Autoencoder?**\n\nThink of it as a neural network that learns to compress and decompress data. It has two parts:\n- Encoder: Squeezes your data into a smaller representation\n- Decoder: Reconstructs the original from that compressed version\n\n**The Math (simplified):**\n\nEncoder: z = σ(Wₑ · x + bₑ)\nDecoder: x̂ = σ(Wₐ · z + bₐ)\nLoss: L = average of (x - x̂)²\n\nThe network learns by minimizing reconstruction error using gradient descent.\n\n**What I Built:**\n\n- 2D input → 1D latent space → 2D output\n- That's 2x compression!\n- Trained on spiral data with noise\n\n**Key Findings:**\n\n1. Even a 1D bottleneck captures meaningful structure\n2. The latent code naturally orders data along the spiral\n3. 
Reconstruction error varies spatially - some regions compress better\n\n**What I Learned:**\n\nThe backpropagation gradients flow backward through the network:\n- Output gradient: δ_out = (x̂ - x) ⊙ σ'(pre-activation)\n- Hidden gradient: δ_hidden = (Wₐᵀ · δ_out) ⊙ σ'(pre-activation)\n\nXavier initialization really helps with training stability.\n\nThe notebook includes full visualizations: loss curves, reconstruction comparisons, latent space distributions, and the learned decoder manifold.\n\n**Interactive Notebook:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/autoencoder.ipynb\n\nQuestions welcome! Happy to explain any part of the implementation.",3171"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/autoencoder.ipynb",3172"category": "general",3173"date": "2026-05-11",3174"time": "09:00"3175},3176{3177"source": "wavelet_transform_analysis",3178"content_type": "notebook",3179"subreddit": "CoCalc",3180"title": "Implemented Continuous Wavelet Transform from scratch - here's what I learned about time-frequency analysis",3181"body": "I just finished a notebook implementing the Continuous Wavelet Transform (CWT) using the Morlet wavelet, and wanted to share some insights.\n\n**The Problem with Fourier:**\nRegular FFT tells you which frequencies exist in a signal, but not when they occur. 
For non-stationary signals (where frequency changes over time), you need something better.\n\n**Enter Wavelets:**\nThe CWT uses a \"mother wavelet\" that gets stretched (scaled) and shifted (translated) across your signal:\n\nW(a,b) = (1/√a) · ∫ f(t) · ψ*((t-b)/a) dt\n\n- a = scale (inversely related to frequency)\n- b = translation (time position)\n- ψ = mother wavelet\n\n**My Test Signal:**\n- Linear chirp: frequency sweeps from 10 Hz to 100 Hz\n- Transient burst: 50 Hz Gaussian-windowed pulse at t=1.0s\n- Added noise for realism\n\n**Key Insight - The Uncertainty Principle:**\nYou literally cannot have perfect time AND frequency resolution simultaneously. Δt·Δf ≥ 1/4π\n\nAt low frequencies (large scales): great frequency resolution, poor time resolution\nAt high frequencies (small scales): great time resolution, poor frequency resolution\n\nThis is why the scalogram shows wider \"blobs\" at low frequencies and narrower ones at high frequencies.\n\n**Why Morlet?**\nψ(t) = π^(-1/4) · exp(iω₀t) · exp(-t²/2)\n\nIt's a complex sinusoid modulated by a Gaussian - provides optimal joint time-frequency localization.\n\n**Practical Applications:**\n- Seismic analysis (detecting P and S waves)\n- Medical: ECG/EEG pattern detection\n- JPEG 2000 uses discrete wavelet transform\n- Financial time series anomaly detection\n\n**View the full notebook with code and visualizations:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/wavelet_transform_analysis.ipynb\n\nHappy to answer questions about the implementation!",3182"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/wavelet_transform_analysis.ipynb",3183"category": "general",3184"date": "2026-05-12",3185"time": "09:00"3186},3187{3188"source": "reinforcement_learning_q_learning",3189"content_type": "notebook",3190"subreddit": "CoCalc",3191"title": "I implemented Q-Learning from scratch to solve a gridworld maze - here's what I 
learned about RL fundamentals",3192"body": "I built a tabular Q-Learning agent in Python that learns to navigate from start to goal in a 5×5 gridworld with obstacles. Wanted to share my implementation and key takeaways.\n\n**The Core Idea**\n\nQ-Learning finds the optimal action-value function Q*(s,a) without needing a model of the environment. The agent learns by interacting with the world and updating its Q-table using temporal difference (TD) learning:\n\nQ(s,a) ← Q(s,a) + α[r + γ·max Q(s',a') - Q(s,a)]\n\nWhere:\n- α (0.1) = learning rate\n- γ (0.99) = discount factor\n- r = immediate reward\n- max Q(s',a') = best estimated future value\n\n**Exploration vs Exploitation**\n\nUsed ε-greedy policy starting at ε=1.0 (100% random) decaying to 0.01. This lets the agent explore early, then exploit learned knowledge.\n\n**Results**\n\nAfter 500 training episodes:\n- 100% success rate reaching the goal\n- Optimal policy discovered\n- Clear convergence in learning curves\n\n**Key Takeaways**\n\n1. The discount factor γ determines how \"far-sighted\" the agent is\n2. Sufficient exploration is critical - without it, the agent gets stuck in local optima\n3. 
Watching the learning curve converge is satisfying - it shows Q-values stabilizing\n\nThe notebook includes visualizations of the learning curve, policy arrows, and Q-value heatmap.\n\n**Interactive Notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/reinforcement_learning_q_learning.ipynb\n\nHappy to answer questions about the implementation!",3193"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/reinforcement_learning_q_learning.ipynb",3194"category": "general",3195"date": "2026-05-12",3196"time": "09:00"3197},3198{3199"source": "metric_tensor",3200"content_type": "notebook",3201"subreddit": "CoCalc",3202"title": "I built a Python class to compute metric tensors, Christoffel symbols, and Gaussian curvature - here's what I learned",3203"body": "Hey everyone! I've been diving into differential geometry and decided to implement the core concepts in Python.\n\n**What's a metric tensor?**\n\nThink of it as a generalization of the Pythagorean theorem. In flat space, distance is just ds² = dx² + dy². But on a curved surface like a sphere, the formula becomes:\n\nds² = g_μν dx^μ dx^ν\n\nThe metric tensor g_μν encodes how distances and angles work at every point.\n\n**What I implemented:**\n\n1. A `MetricTensor` class that computes:\n - Line elements (infinitesimal distances)\n - Vector norms and inner products\n - Angles between vectors\n - Area elements via √det(g)\n\n2. Several standard metrics:\n - Euclidean (flat space)\n - Polar coordinates (still flat, but metric varies with r)\n - Sphere: ds² = R²(dθ² + sin²θ dφ²)\n - Torus: non-uniform curvature!\n - Poincaré disk (hyperbolic geometry)\n\n3. 
Numerical computation of:\n - Christoffel symbols Γ^λ_μν (using finite differences)\n - Gaussian curvature K from the Riemann tensor\n\n**Cool results:**\n\n- Unit sphere gives K = 1.0000 (exactly as predicted!)\n- Flat space in polar coordinates gives K ≈ 0 (numerical precision)\n- The visualization shows how \"unit circles\" become ellipses in curved coordinates\n\n**Why this matters:**\n\nThis is the mathematical foundation for General Relativity. Spacetime has a metric tensor, and gravity IS the curvature of that metric.\n\nNotebook with full code and visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/metric_tensor.ipynb\n\nHappy to answer questions!",3204"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/metric_tensor.ipynb",3205"category": "general",3206"date": "2026-05-13",3207"time": "09:00"3208},3209{3210"source": "homology_groups",3211"content_type": "notebook",3212"subreddit": "CoCalc",3213"title": "I built a Python implementation of homology group computation from scratch",3214"body": "**TL;DR:** Homology groups detect \"holes\" in shapes. I wrote Python code to compute them using boundary matrices.\n\n**What are homology groups?**\n\nImagine you want to mathematically distinguish a coffee mug from a ball. They're both smooth surfaces, but the mug has a handle - a \"hole\" you can stick your finger through. Homology groups formalize this intuition.\n\nThe Betti numbers tell you:\n- b0 = number of connected pieces\n- b1 = number of 1D holes (loops)\n- b2 = number of 2D holes (cavities)\n\n**How it works:**\n\n1. Represent your space as a simplicial complex (vertices, edges, triangles, etc.)\n2. Build boundary matrices that encode how n-simplices connect to (n-1)-simplices\n3. The key property: the boundary of a boundary is always zero\n4. 
Homology = cycles that aren't boundaries\n\n**What I learned:**\n\n- A hollow triangle (circle) has b1=1 - one loop\n- A filled triangle (disk) has b1=0 - the triangle \"fills\" the hole\n- A hollow tetrahedron (sphere) has b2=1 - one cavity\n- A torus has b1=2 - two independent loops (meridian and longitude)\n\nThe Euler characteristic χ = b0 - b1 + b2 - ... works out to 2 for a sphere and 0 for a torus.\n\n**View the full notebook with code and visualizations:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/homology_groups.ipynb",3215"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/homology_groups.ipynb",3216"category": "general",3217"date": "2026-05-13",3218"time": "09:00"3219},3220{3221"source": "jump_diffusion_model",3222"content_type": "notebook",3223"subreddit": "CoCalc",3224"title": "Implementing Merton's Jump Diffusion Model in Python—Why Markets Don't Follow the Normal Distribution",3225"body": "**TL;DR:** Standard stock price models assume smooth, continuous changes. But markets crash suddenly. Merton's Jump Diffusion fixes this by adding random \"jumps\" to the price process.\n\n---\n\n**The Problem with Black-Scholes**\n\nThe classic Black-Scholes model assumes stock prices follow Geometric Brownian Motion (GBM):\n\ndS = μS dt + σS dW\n\nThis gives you a log-normal distribution of returns—nice and symmetric with thin tails. 
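Side note on the homology post above: its boundary-matrix recipe can be sketched for the hollow triangle in a few lines (my own minimal version, using matrix rank over the reals):

```python
import numpy as np

# Hollow triangle: vertices {0,1,2}, oriented edges 01, 02, 12, no filled face.
# Boundary matrix d1 maps edges to vertices (each column is "head minus tail").
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])

rank1 = np.linalg.matrix_rank(d1)
n_vertices, n_edges = d1.shape
b0 = n_vertices - rank1            # connected components
rank2 = 0                          # no triangles, so the boundary map d2 is zero
b1 = (n_edges - rank1) - rank2     # cycles that aren't boundaries
print(b0, b1)  # → 1 1: one component, one loop
```

Adding the filled face would contribute a rank-1 boundary map d2, killing the loop and giving b1 = 0, exactly as the post's hollow-vs-filled comparison says.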
But real markets have:\n- Fat tails (extreme events happen more often)\n- Skewness (crashes are sharper than rallies)\n- Sudden discontinuous moves\n\n**Merton's Solution (1976)**\n\nAdd a compound Poisson jump process:\n\ndS = μS dt + σS dW + S dJ\n\nWhere:\n- J_t = Σ(Yᵢ - 1) is the cumulative jump\n- N_t ~ Poisson(λ) counts jumps\n- Yᵢ are log-normal jump multipliers\n\n**What I Implemented**\n\nMonte Carlo simulation with:\n- S₀ = 100 (initial price)\n- λ = 0.75 jumps/year\n- μⱼ = -0.05 (negative jump bias)\n- σⱼ = 0.1 (jump volatility)\n- 1000 paths, daily steps for 1 year\n\n**Results**\n\n| Metric | Jump Diffusion | GBM |\n|--------|---------------|-----|\n| Kurtosis | Higher | ~0 (normal) |\n| Skewness | Negative | ~0 |\n| Tail risk | Captured | Underestimated |\n\n**Key Takeaway**\n\nIf you're doing risk management or option pricing, GBM underestimates tail risk. Jump Diffusion gives you those fat tails that match real market behavior.\n\n**View the notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/jump_diffusion_model.ipynb\n\n*Uses: numpy, scipy, matplotlib*\n\n---",3226"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/jump_diffusion_model.ipynb",3227"category": "general",3228"date": "2026-05-14",3229"time": "09:00"3230},3231{3232"source": "particle_in_a_box",3233"content_type": "notebook",3234"subreddit": "CoCalc",3235"title": "[OC] Visualizing the Particle in a Box - Quantum Mechanics with Python",3236"body": "I built a Python simulation of the particle in a box problem, one of the most fundamental models in quantum mechanics.\n\n**ELI5:** Imagine trapping an electron in a tiny box. In classical physics, the electron could have any energy. But quantum mechanics says no - the electron can only have specific energy values, like steps on a ladder. 
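Those ladder steps are simple to compute. A small sketch assuming a 1 nm box, which reproduces the ~0.38 eV ground state quoted here:

```python
import scipy.constants as const

L = 1e-9        # box width: 1 nm (assumed; chosen to match the energies in this post)
m = const.m_e   # electron mass

def energy_eV(n):
    # E_n = n^2 h^2 / (8 m L^2), converted from joules to eV
    return n**2 * const.h**2 / (8 * m * L**2) / const.e

levels = {n: energy_eV(n) for n in (1, 2, 5)}
```

The n² scaling means the gaps between adjacent levels widen as you climb the ladder.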
The smallest energy (ground state) isn't zero - the particle always has some \"jiggle\" even at minimum energy.\n\n**What I learned:**\n\nThe energy levels follow E_n = n²π²ℏ²/(2mL²), which means:\n- n=1 (ground state): 0.38 eV\n- n=2: 1.50 eV (4x ground state)\n- n=5: 9.40 eV (25x ground state)\n\nThe wave functions are sine waves: ψ_n(x) = √(2/L)·sin(nπx/L)\n\nEach higher state has one more node (zero crossing) inside the box. The probability density |ψ|² tells you where you're likely to find the particle.\n\n**Cool insight:** I verified that the wave functions are orthonormal - the integral ⟨ψ_m|ψ_n⟩ = 1 when m=n and 0 otherwise. This is why they form a complete basis in quantum mechanics.\n\n**Code highlights:**\n- Used scipy.constants for physical constants (ℏ, m_e)\n- Numerical integration with np.trapz for expectation values\n- Clean visualization with matplotlib showing wave functions, probability densities, and energy diagrams\n\n**View and run the notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/particle_in_a_box.ipynb\n\nThe visualization shows the wave functions offset by their energy levels - a standard way to see how higher energy states have more oscillations.\n\nThis model is the foundation for understanding quantum dots, molecular orbitals, and semiconductor physics!",3237"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/particle_in_a_box.ipynb",3238"category": "general",3239"date": "2026-05-14",3240"time": "09:00"3241},3242{3243"source": "risk_parity_portfolio",3244"content_type": "notebook",3245"subreddit": "CoCalc",3246"title": "Built a Risk Parity Portfolio Optimizer - Here's What I Learned About \"Equal Risk\"",3247"body": "I just implemented a Risk Parity portfolio optimization from scratch in Python, and wanted to share what I learned.\n\n**The Problem with Equal Weights**\n\nIf you split money equally across 4 assets (25% each), you're 
NOT splitting risk equally. Why? Volatility matters.\n\nExample from my code:\n- Equities: 18% volatility\n- Bonds: 5% volatility\n- Commodities: 22% volatility\n- Real Estate: 14% volatility\n\nWith equal capital weights, Equities contribute 36% of portfolio risk while Bonds contribute only 5%. That's not diversified - you're basically just holding Equities with some decoration.\n\n**The Risk Parity Solution**\n\nRisk Parity equalizes risk contributions, not capital. The math:\n\nTotal Risk Contribution for asset i = wᵢ × (Σw)ᵢ / σₚ\n\nWhere:\n- wᵢ = weight of asset i\n- Σ = covariance matrix\n- σₚ = portfolio volatility\n\nWe want TRCᵢ = TRCⱼ for all assets.\n\n**My Results**\n\nRisk Parity weights: Equities 15.4%, Bonds 51.1%, Commodities 12.4%, Real Estate 21.1%\n\nNow each asset contributes exactly 25% of portfolio risk. That's true diversification.\n\n**The Catch**\n\nRisk Parity overweights low-return assets (Bonds). Expected return drops from 5.5% (equal weight) to 4.5%. In practice, funds like Bridgewater use leverage to boost returns while keeping the balanced risk profile.\n\n**Code Highlights**\n\nUsed scipy.optimize.minimize with SLSQP method. The objective function is just the sum of squared differences in risk contributions.\n\nFull notebook with visualizations available here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/risk_parity_portfolio.ipynb\n\nHappy to answer questions about the implementation!",3248"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/risk_parity_portfolio.ipynb",3249"category": "general",3250"date": "2026-05-15",3251"time": "09:00"3252},3253{3254"source": "legendre_polynomials",3255"content_type": "notebook",3256"subreddit": "SageMath",3257"title": "Built a notebook exploring Legendre Polynomials - from theory to Gauss quadrature",3258"body": "I created an interactive Jupyter notebook diving deep into Legendre polynomials. 
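As a quick taste of where this ends up: NumPy ships Gauss-Legendre nodes and weights, so the headline quadrature property takes only a few lines to verify (a standalone sketch, not the notebook's code):

```python
import numpy as np

# n-point Gauss-Legendre quadrature is exact for polynomials up to degree 2n - 1.
# With n = 3 points, integrate x^4 over [-1, 1]; the exact answer is 2/5.
x, w = np.polynomial.legendre.leggauss(3)
approx = np.sum(w * x**4)
exact = 2 / 5

# The nodes are the roots of P_3(x) = (5x^3 - 3x)/2, namely 0 and plus/minus sqrt(3/5)
roots = np.sort(x)
```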
Here's what I learned:\n\n**What are Legendre polynomials?**\n\nThey're a family of orthogonal polynomials Pₙ(x) defined on [-1, 1]. \"Orthogonal\" means when you multiply two different ones together and integrate, you get zero - like perpendicular vectors but for functions.\n\n**Why should you care?**\n\n1. **Physics everywhere**: They appear in quantum mechanics, electrodynamics (multipole expansions), and anywhere you solve Laplace's equation in spherical coordinates\n\n2. **Gauss-Legendre quadrature**: The roots of these polynomials give you the optimal points for numerical integration. With just n points, you get exact results for polynomials up to degree 2n-1!\n\n**Key implementation insight:**\n\nDon't use the Rodrigues formula directly (involves high-order derivatives). Use the recurrence relation instead:\n\n(n+1)Pₙ₊₁(x) = (2n+1)xPₙ(x) - nPₙ₋₁(x)\n\nStarting with P₀(x) = 1 and P₁(x) = x, you can compute any Pₙ efficiently.\n\n**Cool properties I verified:**\n- Pₙ(1) = 1 for all n\n- Pₙ(-1) = (-1)ⁿ\n- They satisfy the differential equation: d/dx[(1-x²)dPₙ/dx] + n(n+1)Pₙ = 0\n\nThe notebook includes visualizations of the polynomials, orthogonality matrix, root distributions, and Gauss quadrature convergence.\n\n**View the full interactive notebook:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/legendre_polynomials.ipynb",3259"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/legendre_polynomials.ipynb",3260"category": "general",3261"date": "2026-05-15",3262"time": "09:00"3263},3264{3265"source": "bayesian_inference",3266"content_type": "notebook",3267"subreddit": "CoCalc",3268"title": "Bayesian Inference in Python: How Your Beliefs Update with Data",3269"body": "**What is Bayesian Inference?**\n\nUnlike frequentist statistics which gives you point estimates, Bayesian inference treats parameters as random variables with probability distributions. 
You start with a prior belief, observe data, and update to get a posterior.\n\n**The Core Equation (ELI5 version)**\n\nP(θ|D) = P(D|θ) × P(θ) / P(D)\n\nIn plain English: Your updated belief equals (how likely you'd see this data if θ were true) times (your prior belief), normalized.\n\n**What I Built**\n\nI simulated 100 coin flips from a biased coin (true probability = 0.7) and tested three different priors:\n\n1. **Uniform** - \"I have no idea, could be anything from 0 to 1\"\n2. **Weakly informative** - \"Probably around 0.5\"\n3. **Strong prior** - \"I'm pretty sure it's around 0.75\"\n\n**The Cool Result**\n\nAll three posteriors converged to approximately the same answer! The uniform prior gave a posterior mean of 0.687, the weak prior gave 0.673, and even the strong prior updated to 0.684.\n\n**Key Takeaways**\n\n- Credible intervals are intuitive: \"There's a 95% probability θ lies in [0.58, 0.79]\"\n- Conjugate priors (Beta-Binomial) give closed-form solutions\n- Bayes factors let you formally compare models\n- Grid approximation works when conjugacy isn't available\n\n**View the full notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/bayesian_inference.ipynb\n\nLibraries used: NumPy, SciPy, Matplotlib",3270"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/bayesian_inference.ipynb",3271"category": "general",3272"date": "2026-05-16",3273"time": "09:00"3274},3275{3276"source": "lie_algebras_basics",3277"content_type": "notebook",3278"subreddit": "SageMath",3279"title": "[Tutorial] Lie Algebras in Python: From Theory to Rotation Matrices with NumPy",3280"body": "Hey everyone!\n\nI built an interactive notebook exploring **Lie algebras** - the mathematical structures that describe continuous symmetries in physics. 
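To make that concrete right away: exponentiating an so(3) generator really does produce a rotation matrix. A small standalone check (assumes scipy is available for the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

# Generator of rotations about the z-axis in so(3) (skew-symmetric)
Lz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 0.0]])

theta = 0.7
R = expm(theta * Lz)  # exponential map: so(3) -> SO(3)

# A proper rotation is orthogonal with determinant +1
orthogonal = np.allclose(R @ R.T, np.eye(3))
expected = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,           0.0,            1.0]])
```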
Here's what I learned:\n\n**What's a Lie algebra?**\n\nThink of it as a vector space with a special operation called the \"bracket\" [A,B] = AB - BA (for matrices, this is the commutator). This bracket must satisfy:\n\n1. Bilinearity (linear in both arguments)\n2. Antisymmetry: [X,Y] = -[Y,X]\n3. Jacobi identity: [X,[Y,Z]] + [Y,[Z,X]] + [Z,[X,Y]] = 0\n\n**The Cool Part: so(3)**\n\nThe algebra so(3) consists of 3x3 skew-symmetric matrices. Its generators Lx, Ly, Lz satisfy:\n- [Lx, Ly] = Lz\n- [Ly, Lz] = Lx\n- [Lz, Lx] = Ly\n\nThese are infinitesimal rotations! Using the exponential map exp(theta * L), you get actual rotation matrices in SO(3).\n\n**Code highlights:**\n\n```python\ndef lie_bracket(A, B):\n return A @ B - B @ A\n\ndef verify_jacobi_identity(X, Y, Z):\n term1 = lie_bracket(X, lie_bracket(Y, Z))\n term2 = lie_bracket(Y, lie_bracket(Z, X))\n term3 = lie_bracket(Z, lie_bracket(X, Y))\n return np.allclose(term1 + term2 + term3, 0)\n```\n\n**Why it matters:**\n\n- so(3): Angular momentum in mechanics\n- su(2): Quantum spin (Pauli matrices!)\n- su(3): Quantum chromodynamics (the strong force)\n\nThe notebook also covers structure constants, the Killing form for testing semisimplicity, and visualizes the Baker-Campbell-Hausdorff formula.\n\n**View the full notebook here:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/lie_algebras_basics.ipynb\n\nHappy to answer questions!",3281"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/lie_algebras_basics.ipynb",3282"category": "general",3283"date": "2026-05-16",3284"time": "09:00"3285},3286{3287"source": "aharonov_bohm_effect",3288"content_type": "notebook",3289"subreddit": "CoCalc",3290"title": "[OC] I simulated the Aharonov-Bohm effect in Python - electrons \"feeling\" magnetic fields without touching them",3291"body": "I created a Python simulation demonstrating one of quantum mechanics' most mind-bending phenomena: 
the Aharonov-Bohm effect.\n\n**What is it?**\nIn classical physics, only the electric and magnetic fields (E and B) matter. The potentials (A and phi) are just mathematical conveniences. But in 1959, Aharonov and Bohm predicted that charged particles can be influenced by electromagnetic potentials even in regions where the actual fields are zero!\n\n**The setup:**\nImagine electrons in a double-slit experiment, but with a solenoid (basically a magnet with all its field confined inside) placed between the slits. Outside the solenoid, B=0 - there's literally no magnetic field. Yet the electrons' interference pattern shifts based on the magnetic flux inside the solenoid.\n\n**The key equation:**\nThe phase shift is: Delta_phi = 2pi * (Phi/Phi_0)\n\nWhere Phi_0 = h/e is the magnetic flux quantum (about 4.14 x 10^-15 Wb).\n\n**What I learned:**\n- At Phi = Phi_0/2, the entire pattern shifts by half a fringe (maxima become minima)\n- The central intensity oscillates as I(0) = cos²(pi * Phi/Phi_0)\n- This effect is topological - it only depends on the flux enclosed by the electron paths, not local field values\n- This is why electromagnetic potentials are considered \"real\" in quantum mechanics\n\nThe simulation uses numpy/matplotlib and models 50 keV electrons (typical transmission electron microscope energy).\n\n**Interactive notebook:** View and run it yourself:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/aharonov_bohm_effect.ipynb",3292"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/aharonov_bohm_effect.ipynb",3293"category": "general",3294"date": "2026-05-17",3295"time": "09:00"3296},3297{3298"source": "molecular_dynamics",3299"content_type": "notebook",3300"subreddit": "CoCalc",3301"title": "I built a molecular dynamics simulation from scratch - here's how it works",3302"body": "**What is Molecular Dynamics?**\n\nMD simulates atoms/molecules by 
numerically solving Newton's equations of motion. Think of it as a virtual chemistry set where you can watch particles bounce around and measure thermodynamic properties.\n\n**The Physics (ELI5 version)**\n\nImagine you have marbles that:\n- Push each other away when too close (like magnets repelling)\n- Gently pull toward each other at medium distance\n- Don't interact when far apart\n\nThat's the Lennard-Jones potential! The force looks like: F ∝ 24ε[2(σ/r)¹² - (σ/r)⁶]\n\n**How the Simulation Works**\n\n1. Place particles on a grid\n2. Give them random velocities (Maxwell-Boltzmann distribution)\n3. Calculate forces between all pairs\n4. Update positions and velocities using Velocity Verlet algorithm\n5. Repeat thousands of times\n\n**Why Velocity Verlet?**\n\nIt's \"symplectic\" - meaning it preserves phase space volume. Translation: energy stays conserved over long simulations. My 5000-step run had <0.01% energy drift!\n\n**Cool Results**\n\n- Started with particles in a lattice → ended with fluid-like disorder\n- Temperature fluctuates around target (as expected for NVE ensemble)\n- The interplay between kinetic and potential energy is mesmerizing\n\n**View the full notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/molecular_dynamics.ipynb\n\n**What I learned:** Even \"simple\" physics simulations involve careful numerical methods. The choice of integrator, time step, and boundary conditions all matter enormously.",3303"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/molecular_dynamics.ipynb",3304"category": "general",3305"date": "2026-05-17",3306"time": "09:00"3307},3308{3309"source": "lorentz_transformation",3310"content_type": "notebook",3311"subreddit": "CoCalc",3312"title": "I built a Python notebook visualizing the Lorentz transformation and relativistic effects",3313"body": "Hey everyone! 
I created an interactive Jupyter notebook exploring the Lorentz transformation - the mathematical heart of Einstein's special relativity.\n\n**What I learned:**\n\nThe Lorentz transformation relates space and time measurements between two observers moving relative to each other. The key quantity is the Lorentz factor:\n\nγ = 1/√(1 - v²/c²)\n\nThis simple formula explains some mind-bending physics:\n\n**Time Dilation:** Moving clocks run slower. If a spacecraft travels at 0.8c, for every year onboard, 1.67 years pass on Earth. The formula is Δt = γΔτ.\n\n**Length Contraction:** Moving objects appear shorter. A 100-meter rod moving at 0.9c appears only 43.6 meters long. The formula is L = L₀/γ.\n\n**Simultaneity is Relative:** Two events simultaneous in one reference frame are NOT simultaneous in another. This was the most counterintuitive result for me.\n\n**The Code:**\n\nUsed numpy and matplotlib to:\n- Plot the Lorentz factor vs velocity (it diverges as v approaches c)\n- Visualize time dilation and length contraction curves\n- Create Minkowski spacetime diagrams\n- Transform events between reference frames\n\nThe spacetime interval Δs² = c²Δt² - Δx² is invariant - confirmed numerically to machine precision.\n\n**View the full notebook here:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/lorentz_transformation.ipynb\n\nWould love feedback on the visualizations or suggestions for extending this to 4-momentum and relativistic dynamics!",3314"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/lorentz_transformation.ipynb",3315"category": "general",3316"date": "2026-05-18",3317"time": "09:00"3318},3319{3320"source": "quantum_tunneling",3321"content_type": "notebook",3322"subreddit": "CoCalc",3323"title": "I simulated quantum tunneling in Python - here's how particles pass through \"impossible\" barriers",3324"body": "**What is quantum tunneling?**\n\nIn classical physics, 
if you roll a ball toward a hill and it doesn't have enough energy to get over, it bounces back. Simple.\n\nIn quantum mechanics, particles don't play by these rules. Due to their wave-like nature, they have a non-zero probability of appearing on the other side of energy barriers even when they \"shouldn't\" have enough energy. This is quantum tunneling.\n\n**What I built:**\n\nA Python simulation that:\n1. Solves the Schrodinger equation for a particle hitting a rectangular potential barrier\n2. Calculates the exact transmission coefficient\n3. Visualizes the wave function showing exponential decay inside the barrier\n\n**Key parameters:**\n- Barrier height: V₀ = 5 eV\n- Barrier width: a = 0.5 nm\n- Particle: electron\n\n**Results:**\n\nAt E = 2.5 eV (half the barrier height), transmission is about 9%. At E = 4.0 eV, it jumps to 42%.\n\nThe formula for tunneling probability:\nT = 1/(1 + V₀²sinh²(κa)/(4E(V₀-E)))\n\nWhere κ = √(2m(V₀-E))/h-bar\n\n**Why this matters:**\n\nQuantum tunneling enables:\n- Nuclear fusion in stars (protons tunnel through Coulomb barriers)\n- Scanning tunneling microscopes (atomic-scale imaging)\n- Tunnel diodes and flash memory\n- Enzyme catalysis in biology\n\n**What I learned:**\n\nThe exponential dependence on barrier width is striking. Doubling the width from 0.5nm to 1.0nm drops transmission from ~15% to ~0.09%. 
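That width dependence falls straight out of the transmission formula above. A standalone sketch (the energy here is illustrative, so the exact percentages depend on the parameters chosen; the exponential sensitivity to width is the robust part):

```python
import numpy as np
from scipy.constants import hbar, m_e, e

def transmission(E_eV, V0_eV=5.0, a_nm=0.5):
    # Exact transmission through a rectangular barrier for E < V0:
    # T = 1 / (1 + V0^2 sinh^2(kappa a) / (4 E (V0 - E)))
    E, V0, a = E_eV * e, V0_eV * e, a_nm * 1e-9
    kappa = np.sqrt(2 * m_e * (V0 - E)) / hbar
    return 1.0 / (1.0 + V0**2 * np.sinh(kappa * a)**2 / (4 * E * (V0 - E)))

T_thin = transmission(4.0, a_nm=0.5)   # 0.5 nm barrier
T_thick = transmission(4.0, a_nm=1.0)  # doubled width: transmission collapses
```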
This sensitivity is what makes STM imaging possible.\n\n**View the full notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/quantum_tunneling.ipynb\n\nLibraries used: NumPy, SciPy, Matplotlib",3325"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/quantum_tunneling.ipynb",3326"category": "physics",3327"date": "2026-05-18",3328"time": "09:00"3329},3330{3331"source": "compound_interest_calculator",3332"content_type": "notebook",3333"subreddit": "CoCalc",3334"title": "Built a Compound Interest Calculator in Python - Here's What I Learned About Exponential Growth",3335"body": "I created a Jupyter notebook to explore compound interest mathematics, and wanted to share some insights that surprised me.\n\n**The Core Formula**\n\nFuture Value = P(1 + r/n)^(nt)\n\nWhere P is principal, r is annual rate, n is compounding frequency, and t is time.\n\n**Key Findings:**\n\n1. **Compounding frequency has diminishing returns** - Going from annual to monthly compounding at 5% increases your effective rate from 5% to 5.116%. But monthly to daily? Only 5.116% to 5.127%. The jump to continuous compounding (using e^(rt)) adds just another 0.001%.\n\n2. **The Rule of 72 actually works** - Divide 72 by your interest rate percentage to estimate doubling time. At 8%, money doubles in ~9 years. I tested this against the exact formula ln(2)/ln(1+r) and the error stays under 3% for reasonable rates.\n\n3. **Time beats everything** - The visualization of simple vs compound interest over 30 years is striking. At 7%, 10K grows to ~31K with simple interest but 76K+ with compound. 
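That comparison is easy to verify. A minimal sketch of the two formulas:

```python
def simple_interest(P, r, t):
    # Linear growth: interest is earned on the principal only
    return P * (1 + r * t)

def compound_interest(P, r, t, n=1):
    # Exponential growth: FV = P * (1 + r/n)**(n*t)
    return P * (1 + r / n) ** (n * t)

simple = simple_interest(10_000, 0.07, 30)      # about 31,000
compound = compound_interest(10_000, 0.07, 30)  # about 76,000
```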
That's the power of exponential growth.\n\n**Technical Implementation:**\n\n- Used numpy for calculations and matplotlib for visualizations\n- Created functions for discrete compounding, continuous compounding, EAR, and present value\n- Generated 4-panel figure comparing frequencies, rates, doubling times, and simple vs compound\n\n**View the full notebook with interactive code:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/compound_interest_calculator.ipynb\n\nThe visualizations really help internalize why Einstein (allegedly) called compound interest the \"eighth wonder of the world.\"",3336"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/compound_interest_calculator.ipynb",3337"category": "general",3338"date": "2026-05-19",3339"time": "09:00"3340},3341{3342"source": "caesar_cipher_implementation",3343"content_type": "notebook",3344"subreddit": "CoCalc",3345"title": "I implemented the Caesar cipher in Python with automatic cryptanalysis - here's how frequency analysis breaks ancient encryption",3346"body": "I've been exploring classical cryptography and just finished a complete Python implementation of the Caesar cipher, including a tool that automatically breaks it.\n\n**The Basics**\n\nThe Caesar cipher (named after Julius Caesar) shifts each letter by a fixed amount. If your key is 3:\n- A → D\n- B → E\n- HELLO → KHOOR\n\nMathematically: E(p) = (p + k) mod 26\n\n**Why It's Insecure**\n\nTwo major vulnerabilities:\n\n1. **Tiny key space** - Only 26 possible keys means brute force takes microseconds\n2. 
**Frequency preservation** - The statistical distribution of letters is preserved\n\n**The Cool Part: Automatic Cryptanalysis**\n\nI implemented a frequency analysis attack that:\n- Tries all 26 possible keys\n- Computes letter frequencies for each decryption attempt\n- Uses chi-squared test to compare against expected English frequencies\n- Returns the key with the best statistical match\n\nWith enough ciphertext (a paragraph or more), it reliably recovers the key without any human intervention.\n\n**What I Learned**\n\n- Modular arithmetic is fundamental to cryptography\n- Statistical attacks exploit patterns that encryption fails to hide\n- This is why modern ciphers (AES, etc.) are designed to be statistically indistinguishable from random noise\n\nThe notebook includes visualizations of the encryption mapping, letter frequency distributions, and chi-squared scores for all candidate keys.\n\nView and run the full notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/caesar_cipher_implementation.ipynb\n\nHappy to answer questions about the implementation or cryptanalysis techniques!\n\n---",3347"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/caesar_cipher_implementation.ipynb",3348"category": "general",3349"date": "2026-05-19",3350"time": "09:00"3351},3352{3353"source": "elastic_collision_simulation",3354"content_type": "notebook",3355"subreddit": "CoCalc",3356"title": "I built an elastic collision simulator in Python - here's what I learned about conservation laws",3357"body": "Hey everyone! I just finished building a 2D elastic collision simulator and wanted to share what I learned.\n\n**The Physics**\n\nAn elastic collision conserves both momentum and kinetic energy. 
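Here is a quick numerical check of that claim for the 1D case (a standalone sketch with illustrative masses and velocities):

```python
def elastic_1d(m1, v1, m2, v2):
    # Post-collision velocities for a 1D perfectly elastic collision
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0  # illustrative masses (kg) and velocities (m/s)
v1f, v2f = elastic_1d(m1, v1, m2, v2)

# Both conserved quantities, before and after
p_before = m1 * v1 + m2 * v2
p_after = m1 * v1f + m2 * v2f
ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
ke_after = 0.5 * m1 * v1f**2 + 0.5 * m2 * v2f**2
```

A nice special case to try: with equal masses, the two particles simply swap velocities.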
The key equations are:\n\n- Momentum: m₁v₁ᵢ + m₂v₂ᵢ = m₁v₁f + m₂v₂f\n- Kinetic Energy: ½m₁v₁² + ½m₂v₂² = constant\n\nFor 1D collisions, solving these gives:\n- v₁f = [(m₁-m₂)v₁ᵢ + 2m₂v₂ᵢ]/(m₁+m₂)\n- v₂f = [(m₂-m₁)v₂ᵢ + 2m₁v₁ᵢ]/(m₁+m₂)\n\n**The Cool Part**\n\nFor 2D collisions, you decompose velocities along the collision normal (line between centers) and tangent. The normal components follow the 1D formulas, while tangential components stay unchanged. This is why billiard balls behave the way they do!\n\n**Results**\n\nI simulated 6 particles with different masses (0.8 to 2.0 kg) bouncing around for 15 seconds. The results verified:\n- Kinetic energy stayed constant throughout\n- Both x and y momentum components were conserved\n- Coefficient of restitution e=1 (perfectly elastic)\n\n**What I Learned**\n\n1. Heavier particles transfer more momentum during collisions\n2. Equal-mass particles swap velocities along the collision normal\n3. Small numerical errors can accumulate, so use small time steps (dt=0.005s worked well)\n\nYou can view and run the full notebook here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/elastic_collision_simulation.ipynb\n\nHappy to answer questions about the implementation!",3358"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/elastic_collision_simulation.ipynb",3359"category": "general",3360"date": "2026-05-20",3361"time": "09:00"3362},3363{3364"source": "stratonovich_calculus",3365"content_type": "notebook",3366"subreddit": "CoCalc",3367"title": "I built a Python notebook comparing Itô vs Stratonovich stochastic calculus — here's what I learned",3368"body": "Hey everyone! 
I've been diving into stochastic differential equations (SDEs) and created an interactive notebook exploring the differences between Itô and Stratonovich calculus.\n\n**ELI5 version:** When you have a system with random noise (like a particle bouncing around), there are two ways to mathematically define how that noise integrates over time. They give DIFFERENT answers, and picking the wrong one introduces systematic errors.\n\n**The key difference:**\n- Itô integral evaluates the function at the START of each interval\n- Stratonovich integral evaluates at the MIDPOINT\n\nThis seems minor but has huge consequences:\n\n1. **Chain rule**: In Stratonovich calculus, the ordinary chain rule df(X) = f'(X)dX works! In Itô, you need a correction term (the famous Itô formula with the ½f''(X)b²dt term).\n\n2. **Physical meaning**: The Wong-Zakai theorem proves that if your \"white noise\" is actually the limit of smooth colored noise (which is true in real physics), the system converges to the Stratonovich solution.\n\n3. 
**Conversion formula**: You can convert between them:\n a_Itô(x) = a_Strat(x) + ½b(x)b'(x)\n\n**What I implemented:**\n- Euler-Maruyama scheme (for Itô SDEs) — strong order 0.5\n- Heun predictor-corrector scheme (for Stratonovich SDEs) — strong order 1.0\n- Monte Carlo verification with 10,000 paths showing the bias from using wrong interpretation\n\nThe notebook includes geometric Brownian motion and a nonlinear multiplicative noise example with α=0.5.\n\n**Interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/stratonovich_calculus.ipynb\n\nWould love to hear from anyone working on SDEs in physics or finance!",3369"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/stratonovich_calculus.ipynb",3370"category": "general",3371"date": "2026-05-20",3372"time": "09:00"3373},3374{3375"source": "bayesian_inference_coin_flip",3376"content_type": "notebook",3377"subreddit": "CoCalc",3378"title": "Implemented Bayesian Inference from Scratch to Estimate Coin Bias - Here's What I Learned",3379"body": "I built a Python notebook demonstrating Bayesian inference for the classic coin flip problem. Here's the ELI5 and what I learned.\n\n**The Problem:**\nYou have a coin with unknown probability θ of landing heads. You flip it n times, see k heads. What's θ?\n\n**The Bayesian Approach:**\n\nInstead of just calculating k/n, Bayesian inference gives you a full probability distribution over θ.\n\nThe formula: P(θ|Data) ∝ P(Data|θ) × P(θ)\n\nTranslation: Your updated belief = (how likely this data is if θ is true) × (your prior belief about θ)\n\n**Why Beta Distribution?**\n\nThe magic is using a Beta prior, which is \"conjugate\" to the binomial likelihood. This means:\n- Prior: Beta(α, β)\n- Posterior: Beta(α + heads, β + tails)\n\nThat's it! 
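In code, the whole update is two additions to the prior's parameters. A minimal sketch (the priors here are illustrative):

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
flips = rng.random(100) < 0.7          # 100 flips from a coin with true bias 0.7
k, n = int(flips.sum()), flips.size    # heads and total

# Conjugate update: Beta(a, b) prior -> Beta(a + heads, b + tails) posterior
uniform_post = beta(1 + k, 1 + (n - k))     # Beta(1, 1) prior: no prior knowledge
skeptical_post = beta(5 + k, 5 + (n - k))   # Beta(5, 5) prior: mildly believes the coin is fair

# 95% credible interval under the uniform prior
lo, hi = uniform_post.interval(0.95)
```

With 100 observations the two posterior means land close together, regardless of prior.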
No complex integrals needed.\n\n**The Experiment:**\n\nI simulated 100 flips from a coin with true bias θ = 0.7, then tested 4 different priors:\n1. Uniform (no prior knowledge)\n2. Skeptical (believes coin is fair)\n3. Strong prior at 0.5\n4. Prior favoring heads\n\n**Key Finding:**\n\nAfter 100 observations, ALL priors converged to approximately θ = 0.70 with narrow credible intervals. The data overwhelms the prior.\n\n**Takeaways:**\n- Bayesian inference quantifies uncertainty, not just point estimates\n- Conjugate priors make computation elegant\n- With enough data, prior choice matters less\n- You get credible intervals that actually mean \"95% probability θ is in this range\"\n\nView the full interactive notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/bayesian_inference_coin_flip.ipynb",3380"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/bayesian_inference_coin_flip.ipynb",3381"category": "general",3382"date": "2026-05-21",3383"time": "09:00"3384},3385{3386"source": "runge_kutta_fourth_order",3387"content_type": "notebook",3388"subreddit": "CoCalc",3389"title": "Implemented Runge-Kutta 4th Order from Scratch - Here's What I Learned About ODE Solvers",3390"body": "I built a Runge-Kutta Fourth Order (RK4) solver to understand how numerical ODE integration really works, and wanted to share some insights.\n\n**What is RK4?**\n\nWhen you have a differential equation dy/dt = f(t, y) and want to find y(t), you need to integrate. RK4 does this by computing four slope estimates per step:\n\n- k₁ = f(t, y) — slope at start\n- k₂ = f(t + h/2, y + h·k₁/2) — slope at midpoint using k₁\n- k₃ = f(t + h/2, y + h·k₂/2) — slope at midpoint using k₂\n- k₄ = f(t + h, y + h·k₃) — slope at end using k₃\n\nThen combine: y_new = y + (h/6)(k₁ + 2k₂ + 2k₃ + k₄)\n\n**Why the weird weighting?**\n\nThe (1, 2, 2, 1)/6 weighting follows Simpson's rule for integration. 
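The whole method fits in a few lines. A standalone sketch, tested on the same exponential-decay problem described in this post:

```python
import numpy as np

def rk4_step(f, t, y, h):
    # One RK4 step: four slope estimates, Simpson-style (1, 2, 2, 1)/6 weighting
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def solve(f, y0, t_end, h):
    t, y = 0.0, y0
    while t < t_end - 1e-12:
        y = rk4_step(f, t, y, h)
        t += h
    return y

f = lambda t, y: -0.5 * y        # exponential decay, dy/dt = -0.5 y
exact = np.exp(-0.5 * 2.0)       # true y(2) with y(0) = 1

err_h = abs(solve(f, 1.0, 2.0, 0.1) - exact)
err_h2 = abs(solve(f, 1.0, 2.0, 0.05) - exact)
ratio = err_h / err_h2           # roughly 16 for a fourth-order method
```

Halving h shrinks the error by about 2⁴ = 16, which is the fourth-order signature.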
It's not arbitrary - it's derived from Taylor series matching to achieve fourth-order accuracy.\n\n**Test Results**\n\n1. **Exponential decay** (dy/dt = -0.5y): Max error ~10⁻⁶ with h=0.5\n2. **Harmonic oscillator**: Converted the 2nd-order ODE to a system. Phase portrait was nearly perfect.\n3. **Convergence test**: Confirmed slope of 4.0 on log-log error plot\n\n**Key Takeaway**\n\nHalving your step size reduces error by 16x (2⁴). That's the power of fourth-order methods!\n\nFor those wanting to explore: you can view and run the full notebook here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/runge_kutta_fourth_order.ipynb",3391"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/runge_kutta_fourth_order.ipynb",3392"category": "general",3393"date": "2026-05-21",3394"time": "09:00"3395},3396{3397"source": "kuramoto_model_synchronization",3398"content_type": "notebook",3399"subreddit": "CoCalc",3400"title": "I simulated the Kuramoto model to understand how oscillators synchronize - here's what I learned",3401"body": "**The Question:** How do independent oscillators (like neurons, fireflies, or power generators) spontaneously synchronize?\n\n**The Model:**\n\nThe Kuramoto model describes N oscillators, each with its own natural frequency ωᵢ. They're coupled through sine interactions:\n\ndθᵢ/dt = ωᵢ + (K/N)∑ⱼsin(θⱼ - θᵢ)\n\nThe coupling term pulls each oscillator toward the others. When coupling strength K is weak, everyone does their own thing. But increase K past a critical value K_c, and order emerges!\n\n**Key Findings:**\n\n1. **Phase transition exists** - Below K_c, r ≈ 0 (no sync). Above K_c, synchronization emerges suddenly.\n\n2. **Partial sync is normal** - Even with strong coupling, oscillators with extreme frequencies can't lock to the group. You get r ≈ 0.8, not perfect r = 1.\n\n3. 
**Mean-field magic** - Oscillators don't \"see\" each other directly; they couple to the collective rhythm.\n\n**The Code:**\n\nUsed NumPy + SciPy's odeint for integration. The trick is computing pairwise phase differences efficiently with broadcasting.\n\n**Real applications:** neural synchronization, cardiac pacemakers, power grid stability, chemical oscillators.\n\n**Interactive Notebook:** You can view and run the full notebook here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/kuramoto_model_synchronization.ipynb\n\n---",3402"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/kuramoto_model_synchronization.ipynb",3403"category": "general",3404"date": "2026-05-22",3405"time": "09:00"3406},3407{3408"source": "nash_equilibrium_finding",3409"content_type": "notebook",3410"subreddit": "CoCalc",3411"title": "I built a Nash Equilibrium solver in Python - here's what I learned about game theory",3412"body": "I've been exploring game theory and decided to implement a Nash Equilibrium finder from scratch using NumPy. Here's my breakdown:\n\n**What is Nash Equilibrium?**\n\nIt's a stable state in a game where no player can improve their outcome by changing only their own strategy. Mathematically, if s* is the strategy profile, then for every player i:\n\nu_i(s*_i, s*_-i) >= u_i(s_i, s*_-i) for all strategies s_i\n\n**The Algorithm: Support Enumeration**\n\nThe algorithm works by:\n1. Listing all possible \"supports\" (sets of strategies with non-zero probability)\n2. For each support pair, solving the indifference equations\n3. 
Verifying the solution is valid (probabilities sum to 1, non-negative)\n\n**Results from Classic Games:**\n\n- **Prisoner's Dilemma:** Unique equilibrium at (Defect, Defect) with payoff (1, 1) - even though (Cooperate, Cooperate) gives (3, 3)!\n\n- **Matching Pennies:** Both players randomize 50-50, expected payoff is 0\n\n- **Battle of the Sexes:** Three equilibria - two pure (Opera, Opera) and (Football, Football), plus one mixed\n\n- **Rock-Paper-Scissors:** Uniform 1/3 mixture over all strategies\n\n**Key Insight:** The conflict between individual rationality (Nash Equilibrium) and collective welfare (Pareto optimality) explains many real-world coordination failures.\n\nCheck out the full notebook with visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/nash_equilibrium_finding.ipynb","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/nash_equilibrium_finding.ipynb","category": "general","date": "2026-05-22","time": "09:00"},{"source": "clifford_algebras","content_type": "notebook","subreddit": "SageMath","title": "I implemented Clifford Algebras from scratch in Python - here's why they're fascinating","body": "Clifford algebras (also called geometric algebras) are one of those mathematical structures that seem abstract until you realize they unify almost everything you know about rotations, complex numbers, and physics.\n\n**What I built:**\nA complete Python implementation of Cl(p,q) algebras supporting:\n- Arbitrary signatures (Euclidean, Minkowski, etc.)\n- Full geometric product with grade extraction\n- Rotors for n-dimensional rotations\n- Reflections through hyperplanes\n\n**The key insight:**\n\nThe geometric product combines the dot and wedge products:\n\nuv = u·v + u∧v\n\nFor orthonormal basis vectors, you get the fundamental relation eᵢeⱼ + eⱼeᵢ = 2ηᵢⱼ, meaning distinct basis vectors 
anticommute (eᵢeⱼ = -eⱼeᵢ).\n\n**Why this matters:**\n\n1. Complex numbers are just Cl(0,1) where one basis vector squares to -1\n2. Quaternions are Cl(0,2) - Hamilton's i,j,k fall out naturally\n3. 3D rotations use \"rotors\" R = exp(-Bθ/2) instead of matrices\n4. Spacetime physics uses Cl(1,3) for special relativity\n\n**The beautiful part:**\n\nRotations become: v' = R·v·R†\n\nThis formula works in ANY dimension, handles gimbal lock gracefully, and the rotor R = cos(θ/2) - B·sin(θ/2) directly encodes the rotation plane as a bivector B.\n\nI verified quaternion identities (i²=j²=k²=ijk=-1) emerge automatically from the Clifford product rules.\n\n**Interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/clifford_algebras.ipynb\n\nThe visualization shows rotation trajectories in 3D and the multiplication table of Cl(3,0).\n\nHas anyone else explored geometric algebra? I'm curious about applications in physics simulations or computer graphics.",3424"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/clifford_algebras.ipynb",3425"category": "general",3426"date": "2026-05-23",3427"time": "09:00"3428},3429{3430"source": "fokker_planck_equation",3431"content_type": "notebook",3432"subreddit": "CoCalc",3433"title": "I built a Fokker-Planck equation solver in Python - simulating how probability evolves in noisy systems",3434"body": "The Fokker-Planck equation (FPE) is one of those beautiful pieces of physics that connects random microscopic motion to smooth, predictable probability distributions. Developed by Fokker and Planck around 1914-1917, it describes how the probability density of a stochastic process changes over time.\n\n**What is it?**\n\nImagine dropping a particle into a fluid. It gets buffeted randomly by molecules (Brownian motion), but also feels forces pushing it around. 
The FPE tells you: given this randomness and these forces, where will the particle probably be at time t?\n\nThe equation: dP/dt = -d(AP)/dx + (1/2)d²(DP)/dx²\n\nWhere A(x) is the drift (deterministic force) and D(x) is the diffusion coefficient (noise strength).\n\n**What I simulated:**\n\n1. **Ornstein-Uhlenbeck process**: A particle in a harmonic trap with thermal noise. Starts displaced, relaxes to a Gaussian equilibrium. My numerical solution matched the analytical prediction with < 1% error.\n\n2. **Double-well potential**: A particle that can exist in two stable states separated by an energy barrier. Watched probability \"leak\" from one well to the other via noise-driven barrier crossing (Kramers escape).\n\n**The implementation:**\n\nUsed numpy and scipy with a Crank-Nicolson finite difference scheme (unconditionally stable, 2nd order accurate). The solver handles arbitrary drift and diffusion functions.\n\n**Key takeaways:**\n\n- The interplay between drift (pulling toward equilibrium) and diffusion (spreading out) determines the final distribution\n- Higher noise = faster barrier crossing but broader distributions\n- Probability is conserved (the FPE is a continuity equation)\n\nView and run the full notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/fokker_planck_equation.ipynb\n\nHappy to answer questions about the physics or implementation!",3435"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/fokker_planck_equation.ipynb",3436"category": "general",3437"date": "2026-05-23",3438"time": "09:00"3439},3440{3441"source": "options_pricing",3442"content_type": "notebook",3443"subreddit": "CoCalc",3444"title": "Options Pricing from First Principles: Black-Scholes and Monte Carlo Implementation in Python",3445"body": "I created an interactive notebook that implements options pricing from the ground up, covering both the analytical Black-Scholes model and 
Monte Carlo simulation.\n\n**What's covered:**\n\n1. **Black-Scholes Model** - The famous formula C = SN(d₁) - Ke⁻ʳᵀN(d₂), including the assumptions and when they break down\n\n2. **The Greeks** - Complete implementation of Delta, Gamma, Theta, Vega, and Rho with visualizations showing how they vary with stock price\n\n3. **Monte Carlo Simulation** - Simulating geometric Brownian motion paths and watching the estimate converge to the analytical solution as simulations increase (demonstrating the O(1/√n) convergence rate)\n\n4. **Implied Volatility** - Using Brent's method to back out the volatility from market prices, plus visualization of the volatility smile\n\n5. **Put-Call Parity** - Numerical verification that C - P = S - Ke⁻ʳᵀ\n\nThe notebook includes 4 publication-quality visualizations and compares Monte Carlo results at different simulation counts (1K to 1M) against Black-Scholes.\n\nPerfect for anyone learning quantitative finance, preparing for quant interviews, or just curious about how derivatives are priced.\n\nTry it: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/options_pricing/options_pricing.ipynb","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/options_pricing/options_pricing.ipynb","category": "general","date": "2026-05-24","time": "09:00"},{"source": "mean_variance_optimization","content_type": "notebook","subreddit": "CoCalc","title": "I implemented Markowitz's Mean-Variance Optimization in Python - here's what I learned about diversification","body": "**What is this?**\n\nMean-Variance Optimization (MVO) is the mathematical foundation of Modern Portfolio Theory. Harry Markowitz won the Nobel Prize for this work. 
The core idea: you can get better risk-adjusted returns by combining assets intelligently rather than just picking the \"best\" one.\n\n**The Math (simplified)**\n\nFor a portfolio with weights w:\n- Expected return: μₚ = sum of (weight × asset return)\n- Portfolio variance: σₚ² = wᵀΣw\n\nThe key insight is that variance isn't just a weighted average - it depends on how assets move together (covariance). When correlations are low, combining assets reduces total risk.\n\n**What I Built**\n\nI optimized a 5-asset portfolio (US stocks, international stocks, bonds, real estate, commodities) using scipy.optimize. The code finds:\n\n1. **Global Minimum Variance portfolio**: The least risky combination possible\n2. **Maximum Sharpe Ratio portfolio**: Best risk-adjusted return\n3. **Efficient Frontier**: All optimal portfolios between these two\n\n**Key Results**\n\n- The minimum variance portfolio had only 5.41% volatility - lower than bonds alone (6%)!\n- Optimal Sharpe ratio: 0.526 (8% return for 11.4% volatility)\n- Bonds get heavy allocation (70%+) in low-risk portfolios\n\n**What I Learned**\n\n1. Diversification genuinely works when correlations are imperfect\n2. Input estimates (returns, covariances) matter enormously - garbage in, garbage out\n3. 
The analytical solution matches numerical optimization perfectly\n\nFull notebook with code and visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/mean_variance_optimization.ipynb",3457"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/mean_variance_optimization.ipynb",3458"category": "general",3459"date": "2026-05-24",3460"time": "09:00"3461},3462{3463"source": "particle_swarm_optimization",3464"content_type": "notebook",3465"subreddit": "CoCalc",3466"title": "I implemented Particle Swarm Optimization from scratch - here's how swarm intelligence finds global optima",3467"body": "Hey everyone!\n\nI just completed an implementation of Particle Swarm Optimization (PSO) and wanted to share what I learned.\n\n**What is PSO?**\n\nImagine a flock of birds searching for food. Each bird:\n- Remembers the best spot it personally found\n- Communicates with others about the best spot anyone found\n- Balances exploring new areas vs. returning to known good spots\n\nPSO works the same way! Each \"particle\" is a candidate solution that moves through the search space.\n\n**The Core Equation**\n\nEach particle updates its velocity:\n\nv(t+1) = w·v(t) + c₁r₁(personal_best - x) + c₂r₂(global_best - x)\n\nThen updates position: x(t+1) = x(t) + v(t+1)\n\nWhere:\n- w = inertia weight (decreases from 0.9 to 0.4)\n- c₁ = cognitive coefficient (individual learning)\n- c₂ = social coefficient (swarm learning)\n- r₁, r₂ = random numbers between 0 and 1\n\n**The Challenge: Rastrigin Function**\n\nI tested it on the Rastrigin function - a brutal benchmark with 80+ local minima in just 2 dimensions:\n\nf(x) = 10n + Σ[xᵢ² - 10·cos(2πxᵢ)]\n\nThe global minimum is at x* = (0, 0) where f(x*) = 0.\n\n**Results**\n\nWith 40 particles and 100 iterations, PSO found the global minimum! The convergence plot shows rapid initial improvement followed by fine-tuning.\n\n**Key Takeaways**\n\n1. 
PSO doesn't need gradients - great for non-differentiable functions\n2. The exploration→exploitation transition is elegant (decreasing inertia)\n3. Parameter tuning matters: c₁ and c₂ balance individual vs. social learning\n\nThe full notebook with visualizations is available here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/particle_swarm_optimization.ipynb\n\nHappy to answer questions about the implementation!\n\n---",3468"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/particle_swarm_optimization.ipynb",3469"category": "general",3470"date": "2026-05-25",3471"time": "09:00"3472},3473{3474"source": "stopping_times",3475"content_type": "notebook",3476"subreddit": "CoCalc",3477"title": "I simulated 10,000 Brownian motion paths to explore stopping times — here's what I learned about infinite expectations",3478"body": "Today I explored one of probability theory's most elegant concepts: stopping times.\n\n**What's a stopping time?**\n\nImagine you're watching a stock price (modeled as random motion). 
A stopping time is any rule for when to stop watching that only uses information you've already seen — no crystal balls allowed!\n\nMathematically: τ is a stopping time if the event {τ ≤ t} belongs to the information available at time t.\n\n**The simulation**\n\nI ran 10,000 Brownian motion paths and tracked when each one first hit various target levels (±0.5, ±1.0, ±2.0).\n\n**The wild result**\n\nThe first passage time follows a Lévy distribution:\n\nf(t) = a/√(2πt³) · exp(-a²/(2t))\n\nHere's the paradox that blew my mind:\n- P(τ < ∞) = 1 → The particle WILL reach any level eventually\n- E[τ] = ∞ → But the average time to get there is INFINITE\n\nThis happens because the distribution has \"heavy tails\" — it decays like t^(-3/2), which isn't fast enough to make the mean finite.\n\n**Why does this matter?**\n\nStopping times appear in:\n- Option pricing (barrier options)\n- Quality control (when to stop a production line)\n- Biology (molecular motors, neural spike timing)\n- Any \"first passage\" problem\n\n**Code verification**\n\nKolmogorov-Smirnov tests confirmed the empirical distributions match theory. 
The simulation used numpy for path generation and scipy.stats for statistical tests.\n\nView the full notebook with code and visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/stopping_times.ipynb","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/stopping_times.ipynb","category": "general","date": "2026-05-25","time": "09:00"},{"source": "christoffel_symbols","content_type": "notebook","subreddit": "CoCalc","title": "Built a Christoffel symbol calculator in Python - visualizing curved space and geodesics","body": "I've been learning differential geometry and wanted to understand Christoffel symbols intuitively, so I built a numerical calculator from scratch.\n\n**What are Christoffel symbols?**\n\nThink of them as \"connection coefficients\" that tell you how basis vectors rotate as you move through a curved space. The formula is:\n\nΓ^λ_{μν} = (1/2) g^{λσ} (∂g_{σμ}/∂x^ν + ∂g_{σν}/∂x^μ - ∂g_{μν}/∂x^σ)\n\nwhere g is the metric tensor.\n\n**The implementation:**\n\n- Takes any metric function as input\n- Uses central finite differences for derivatives\n- Computes inverse metric with numpy.linalg.inv\n- Returns a 3D array of all symbol values\n\n**Test cases I ran:**\n\n1. **Polar coordinates** - flat space, but curvilinear coordinates give non-zero symbols. Γ^r_{θθ} = -r explains the \"centrifugal force.\"\n\n2. **Sphere surface** - intrinsically curved 2D manifold. The symbols explain why great circles oscillate in latitude.\n\n3. **Schwarzschild metric** - spacetime around a black hole. Symbols diverge at the event horizon (r = 2M).\n\n**Coolest part:** Using the geodesic equation with these symbols to trace great circles on a sphere. 
The curves naturally follow the shortest paths!\n\n**What I learned:**\n\n- Christoffel symbols aren't tensors (they depend on coordinate choice)\n- They're the bridge between metric and equations of motion\n- Numerical differentiation works well but watch out for coordinate singularities\n\nInteractive notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/christoffel_symbols.ipynb\n\nHas anyone implemented symbolic computation for these using SymPy? I'd love to compare accuracy.",3490"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/christoffel_symbols.ipynb",3491"category": "general",3492"date": "2026-05-26",3493"time": "09:00"3494},3495{3496"source": "spring_mass_system",3497"content_type": "notebook",3498"subreddit": "CoCalc",3499"title": "[OC] Visualizing Spring-Mass Damping: From Undamped Oscillations to Critical Damping",3500"body": "I created a Python simulation exploring how damping affects a spring-mass system, and the results really helped me understand why engineers choose specific damping values.\n\n**The Setup:**\n\nA mass on a spring follows: m·(d²x/dt²) + c·(dx/dt) + kx = 0\n\nThe damping ratio ζ = c/(2√(km)) determines everything:\n\n**What I Found:**\n\n1. **Undamped (ζ = 0):** The mass bounces forever with constant energy. Phase space shows a perfect ellipse.\n\n2. **Underdamped (ζ < 1):** This is most real-world systems. Oscillations decay exponentially. The phase trajectory spirals inward.\n\n3. **Critically damped (ζ = 1):** Returns to equilibrium FASTEST without overshooting. This is the sweet spot for shock absorbers and door closers.\n\n4. **Overdamped (ζ > 1):** No oscillation but slower return than critical damping. 
Useful when you absolutely can't have overshoot.\n\n**Real Applications:**\n\n- Car suspensions are slightly underdamped for comfort\n- Seismometers use critical damping for accuracy\n- Tall buildings use tuned mass dampers based on these principles\n\nThe natural frequency ω₀ = √(k/m) ≈ 3.16 rad/s for my parameters. For the underdamped case, the Q-factor came out to 3.16, meaning the system rings for about 3 cycles before significant decay.\n\n**Code used:** NumPy, SciPy (odeint), Matplotlib\n\nView the full interactive notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/spring_mass_system.ipynb",3501"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/spring_mass_system.ipynb",3502"category": "general",3503"date": "2026-05-26",3504"time": "09:00"3505},3506{3507"source": "generative_adversarial_network_gan",3508"content_type": "notebook",3509"subreddit": "CoCalc",3510"title": "Built a GAN from scratch in NumPy - Here's what I learned about the math behind generative AI",3511"body": "I implemented a Generative Adversarial Network using only NumPy (no PyTorch/TensorFlow) to really understand how they work. Here's the key takeaway:\n\n**The Core Idea (ELI5):**\nImagine a counterfeiter (Generator) trying to make fake money, and a detective (Discriminator) trying to catch them. They keep getting better at their jobs until the counterfeiter makes perfect fakes.\n\n**The Math:**\n- Generator G(z) maps random noise z to data\n- Discriminator D(x) outputs probability that x is real\n- They play a minimax game: min_G max_D V(D,G)\n- At equilibrium, D(x) = 1/2 (can't tell real from fake)\n\n**What I Learned:**\n1. Non-saturating loss is crucial - using max log(D(G(z))) instead of min log(1-D(G(z))) prevents vanishing gradients\n2. The optimal discriminator gives D*(x) = p_data(x) / (p_data(x) + p_g(x))\n3. 
GAN training minimizes Jensen-Shannon divergence between real and generated distributions\n4. Xavier initialization matters a lot for stable training\n\nMy simple GAN learned to generate samples from N(4.0, 1.25) and matched both mean and variance accurately.\n\n**View the full notebook with code and visualizations:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/generative_adversarial_network_gan.ipynb\n\nHappy to answer questions about the implementation!","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/generative_adversarial_network_gan.ipynb","category": "general","date": "2026-05-27","time": "09:00"},{"source": "spectral_methods_chebyshev_polynomials","content_type": "notebook","subreddit": "SageMath","title": "I built a spectral solver with Chebyshev polynomials - here's why they achieve exponential convergence","body": "Ever wondered why weather prediction models and turbulence simulations use spectral methods instead of finite differences? I just implemented one from scratch and finally understand the magic.\n\n**The Problem with Finite Differences**\n\nFinite difference methods approximate derivatives locally: f'(x) ≈ (f(x+h) - f(x-h))/(2h). The error is O(h²) or O(h⁴) - you need LOTS of points for high accuracy.\n\n**Enter Chebyshev Polynomials**\n\nChebyshev polynomials T_n(x) are defined by a beautiful identity: T_n(x) = cos(n*arccos(x)). They satisfy a simple recurrence:\n\n- T₀(x) = 1\n- T₁(x) = x\n- T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x)\n\n**Why They're Special**\n\n1. They're orthogonal with respect to weight w(x) = 1/sqrt(1-x²)\n2. The Chebyshev-Gauss-Lobatto points xⱼ = cos(πj/N) cluster near boundaries, avoiding the Runge phenomenon\n3. 
You can build a differentiation MATRIX - derivatives become matrix-vector multiplication!\n\n**The Results**\n\nI solved the BVP: -u'' + u = (2+4x)e^x with u(-1) = u(1) = 0.\n\n| N | Max Error |\n|---|-----------|\n| 8 | 2.5 × 10⁻⁵ |\n| 16 | 1.8 × 10⁻¹⁰ |\n| 32 | 2.2 × 10⁻¹⁴ |\n\nThat's **exponential convergence** - errors decay like e^(-cN) instead of h^p. For smooth problems, spectral methods are unbeatable.\n\n**Try it yourself:** You can view and run the full notebook here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/spectral_methods_chebyshev_polynomials.ipynb","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/spectral_methods_chebyshev_polynomials.ipynb","category": "general","date": "2026-05-27","time": "09:00"},{"source": "stochastic_differential_equations","content_type": "notebook","subreddit": "CoCalc","title": "I built a Python simulator for Stochastic Differential Equations (SDEs) - here's what I learned about modeling randomness","body": "Hey everyone!\n\nI've been learning about Stochastic Differential Equations (SDEs) and wanted to share my implementation and what I learned.\n\n**What are SDEs?**\n\nThink of regular differential equations (ODEs) as describing how things change over time deterministically. SDEs add randomness to the mix using something called a Wiener process (basically mathematical Brownian motion).\n\nThe general form is:\n\ndX = mu(X,t)dt + sigma(X,t)dW\n\nWhere:\n- mu is the \"drift\" (predictable trend)\n- sigma is the \"diffusion\" (how much randomness)\n- dW is a random increment from the Wiener process\n\n**What I implemented:**\n\n1. **Geometric Brownian Motion (GBM)** - Used in the Black-Scholes model for stock prices. The cool thing: the solution is lognormally distributed!\n\n2. **Ornstein-Uhlenbeck Process** - Models mean-reverting behavior. 
Great for interest rates or anything that tends to return to an average value.\n\n3. **Numerical Methods:**\n - Euler-Maruyama (simple but converges slowly at O(sqrt(dt)))\n - Milstein (adds a correction term, converges at O(dt))\n\n**Key takeaway:**\n\nMilstein's method adds this correction term: (1/2)*sigma*sigma'*[(dW)^2 - dt]\n\nThis accounts for Ito's lemma (the stochastic version of the chain rule) and dramatically improves accuracy.\n\n**View the interactive notebook:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/stochastic_differential_equations.ipynb\n\nThe code is clean NumPy/Matplotlib with no external dependencies. Happy to answer any questions!","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/stochastic_differential_equations.ipynb","category": "general","date": "2026-05-28","time": "09:00"},{"source": "enzyme_kinetics_michaelis_menten","content_type": "notebook","subreddit": "CoCalc","title": "[OC] Simulating Enzyme Kinetics with Python - The Michaelis-Menten Model","body": "**What I built:** A Python simulation of Michaelis-Menten enzyme kinetics with parameter estimation and visualization.\n\n**The Science (ELI5):**\n\nImagine enzymes as tiny machines that grab molecules (substrates) and convert them into products. The Michaelis-Menten equation describes how fast this happens:\n\nv = (Vmax · [S]) / (Km + [S])\n\nThink of it like a restaurant:\n- Vmax = maximum orders the kitchen can handle (when fully busy)\n- Km = how many customers it takes to get the kitchen half-busy\n- Low Km = efficient kitchen that gets busy quickly\n\n**What I learned:**\n\n1. The famous hyperbolic curve emerges from simple enzyme-substrate binding kinetics\n2. The Lineweaver-Burk plot (plotting 1/v vs 1/[S]) linearizes the equation, but nonlinear regression is actually more accurate\n3. 
Parameter sensitivity matters - Km directly reflects enzyme-substrate affinity\n\n**Code highlights:**\n- Used `scipy.optimize.curve_fit` for nonlinear least squares\n- Generated synthetic experimental data with Gaussian noise\n- Compared true parameters vs fitted values\n\n**View the interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/enzyme_kinetics_michaelis_menten.ipynb\n\nHappy to answer questions about the implementation!",3545"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/enzyme_kinetics_michaelis_menten.ipynb",3546"category": "general",3547"date": "2026-05-28",3548"time": "09:00"3549},3550{3551"source": "decision_tree_classifier",3552"content_type": "notebook",3553"subreddit": "CoCalc",3554"title": "I built a Decision Tree Classifier from scratch in Python - here's how information theory makes it work",3555"body": "I just finished implementing a decision tree classifier using only numpy, and I wanted to share what I learned about the math behind it.\n\n**The Core Idea (ELI5)**\n\nImagine you're playing 20 questions to guess an animal. You want each question to eliminate as many possibilities as possible. Decision trees work the same way - at each node, they ask the question that best separates the classes.\n\n**The Math**\n\nThe algorithm uses Gini impurity to measure how \"mixed\" a set of samples is:\n\nG(S) = 1 - Σpₖ²\n\nWhere pₖ is the proportion of samples in class k. A pure node (all one class) has G=0, while maximum impurity occurs when classes are equally mixed.\n\nAt each split, we find the feature and threshold that maximizes information gain - the reduction in impurity from parent to children.\n\n**What I Learned**\n\n1. Decision boundaries are always axis-aligned (perpendicular to feature axes) because each split tests only one feature\n2. Despite simple splits, combining them creates complex non-linear boundaries\n3. 
Controlling max_depth and min_samples_split is crucial to prevent overfitting\n\nThe implementation handles multi-class classification and achieves strong accuracy on synthetic clustered data.\n\n**View the full notebook with code and visualizations:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/decision_tree_classifier.ipynb\n\nHappy to answer questions about the implementation!","link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/decision_tree_classifier.ipynb","category": "general","date": "2026-05-29","time": "09:00"},{"source": "bell_inequality","content_type": "notebook","subreddit": "CoCalc","title": "I built a Python simulation that proves Einstein wrong about quantum mechanics","body": "**TL;DR:** Simulated Bell's inequality test showing quantum entanglement produces correlations impossible under classical physics.\n\n**The Background:**\n\nIn 1935, Einstein argued quantum mechanics was incomplete - surely \"hidden variables\" must explain those weird correlations without \"spooky action at a distance.\" In 1964, John Bell proved we could actually TEST this with a mathematical inequality.\n\n**What I Built:**\n\nA Python simulation comparing:\n1. **Quantum mechanical model** - Entangled singlet state |ψ⁻⟩ = (1/√2)(|↑↓⟩ - |↓↑⟩)\n2. **Local hidden variable model** - Classical particles with predetermined outcomes\n\n**The Test (CHSH Inequality):**\n\nMeasure the parameter S = E(a,b) - E(a,b') + E(a',b) + E(a',b')\n\n- Classical physics: |S| ≤ 2 (always)\n- Quantum mechanics: S = 2√2 ≈ 2.828 (at optimal angles)\n\n**Results:**\n\nWith 100,000 trials, the quantum simulation gives S ≈ 2.82 with p-value < 0.001. 
The classical bound is violated with overwhelming statistical significance.\n\n**What I Learned:**\n\n- The triangular correlation function for LHV models vs sinusoidal for QM\n- Why specific angles (0°, 45°, 90°, 135°) maximize the violation\n- Bootstrap methods for confidence intervals in physics simulations\n\nThis is the same test that won the 2022 Nobel Prize (Aspect, Clauser, Zeilinger)!\n\n**View the full notebook with code and visualizations:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/bell_inequality.ipynb",3567"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/bell_inequality.ipynb",3568"category": "general",3569"date": "2026-05-29",3570"time": "09:00"3571},3572{3573"source": "monte_carlo_tree_search",3574"content_type": "notebook",3575"subreddit": "CoCalc",3576"title": "I implemented Monte Carlo Tree Search from scratch - here's what I learned about exploration vs exploitation",3577"body": "Hey everyone! Just finished implementing MCTS (Monte Carlo Tree Search) - the algorithm that powered AlphaGo's historic victory. Wanted to share what I learned.\n\n**What is MCTS?**\n\nThink of it like this: imagine you're playing a game and want to find the best move. MCTS builds a tree of possible game states by:\n\n1. **Selection** - Pick promising branches to explore\n2. **Expansion** - Add new game states to the tree\n3. **Simulation** - Play random games from that state\n4. 
**Backpropagation** - Update the tree with what you learned\n\nThe magic is in the UCT formula that decides which branch to explore:\n\nUCTᵢ = X̄ᵢ + C√(ln(Nₚ)/Nᵢ)\n\nWhere X̄ᵢ is the average reward (exploitation) and the square root term encourages visiting less-explored nodes (exploration).\n\n**Key findings:**\n\n- The exploration constant C = √2 (≈1.414) works best, just as theory predicts\n- For Tic-Tac-Toe, 200-500 iterations is enough to never lose\n- Selecting the most-visited child (not highest value) gives more robust results\n- Convergence is logarithmic - you get diminishing returns after a point\n\n**Code:** Full Python implementation with numpy and matplotlib. Tested three experiments:\n1. Performance vs random player with different C values\n2. Convergence rate analysis (entropy of move selection)\n3. Visit distribution across the search tree\n\nThe algorithm converges to optimal play as iterations → ∞, with regret bounded by O(√(n·ln(n))).\n\nCheck out the interactive notebook here: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/monte_carlo_tree_search.ipynb\n\nHappy to answer questions about the implementation!",3578"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/monte_carlo_tree_search.ipynb",3579"category": "general",3580"date": "2026-05-30",3581"time": "09:00"3582},3583{3584"source": "inverse_iteration",3585"content_type": "notebook",3586"subreddit": "CoCalc",3587"title": "Inverse Iteration: Finding Eigenvectors When You Know the Eigenvalue [Python Implementation]",3588"body": "**The Problem**\n\nYou have a matrix and know (approximately) one of its eigenvalues. Now you need the corresponding eigenvector. How do you find it efficiently?\n\n**The Solution: Inverse Iteration**\n\nHere's the clever insight: if λ is an eigenvalue of A, then 1/(λ - μ) is an eigenvalue of (A - μI)⁻¹. 
When your shift μ is close to λ, that eigenvalue becomes huge compared to the others.\n\nSo we apply the power method to the inverse matrix:\n\n1. Start with random vector v\n2. Solve (A - μI)w = v (not explicitly inverting!)\n3. Normalize: v = w/‖w‖\n4. Repeat until convergence\n\n**Why It's Fast**\n\nEach iteration shrinks the error by the ratio |λ₁ - μ|/|λ₂ - μ|, where λ₁ is the eigenvalue nearest the shift μ and λ₂ is the next nearest. When μ is close to your target eigenvalue, this ratio is tiny, giving rapid convergence (cubic, once you upgrade to Rayleigh quotient iteration).\n\n**What I Learned**\n\n- LU decomposition is key for efficiency - factor once, solve many times\n- Rayleigh quotient iteration updates μ dynamically and converges even faster\n- The \"nearly singular\" matrix when μ ≈ λ isn't actually a problem - the solution direction is preserved\n\n**The Notebook**\n\nI built a Python implementation with:\n- Basic inverse iteration with LU factorization\n- Rayleigh quotient iteration for comparison\n- Convergence visualizations for different shift qualities\n- Demo on symmetric positive definite matrices\n\nView and run the notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/inverse_iteration.ipynb\n\nThis is fundamental stuff that powers production eigensolvers. 
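The factor-once, solve-many loop is only a few lines. Here's a minimal sketch (a toy 3×3 symmetric example of my own, not the notebook's code), using SciPy's LU routines:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Toy symmetric matrix; eigenvalues are 3 - sqrt(3), 3, 3 + sqrt(3)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
mu = 4.5                                   # rough guess near the largest eigenvalue

lu, piv = lu_factor(A - mu * np.eye(3))    # factor A - μI once...
v = np.ones(3) / np.sqrt(3.0)
for _ in range(50):
    w = lu_solve((lu, piv), v)             # ...then each iteration is a cheap solve
    v = w / np.linalg.norm(w)

lam = v @ A @ v                            # Rayleigh quotient: eigenvalue estimate
```

Checked against `np.linalg.eigh(A)`, this converges to the eigenpair nearest μ = 4.5 within a handful of iterations.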
Understanding it helps you appreciate what NumPy/SciPy are doing under the hood.\n\n---",3589"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/inverse_iteration.ipynb",3590"category": "general",3591"date": "2026-05-30",3592"time": "09:00"3593},3594{3595"source": "genus_of_surfaces",3596"content_type": "notebook",3597"subreddit": "CoCalc",3598"title": "Visualizing Surface Genus with Python - From Spheres to Triple Tori",3599"body": "**What is genus?**\n\nIn topology, the genus of a surface counts its \"handles\" or \"holes.\" Think of it like this:\n- A sphere has genus 0 (no holes)\n- A donut (torus) has genus 1 (one hole)\n- A figure-8 pretzel shape has genus 2\n\n**Why does it matter?**\n\nThe genus completely classifies orientable surfaces! Every compact, connected, orientable surface is topologically equivalent to a \"g-holed torus.\"\n\n**The cool math:**\n\nThe Euler characteristic χ relates to genus by: χ = 2 - 2g\n\nThe Gauss-Bonnet theorem then gives us: ∫∫ K dA = 2πχ = 4π(1-g)\n\nThis means:\n- Sphere: total curvature = 4π\n- Torus: total curvature = 0 (positive outside, negative inside, they cancel!)\n- Double torus: total curvature = -4π\n\n**The code:**\n\nI implemented parametric equations for spheres, tori, and multi-tori in Python using numpy and matplotlib. For the double torus, I used a figure-8 (lemniscate) path with a tube wrapped around it. 
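The Gauss-Bonnet check is easy to reproduce for the sphere, where the closed form K = 1/r² is available (a toy sketch of my own, not the notebook's code, which computes K from the parametrization):

```python
import numpy as np

r = 2.0                                   # any radius: total curvature is the same
n = 400
theta = np.linspace(0.0, np.pi, n)        # polar angle
phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dtheta = theta[1] - theta[0]
dphi = phi[1] - phi[0]

TH, _ = np.meshgrid(theta, phi, indexing="ij")
K = 1.0 / r**2                            # Gaussian curvature of a sphere, constant
dA = r**2 * np.sin(TH) * dtheta * dphi    # spherical-coordinate area element

total_curvature = float(np.sum(K * dA))   # Riemann sum of ∫∫ K dA
```

The r² cancels, so any radius gives ≈ 4π ≈ 12.566, matching genus 0.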
The Gaussian curvature is color-mapped onto each surface.\n\nI also numerically verified Gauss-Bonnet by integrating K over the surface - sphere gives ~12.566 (4π), torus gives ~0.\n\n**View the full notebook:**\n\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/genus_of_surfaces.ipynb\n\nHappy to answer questions about the math or the implementation!",3600"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/genus_of_surfaces.ipynb",3601"category": "general",3602"date": "2026-05-31",3603"time": "09:00"3604},3605{3606"source": "finite_element_method_heat_transfer",3607"content_type": "notebook",3608"subreddit": "CoCalc",3609"title": "[OC] Built a Finite Element Method solver for heat transfer in Python - here's how it works",3610"body": "I created a Python notebook implementing the Finite Element Method (FEM) for 1D steady-state heat conduction. FEM is the backbone of engineering simulation software, and building one from scratch really helps understand the fundamentals.\n\n**The Problem:**\n\nWe want to solve the heat equation: -k(d²T/dx²) = Q(x)\n\nWhere T is temperature, k is thermal conductivity, and Q is a heat source.\n\n**How FEM Works (ELI5):**\n\n1. Chop your rod into small pieces (elements)\n2. Assume temperature varies linearly within each piece\n3. Use calculus tricks (integration by parts) to convert the differential equation into matrix multiplication\n4. Solve the matrix equation KT = f for nodal temperatures\n\n**What I Learned:**\n\n- Linear elements give O(h²) convergence - halving element size reduces error by 4x\n- Boundary conditions come in two flavors:\n - Dirichlet: \"Temperature at this point is 100°C\"\n - Neumann: \"Heat flux at this boundary is 50 W/m²\"\n- Composite materials (like a rod with two different metals) work naturally by varying k per element\n\n**Key Results:**\n\nThe FEM solution matches analytical solutions for all test cases. 
The convergence plot confirms second-order accuracy.\n\n**Interactive Notebook:**\n\nYou can view and run this notebook directly in your browser:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/finite_element_method_heat_transfer.ipynb\n\nHappy to answer questions about the implementation!",3611"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/finite_element_method_heat_transfer.ipynb",3612"category": "general",3613"date": "2026-05-31",3614"time": "09:00"3615},3616{3617"source": "central_limit_theorem_demonstration",3618"content_type": "notebook",3619"subreddit": "CoCalc",3620"title": "I simulated the Central Limit Theorem 10,000 times - here's what I learned about why sample means are always normal",3621"body": "Ever wondered why statisticians are obsessed with normal distributions? The Central Limit Theorem (CLT) is why - and I built a Python simulation to see it in action.\n\n**The Core Idea (ELI5):**\n\nImagine you have a bag of weird-shaped dice. Each die has its own strange distribution of outcomes. Now, roll n dice and take the average. Do this thousands of times.\n\nHere's the magic: as n gets larger, those averages will form a bell curve - no matter how weird your original dice were!\n\n**The Math (Unicode, not LaTeX):**\n\nIf you have random samples X₁, X₂, ..., Xₙ with mean μ and standard deviation σ, then the standardized sample mean:\n\nZₙ = (X̄ₙ - μ)/(σ/√n)\n\nconverges to a standard normal N(0,1) as n → ∞.\n\n**What I Tested:**\n\n1. **Uniform(0, 1)** - flat distribution\n2. **Exponential(λ=1)** - heavily right-skewed\n3. 
**Poisson(λ=3)** - discrete, asymmetric\n\nFor each, I generated 10,000 sample means at n = 1, 5, 30, and 100.\n\n**Results:**\n\n- At n=1, you see the original distribution\n- At n=5, it's starting to look bell-shaped\n- At n=30 (the famous \"rule of 30\"), you get a solid normal approximation\n- At n=100, it's nearly indistinguishable from a perfect normal\n\nI used Kolmogorov-Smirnov tests to quantify this - by n=30, p-values were consistently above 0.05, so the tests no longer detected any deviation from normality.\n\n**Why This Matters:**\n\nThis is why t-tests work. This is why confidence intervals are valid. This is why you can make inferences about populations from samples - even when you don't know the underlying distribution!\n\n**View the full notebook with code and visualizations:**\n\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/central_limit_theorem_demonstration.ipynb\n\nLibraries used: numpy, scipy.stats, matplotlib",3622"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/central_limit_theorem_demonstration.ipynb",3623"category": "general",3624"date": "2026-06-01",3625"time": "09:00"3626},3627{3628"source": "fast_fourier_transform_implementation",3629"content_type": "notebook",3630"subreddit": "CoCalc",3631"title": "Implemented FFT from scratch - here's why it's O(N log N) instead of O(N²)",3632"body": "I just implemented the Fast Fourier Transform algorithm from scratch and wanted to share what I learned!\n\n**The Problem:**\nThe Discrete Fourier Transform converts time-domain signals to frequency-domain. The naive formula is:\n\nXₖ = Σₙ₌₀ᴺ⁻¹ xₙ · e^(-i2πkn/N)\n\nComputing this directly requires N multiplications for each of N outputs = O(N²). 
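In code, that naive transform is just a dense matrix-vector product (a toy sketch checked against numpy.fft, not the notebook's implementation):

```python
import numpy as np

def dft_naive(x):
    """O(N²) DFT straight from the formula Xₖ = Σₙ xₙ · e^(-i2πkn/N)."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    W = np.exp(-2j * np.pi * k * n / N)   # N×N matrix of complex exponentials
    return W @ x

rng = np.random.default_rng(0)
x = rng.normal(size=64)
X = dft_naive(x)                          # agrees with np.fft.fft(x)
```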
For N=2048, that's ~4 million operations.\n\n**The Clever Trick:**\nThe Cooley-Tukey algorithm splits the sum into even and odd indexed terms:\n- Eₖ = FFT of even samples\n- Oₖ = FFT of odd samples\n\nThen combines them with the \"butterfly operation\":\n- Xₖ = Eₖ + W · Oₖ\n- Xₖ₊N/₂ = Eₖ - W · Oₖ\n\nWhere W = e^(-i2πk/N) is called the \"twiddle factor.\"\n\nApply this recursively log₂(N) times → O(N log N)!\n\n**Results:**\nFor N=2048:\n- Naive: O(N²) ≈ 4M operations\n- FFT: O(N log N) ≈ 22K operations\n- Speedup: ~170x\n\nI tested on a signal with 50Hz + 120Hz components plus noise - the FFT perfectly identified both frequencies.\n\n**Code & Notebook:**\nFull implementation with visualizations: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/fast_fourier_transform_implementation.ipynb\n\n---",3633"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/fast_fourier_transform_implementation.ipynb",3634"category": "general",3635"date": "2026-06-01",3636"time": "09:00"3637},3638{3639"source": "hypergeometric_functions",3640"content_type": "notebook",3641"subreddit": "CoCalc",3642"title": "Exploring Hypergeometric Functions: The \"Universal\" Special Functions",3643"body": "I created a computational notebook exploring hypergeometric functions - one of the most important (and underappreciated) classes of special functions in mathematics.\n\n**What are hypergeometric functions?**\n\nThe Gaussian hypergeometric function ₂F₁(a,b;c;z) is defined by:\n\n₂F₁(a,b;c;z) = ∑ [(a)ₙ(b)ₙ / (c)ₙ] × zⁿ/n!\n\nwhere (q)ₙ is the Pochhammer symbol (rising factorial).\n\n**Why should you care?**\n\nHere's the cool part - tons of \"different\" functions are actually just hypergeometric functions with special parameters:\n\n| Function | Hypergeometric Form |\n|----------|-------------------|\n| (1-z)⁻ᵃ | ₂F₁(a,b;b;z) |\n| ln(1+z) | z × ₂F₁(1,1;2;-z) |\n| arcsin(z) | z × 
₂F₁(1/2,1/2;3/2;z²) |\n| Legendre Pₙ(z) | ₂F₁(-n,n+1;1;(1-z)/2) |\n\n**What I implemented:**\n\n1. Series computation from scratch using the Pochhammer symbol\n2. Verification against SciPy's hyp2f1 (errors < 10⁻¹⁰)\n3. Visualization of different parameter choices\n4. Verified Euler's and Pfaff's transformation formulas\n5. Applied to hydrogen atom radial wavefunctions\n\n**Key insight:** The hypergeometric differential equation z(1-z)w'' + [c-(a+b+1)z]w' - abw = 0 has exactly three regular singular points, making ₂F₁ the canonical solution for this class of ODEs.\n\nYou can view and run the full notebook here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/hypergeometric_functions.ipynb",3644"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/hypergeometric_functions.ipynb",3645"category": "general",3646"date": "2026-06-02",3647"time": "09:00"3648},3649{3650"source": "image_classification_cnn",3651"content_type": "notebook",3652"subreddit": "CoCalc",3653"title": "[Tutorial] Build a CNN from Scratch with NumPy - Understanding Convolution, Pooling, and Backpropagation",3654"body": "I created an educational Jupyter notebook that implements a Convolutional Neural Network from first principles. No PyTorch, no TensorFlow - just NumPy and math.\n\n**Why build from scratch?**\n\nMost tutorials use `model.fit()` and call it a day. But when your model doesn't work, you're stuck. Understanding the internals lets you:\n- Debug training issues (vanishing gradients, dead neurons)\n- Design better architectures for your specific problem\n- Optimize for deployment (quantization, pruning)\n- Actually understand what \"convolution\" means!\n\n**What the notebook covers:**\n\n1. **Convolution Operation** - Implementing the 2D convolution from scratch, with hand-crafted edge detection filters to visualize what convolution does\n\n2. 
**Building CNN Layers**:\n - `ConvLayer`: Learnable filters with Xavier initialization\n - `ReLU`: Simple non-linearity\n - `MaxPool2D`: Spatial downsampling for translation invariance\n - `Dense`: Fully connected classification head\n\n3. **Backpropagation Through Spatial Operations** - This is the hard part! The notebook shows how to:\n - Compute gradients of the convolution operation\n - Propagate gradients through pooling (sparse gradients)\n - Update filter weights with SGD\n\n4. **Training and Visualization**:\n - Mini-batch training loop\n - Loss/accuracy curves\n - Learned filter visualization\n - Feature map activations for different inputs\n - Confusion matrix analysis\n\n**Architecture:**\n```\nInput (8x8x1)\n -> Conv(4 filters, 3x3) -> ReLU -> (6x6x4)\n -> MaxPool(2x2) -> (3x3x4)\n -> Flatten -> (36)\n -> Dense(10) -> Softmax\n -> Output (10 classes)\n```\n\n**Key insight:** The convolution gradient is itself a convolution - correlate the input with the output gradient to get the filter gradient, and full-convolve the output gradient with the flipped filter to get the input gradient.\n\nThe notebook uses sklearn's digits dataset (8x8 grayscale images) for fast iteration, but the concepts apply to any image classification task.\n\nLink: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/image_classification_cnn/image_classification_cnn.ipynb\n\nHappy to answer questions!\n\nSuggested subreddits: r/MachineLearning, r/learnmachinelearning, r/deeplearning, r/computervision, r/Python",3655"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/image_classification_cnn/image_classification_cnn.ipynb",3656"category": "general",3657"date": "2026-06-02",3658"time": "09:00"3659},3660{3661"source": "cw_complexes",3662"content_type": "notebook",3663"subreddit": "CoCalc",3664"title": "[OC] I built a Python visualization of CW Complexes - topology's building blocks for constructing 
any space",3665"body": "Hey r/learnpython!\n\nI just finished a computational notebook exploring CW complexes from algebraic topology, and wanted to share what I learned.\n\n**ELI5: What's a CW Complex?**\n\nImagine building shapes with LEGO, but for mathematicians:\n- 0-cells = points (the dots)\n- 1-cells = line segments/loops (connect the dots)\n- 2-cells = filled-in disks (patch the holes)\n- n-cells = higher-dimensional balls\n\nYou build complex shapes by \"gluing\" these cells together. The way you glue (the \"attaching map\") determines what shape you get.\n\n**Cool Examples:**\n\nA sphere (S²) needs just ONE point and ONE disk - glue the disk's entire boundary to that single point. Boom, sphere!\n\nA torus (donut shape) needs:\n- 1 point\n- 2 loops (one around the hole, one around the tube)\n- 1 disk glued via the pattern aba⁻¹b⁻¹\n\n**The Euler Characteristic**\n\nThere's a beautiful formula: χ = c₀ - c₁ + c₂\n\n- Sphere: χ = 1 - 0 + 1 = 2\n- Torus: χ = 1 - 2 + 1 = 0\n- Klein bottle: χ = 1 - 2 + 1 = 0\n\nThis single number captures fundamental info about the shape!\n\n**The Code**\n\nUsed numpy and matplotlib to visualize:\n- 3D sphere and torus with their cell structure\n- Fundamental domain diagrams showing how edges get identified\n- Bar charts comparing cell counts across surfaces\n\nCheck out the full interactive notebook here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/cw_complexes.ipynb\n\nHappy to answer questions about the topology or the code!",3666"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/cw_complexes.ipynb",3667"category": "general",3668"date": "2026-06-03",3669"time": "09:00"3670},3671{3672"source": "simulated_annealing_optimization",3673"content_type": "notebook",3674"subreddit": "CoCalc",3675"title": "I implemented Simulated Annealing from scratch and finally understand why it works so well",3676"body": "Just finished building a 
Simulated Annealing optimizer and wanted to share what I learned.\n\n**The concept (ELI5):** Imagine you're blindfolded on a bumpy landscape trying to find the lowest valley. If you only ever walk downhill, you'll get stuck in the first dip you find. But what if you occasionally accept uphill steps? You might escape that small dip and find a much deeper valley elsewhere.\n\nThat's SA in a nutshell. The \"temperature\" controls how willing you are to accept worse solutions:\n- Hot = adventurous (explore everywhere)\n- Cold = conservative (only accept improvements)\n\n**The math:** When we find a worse solution with energy change ΔE > 0, we accept it with probability:\n\nP = exp(-ΔE/T)\n\nAs temperature T drops, this probability shrinks toward zero.\n\n**What I tested it on:** The Rastrigin function—a classic optimization nightmare with hundreds of local minima arranged like an egg carton. Global minimum is at (0, 0) with f = 0.\n\n**Results:** Starting from a random point, the algorithm successfully navigated the minefield and found f(x) ≈ 0.0. The convergence plot shows the classic SA pattern—wild fluctuations at high temps, then smooth descent as we cool.\n\n**Key takeaways:**\n1. Cooling rate matters: too fast and you get stuck, too slow and it takes forever\n2. Initial temperature should be high enough to accept most moves initially\n3. 
The acceptance rate naturally decays—that's the exploration→exploitation transition\n\nFull notebook with visualizations: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/simulated_annealing_optimization.ipynb\n\nHappy to answer questions about the implementation!\n\n---",3677"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/simulated_annealing_optimization.ipynb",3678"category": "general",3679"date": "2026-06-03",3680"time": "09:00"3681},3682{3683"source": "level_set_method_interface_tracking",3684"content_type": "notebook",3685"subreddit": "CoCalc",3686"title": "I built a Level Set Method simulation in Python - here's how it elegantly handles shape merging",3687"body": "**The Problem**\n\nTracking moving boundaries in simulations (like bubbles, flames, or tumor growth) is tricky. If you track individual points on the boundary, what happens when two shapes merge? Or one splits in two? You need complicated logic to handle these \"topological changes.\"\n\n**The Elegant Solution**\n\nThe Level Set Method (Osher & Sethian, 1988) takes a different approach. Instead of tracking points ON the boundary, you create a function φ(x,y) where:\n- φ > 0 outside the shape\n- φ < 0 inside the shape\n- φ = 0 IS the boundary\n\nThe interface just \"falls out\" as the zero contour.\n\n**Why This Is Brilliant**\n\nWhen two regions grow together, their φ functions naturally blend, and the zero contour automatically becomes one connected shape. No special merging code needed!\n\n**What I Implemented**\n\n1. **Vortex advection** - A circle getting stretched by a swirling velocity field\n2. **Curvature flow** - A star shape smoothing into a circle (high curvature regions move faster)\n3. **Merging** - Two circles expanding until they join into one blob\n4. 
**Reinitialization** - Restoring the \"signed distance\" property where |∇φ| = 1\n\n**Key Equations (simplified)**\n\n- Evolution: ∂φ/∂t + v·∇φ = 0 (advection by velocity v)\n- Normal speed: ∂φ/∂t + F|∇φ| = 0\n- Curvature: κ = ∇·(∇φ/|∇φ|)\n\n**Tech Stack**\n\nPure NumPy for numerics, Matplotlib for visualization. About 200 lines of code for the core methods.\n\nView the full interactive notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/level_set_method_interface_tracking.ipynb",3688"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/level_set_method_interface_tracking.ipynb",3689"category": "general",3690"date": "2026-06-04",3691"time": "09:00"3692},3693{3694"source": "hermite_polynomials",3695"content_type": "notebook",3696"subreddit": "SageMath",3697"title": "I built a visualization of Hermite polynomials and quantum harmonic oscillator wave functions in Python",3698"body": "Hey everyone! I just finished building an interactive notebook exploring Hermite polynomials—one of the most important polynomial families in physics and mathematics.\n\n**What are Hermite polynomials?**\n\nELI5: Imagine you're trying to describe how a quantum particle (like an electron) bounces back and forth in a potential well. Hermite polynomials are the mathematical \"shapes\" that describe these bouncing patterns. The more energy the particle has, the more complex the shape.\n\n**The first few polynomials:**\n- H₀(x) = 1\n- H₁(x) = 2x\n- H₂(x) = 4x² - 2\n- H₃(x) = 8x³ - 12x\n- H₄(x) = 16x⁴ - 48x² + 12\n- H₅(x) = 32x⁵ - 160x³ + 120x\n\n**Key properties I explored:**\n\n1. **Zeros:** Hₙ(x) has exactly n real zeros—this corresponds to the n nodes in the quantum wave function\n\n2. **Orthogonality:** Different Hermite polynomials are \"perpendicular\" to each other when integrated with the Gaussian weight e^(-x²). I verified this numerically and got a beautiful diagonal matrix!\n\n3. 
**Recurrence relation:** Hₙ₊₁(x) = 2xHₙ(x) - 2nHₙ₋₁(x). This makes computing higher-order polynomials efficient.\n\n4. **Quantum wave functions:** When you multiply Hₙ(x) by e^(-x²/2) and normalize, you get the wave function ψₙ(x) for the quantum harmonic oscillator.\n\n**What I learned:**\n\n- How to use scipy.special.hermite() for polynomial evaluation\n- Numerical integration with np.trapz to verify orthogonality\n- The connection between classical polynomials and quantum mechanics\n\nThe visualizations include the raw polynomials, normalized wave functions, probability densities, and an orthogonality verification matrix.\n\n**View the full notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/hermite_polynomials.ipynb\n\nHappy to answer any questions about the implementation!\n---",3699"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/hermite_polynomials.ipynb",3700"category": "general",3701"date": "2026-06-04",3702"time": "09:00"3703},3704{3705"source": "beta_function",3706"content_type": "notebook",3707"subreddit": "CoCalc",3708"title": "Exploring the Beta Function with Python - Verification of Properties and Visualizations",3709"body": "I created a notebook exploring the Beta function B(a,b), one of the fundamental special functions in mathematics.\n\n**What is the Beta function?**\n\nIt's defined by the integral:\nB(a,b) = ∫₀¹ t^(a-1)(1-t)^(b-1) dt\n\nThink of it as measuring the \"area under a curve\" where the curve shape depends on parameters a and b.\n\n**Why should you care?**\n\n1. It's the normalizing constant for the Beta distribution, which is everywhere in Bayesian statistics\n2. It connects to factorials: for integers, B(m,n) = (m-1)!(n-1)!/(m+n-1)!\n3. 
It has a beautiful relationship with the Gamma function: B(a,b) = Γ(a)Γ(b)/Γ(a+b)\n\n**Cool findings:**\n\n- B(1/2, 1/2) = π (exactly!)\n- It's perfectly symmetric: B(a,b) = B(b,a)\n- The regularized incomplete Beta function is the CDF of the Beta distribution\n\n**What I implemented:**\n\n- Three computation methods (direct integration, Gamma relation, SciPy)\n- Numerical verification of all key properties\n- Visualizations including 3D surface plots and Beta distribution PDFs\n\nThe notebook uses NumPy, SciPy, and Matplotlib. All three computation methods agree to 8+ decimal places.\n\n**View the full interactive notebook here:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/beta_function.ipynb",3710"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/beta_function.ipynb",3711"category": "general",3712"date": "2026-06-05",3713"time": "09:00"3714},3715{3716"source": "delaunay_triangulation",3717"content_type": "notebook",3718"subreddit": "CoCalc",3719"title": "Implementing and Verifying Delaunay Triangulation in Python - Understanding the Empty Circumcircle Property",3720"body": "Hey everyone! I just worked through Delaunay triangulation and wanted to share what I learned.\n\n**ELI5: What is Delaunay Triangulation?**\n\nImagine you have a bunch of dots on paper and want to connect them into triangles. There are many ways to do this, but Delaunay triangulation is special: it guarantees that if you draw a circle through the three corners of ANY triangle, no other dot will be inside that circle.\n\n**Why does this matter?**\n\nThis \"empty circumcircle\" property means the triangles are as \"fat\" as possible - no super skinny triangles that cause problems in simulations. It maximizes the minimum angle across all triangles.\n\n**Cool fact:** Delaunay triangulation is the dual graph of Voronoi diagrams! 
Each Delaunay edge connects two points whose Voronoi cells share an edge.\n\n**The Math (simplified)**\n\nFor a triangle with vertices (x₁,y₁), (x₂,y₂), (x₃,y₃), the circumcenter is found using determinants. To test if point d is inside the circumcircle, we compute:\n\nInCircle(a,b,c,d) > 0 means d is inside\nInCircle(a,b,c,d) < 0 means d is outside\n\n**Implementation**\n\nUsing SciPy's `Delaunay` class (O(n log n) Quickhull algorithm), I:\n1. Generated 20 random points\n2. Computed the triangulation\n3. Verified EVERY triangle satisfies the empty circumcircle property\n4. Visualized the Delaunay-Voronoi duality\n\n**Triangle quality stats from my run:**\n- Minimum angle: ~20-30°\n- Maximum angle: ~120-130°\n- Mean angle: 60° (as expected for triangles!)\n\n**Applications:**\n- Finite element mesh generation\n- Terrain modeling (GIS)\n- Pathfinding/navigation meshes\n- Surface reconstruction from point clouds\n\n**View the full notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/delaunay_triangulation.ipynb\n\nHappy to answer questions about the implementation!",3721"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/delaunay_triangulation.ipynb",3722"category": "general",3723"date": "2026-06-05",3724"time": "09:00"3725},3726{3727"source": "advection_equation_transport",3728"content_type": "notebook",3729"subreddit": "CoCalc",3730"title": "I built a Python simulation comparing numerical schemes for wave transport (advection equation) - here's what I learned about diffusion vs dispersion",3731"body": "Hey r/learnpython!\n\nI've been learning about partial differential equations and wanted to share a simulation I built that reveals something really interesting about numerical methods.\n\n**The Problem: How do things move?**\n\nThe advection equation describes how a quantity (like dye in water, smoke in air, or temperature) gets carried along by a flow:\n\n∂u/∂t + 
c·∂u/∂x = 0\n\nThe beautiful thing is this has an exact solution: whatever shape you start with just slides along at speed c without changing.\n\n**The Challenge: Computers can't do calculus directly**\n\nSo we approximate derivatives using finite differences. But here's where it gets interesting - different approximations have wildly different behavior!\n\n**What I implemented:**\n\n1. **Upwind scheme** (1st order): Uses the point \"upwind\" from where information flows\n2. **Lax-Wendroff scheme** (2nd order): More accurate but more complex\n\n**The surprising result:**\n\n- Upwind scheme causes **diffusion** - peaks get shorter and spread out (like adding blur)\n- Lax-Wendroff causes **dispersion** - creates oscillations near sharp edges (like ringing artifacts)\n\nThis explains why weather models, game physics, and CFD codes use sophisticated \"high-resolution\" schemes that combine the best of both approaches!\n\n**Technical details:**\n- Grid: 100 points, 100 time steps\n- Verified convergence rates match theory: O(Δx) for upwind, O(Δx²) for Lax-Wendroff\n- CFL stability condition: ν = cΔt/Δx ≤ 1\n\nView the full interactive notebook with code and visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/advection_equation_transport.ipynb\n\nHappy to answer questions about the implementation!\n\n---",3732"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/advection_equation_transport.ipynb",3733"category": "general",3734"date": "2026-06-06",3735"time": "09:00"3736},3737{3738"source": "k_nearest_neighbors",3739"content_type": "notebook",3740"subreddit": "CoCalc",3741"title": "I built a K-Nearest Neighbors classifier from scratch - here's what I learned about the bias-variance tradeoff",3742"body": "I wanted to deeply understand KNN, so I implemented it from scratch instead of using sklearn.\n\n**The Core Idea (ELI5):**\n\nImagine you're a new kid at school and 
want to know which lunch table to sit at. You look at the k kids sitting nearest to you and join whichever group has the most people. That's literally KNN!\n\n**The Math:**\n\nKNN measures \"nearness\" using Euclidean distance:\n\nd(x,y) = √Σ(xᵢ - yᵢ)²\n\nFor each new point, find the k closest training points and let them vote. The class with the most votes wins.\n\n**What I Learned:**\n\n1. **k=1 is deceptively good on training data** - 100% accuracy because each point is its own nearest neighbor. But it overfits terribly.\n\n2. **The bias-variance tradeoff is real** - Small k = low bias, high variance (wiggly boundaries). Large k = high bias, low variance (smooth boundaries).\n\n3. **Optimal k exists** - For my dataset, k=5 gave the best test accuracy at 98.3%.\n\n4. **It's a \"lazy learner\"** - O(1) training (just store data) but O(nd) prediction. For big datasets, you'd want KD-trees.\n\n**Code highlights:**\n- Used numpy for vectorized distance calculations\n- Implemented both standard and weighted voting (inverse distance weights)\n- Visualized decision boundaries for k=1, 5, 15\n\nCheck out the full notebook with interactive code and visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/k_nearest_neighbors.ipynb\n\nHappy to answer questions about the implementation!\n\n---",3743"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/k_nearest_neighbors.ipynb",3744"category": "general",3745"date": "2026-06-06",3746"time": "09:00"3747},3748{3749"source": "geodesic_deviation",3750"content_type": "notebook",3751"subreddit": "CoCalc",3752"title": "I simulated geodesic deviation (tidal forces) in Schwarzschild spacetime with Python - here's what I learned",3753"body": "Geodesic deviation is one of the most elegant concepts in general relativity. 
It answers a simple question: if two particles are falling freely side by side, do they stay together?\n\nIn flat spacetime, yes. But in curved spacetime, the answer is the **geodesic deviation equation**:\n\nD²ξ/dτ² = R·u·u·ξ\n\nWhere ξ is the separation vector between geodesics, u is the 4-velocity, and R is the Riemann curvature tensor.\n\n**What this means physically:**\n\nThe Riemann tensor - this abstract 4-index object - directly measures tidal gravitational forces. For a Schwarzschild black hole:\n\n- Radial direction: +2M/r³ (stretching toward the hole)\n- Transverse direction: -M/r³ (compression perpendicular to infall)\n\nThis is \"spaghettification\" - you get stretched long and squeezed thin.\n\n**What I built:**\n\nA Python simulation using scipy's odeint to numerically integrate the deviation equations for a radially infalling observer. The visualization shows:\n\n1. Deviation vectors growing/shrinking over proper time\n2. The infall trajectory approaching r = 2M\n3. A sphere deforming into an ellipsoid\n4. Tidal tensor components vs radius (that 1/r³ scaling is brutal)\n\n**Key insight:** This is exactly what LIGO/VIRGO measure. Gravitational waves cause geodesic deviation between their test masses. The detector is literally a geodesic deviation measurement device.\n\n**View the full notebook with code and plots:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/geodesic_deviation.ipynb",3754"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/geodesic_deviation.ipynb",3755"category": "general",3756"date": "2026-06-07",3757"time": "09:00"3758},3759{3760"source": "lyapunov_exponent_calculation",3761"content_type": "notebook",3762"subreddit": "CoCalc",3763"title": "Calculating the Lyapunov Exponent: How to Quantify Chaos in Python",3764"body": "**ELI5: What's a Lyapunov exponent?**\n\nImagine you're tracking two butterflies that start almost at the same spot. 
In a calm room, they'd stay close together. But in a chaotic system (like weather), they quickly end up in completely different places. The Lyapunov exponent (λ) measures HOW FAST this separation happens.\n\n- λ > 0: Chaos! Trajectories diverge exponentially\n- λ = 0: Marginal - neither growing nor shrinking\n- λ < 0: Stable - trajectories converge\n\n**The Lorenz System**\n\nI computed λ for the famous Lorenz equations (the \"butterfly effect\" system):\n\ndx/dt = σ(y - x)\ndy/dt = x(ρ - z) - y\ndz/dt = xy - βz\n\nWith standard parameters (σ=10, ρ=28, β=8/3), I got λ ≈ 0.91, matching the literature value of 0.9056.\n\n**The Algorithm**\n\nThe trick is to evolve a tiny perturbation vector alongside your main trajectory:\n\n1. Start with a random unit vector δ\n2. Integrate both the system and dδ/dt = J·δ (where J is the Jacobian)\n3. Every so often, record ‖δ‖ and renormalize to unit length\n4. λ = (1/T) × sum of all ln‖δ‖\n\nThis prevents numerical overflow from the exponential growth.\n\n**What I Learned**\n\n- Predictability horizon ≈ (1/λ) × ln(tolerance/error)\n- Even with 10⁻⁶ accuracy, Lorenz predictions fail after ~15 time units\n- Chaos kicks in around ρ ≈ 24.7 (I did a parameter scan!)\n\nThe code uses NumPy and SciPy's odeint. 
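The four-step algorithm above can be sketched directly in NumPy — a minimal version of the Benettin method. Step size, run length, and the RK4 helper are illustrative choices, not necessarily the notebook's exact settings:

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(s):
    x, y, z = s
    return np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])

def jacobian(s):
    x, y, z = s
    return np.array([[-SIGMA, SIGMA, 0.0],
                     [RHO - z, -1.0, -x],
                     [y, x, -BETA]])

def rk4(f, s, dt):
    k1 = f(s); k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def max_lyapunov(dt=0.01, n_steps=100_000, renorm_every=10, seed=0):
    rng = np.random.default_rng(seed)
    s = np.array([1.0, 1.0, 1.0])
    for _ in range(5_000):              # discard transient: start on the attractor
        s = rk4(lorenz, s, dt)
    d = rng.normal(size=3)
    d /= np.linalg.norm(d)              # step 1: random unit perturbation
    log_growth = 0.0
    for i in range(1, n_steps + 1):
        J = jacobian(s)
        d = rk4(lambda v: J @ v, d, dt)  # step 2: evolve d(delta)/dt = J·delta (J frozen per step)
        s = rk4(lorenz, s, dt)
        if i % renorm_every == 0:        # step 3: record the norm, renormalize
            n = np.linalg.norm(d)
            log_growth += np.log(n)
            d /= n
    return log_growth / (n_steps * dt)   # step 4: lambda = (1/T) * sum ln||delta||

lam = max_lyapunov()
print(f"lambda ~ {lam:.2f}")  # should land near 0.9
```

Lengthening the run or shrinking `dt` tightens the estimate toward the literature value.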
Pretty straightforward once you understand the math.\n\n**View the full notebook with code and visualizations:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/lyapunov_exponent_calculation.ipynb",3765"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/lyapunov_exponent_calculation.ipynb",3766"category": "general",3767"date": "2026-06-07",3768"time": "09:00"3769},3770{3771"source": "quantum_error_correction_codes",3772"content_type": "notebook",3773"subreddit": "CoCalc",3774"title": "I simulated quantum error correction codes in Python - here's what I learned about protecting quantum information",3775"body": "**The Problem**\n\nClassical computers handle errors easily - just copy your bits and take a majority vote. But quantum mechanics has the \"no-cloning theorem\" that forbids copying quantum states. So how do quantum computers deal with errors?\n\n**The Solution: Quantum Error Correction**\n\nInstead of copying, we *encode* one logical qubit into multiple physical qubits using entanglement.\n\nFor the 3-qubit bit-flip code:\n- |0⟩ becomes |000⟩\n- |1⟩ becomes |111⟩\n- A superposition α|0⟩ + β|1⟩ becomes α|000⟩ + β|111⟩\n\n**The Clever Part: Syndrome Measurement**\n\nWe don't measure the qubits directly (that would destroy the superposition). Instead, we measure \"stabilizers\" - operators like Z₁Z₂ that tell us if adjacent qubits AGREE, without revealing what they actually are.\n\nThe syndrome (s₁, s₂) uniquely identifies which qubit flipped:\n- (0,0) = no error\n- (1,0) = qubit 1 flipped\n- (1,1) = qubit 2 flipped\n- (0,1) = qubit 3 flipped\n\n**Results from My Simulation**\n\nI ran Monte Carlo simulations comparing coded vs uncoded qubits:\n\n- Uncoded error rate: p (linear)\n- Coded error rate: ≈ 3p² (quadratic!)\n\nBelow the threshold p < 0.5, the code ALWAYS helps. At p = 0.1, that's a 3x improvement.
At p = 0.01, it's ~33x better (the improvement factor is 1/(3p))!\n\n**The Shor Code**\n\nThe 3-qubit code only handles bit-flips. The 9-qubit Shor code concatenates bit-flip AND phase-flip protection, correcting any single-qubit error. It's a [[9,1,3]] code: 9 physical qubits, 1 logical qubit, distance 3.\n\n**Why This Matters**\n\nThe threshold theorem proves that if hardware error rates stay below ~1%, we can do arbitrarily long quantum computations by using error correction. This is the foundation of fault-tolerant quantum computing.\n\nView the full interactive notebook with code and visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/quantum_error_correction_codes.ipynb",3776"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/quantum_error_correction_codes.ipynb",3777"category": "physics",3778"date": "2026-06-08",3779"time": "09:00"3780},3781{3782"source": "time_dependent_schrodinger_animation",3783"content_type": "notebook",3784"subreddit": "CoCalc",3785"title": "I animated quantum tunneling by solving the time-dependent Schrödinger equation — here's what I learned",3786"body": "**The Question**\n\nWhat happens when a quantum particle hits a wall it doesn't have enough energy to climb over?\n\nClassical physics has a simple answer: it bounces back. If you throw a ball at a wall and it doesn't have enough energy to go over, it reflects. Every time.
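The quadratic scaling quoted above is easy to check with a purely classical Monte Carlo of independent bit-flips — for this code, majority-vote decoding is equivalent to the syndrome table. Trial counts here are illustrative:

```python
import numpy as np

def logical_error_rate(p, n_trials=200_000, seed=1):
    """Fraction of trials where majority-vote decoding of the 3-qubit code fails.

    Decoding fails exactly when 2 or 3 qubits flip:
    P_L = 3p^2(1-p) + p^3 ~ 3p^2 for small p.
    """
    rng = np.random.default_rng(seed)
    flips = rng.random((n_trials, 3)) < p   # True = that physical qubit flipped
    return np.mean(flips.sum(axis=1) >= 2)  # majority of the block corrupted

p = 0.1
sim = logical_error_rate(p)
exact = 3 * p**2 * (1 - p) + p**3
print(sim, exact)   # both near 0.028, well below the uncoded rate 0.1
```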
No exceptions.\n\nQuantum mechanics disagrees.\n\n**The Simulation**\n\nI wrote a Python simulation using the split-operator method (also called split-step Fourier method) to solve:\n\niℏ(∂Ψ/∂t) = ĤΨ\n\nwhere Ĥ = −(ℏ²/2m)(∂²/∂x²) + V(x)\n\nThe initial condition is a Gaussian wavepacket — think of it as a localized \"blob\" of probability moving to the right with momentum p₀ = ℏk₀.\n\nThe barrier is a rectangular potential with height V₀ greater than the particle's kinetic energy E = ℏ²k₀²/2m.\n\n**What Happens**\n\nThe wavepacket hits the barrier and *splits*. Part of it reflects back (as classical physics predicts), but part of it *tunnels through* the barrier and appears on the other side.\n\nIn momentum space, you can see this clearly: the initial distribution peaked at +k₀ splits into two peaks at +k₀ (transmitted) and −k₀ (reflected).\n\n**Why This Matters**\n\nQuantum tunneling isn't just a curiosity — it's everywhere:\n- Alpha decay in radioactive nuclei\n- Electron tunneling in flash memory and SSDs\n- Scanning tunneling microscopes (STM)\n- Nuclear fusion in stars\n\n**The Math**\n\nThe split-operator method works by decomposing the time evolution:\n\nexp(−iĤΔt/ℏ) ≈ exp(−iVΔt/2ℏ) · exp(−iT̂Δt/ℏ) · exp(−iVΔt/2ℏ)\n\nThe kinetic operator T̂ is diagonal in momentum space (after FFT), so each step is O(N log N). The method is symplectic, unitary, and time-reversible — normalization is preserved exactly.\n\n**Try It Yourself**\n\nFull interactive notebook with animation:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/time_dependent_schrodinger_animation.ipynb\n\nLibraries used: NumPy, Matplotlib, IPython for animation.\n\n---\n\nTL;DR: Quantum particles tunnel through barriers they \"shouldn't\" be able to pass. 
I animated it and the wavefunction literally splits in two — part bounces, part phases through like a ghost.",3787"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/time_dependent_schrodinger_animation.ipynb",3788"category": "general",3789"date": "2026-06-08",3790"time": "09:00"3791},3792{3793"source": "quantum_harmonic_oscillator",3794"content_type": "notebook",3795"subreddit": "CoCalc",3796"title": "I simulated the Quantum Harmonic Oscillator in Python—here's what I learned about zero-point energy and the uncertainty principle",3797"body": "I just finished implementing a complete numerical simulation of the quantum harmonic oscillator, and I wanted to share some insights!\n\n**ELI5: What is it?**\n\nImagine a ball on a spring. Classically, if you don't push it, it sits still at the bottom. But quantum mechanically, even in its lowest energy state, it's still jiggling! This \"zero-point energy\" exists because you can't simultaneously know exactly where the particle is AND how fast it's moving (Heisenberg uncertainty principle).\n\n**The Physics**\n\nThe quantum harmonic oscillator is one of the few exactly-solvable systems in quantum mechanics. The energy levels are:\n\nEₙ = ℏω(n + ½) where n = 0, 1, 2, ...\n\nThat \"+½\" is the zero-point energy—it's never zero!\n\n**What I Implemented**\n\nUsing Python with NumPy and SciPy, I:\n- Calculated wavefunctions using Hermite polynomials\n- Verified orthonormality numerically (the overlap matrix is the identity!)\n- Demonstrated ladder operators: â|n⟩ = √n|n-1⟩\n- Showed the correspondence principle: at high n, quantum → classical\n- Confirmed the virial theorem: ⟨T⟩ = ⟨V⟩ = Eₙ/2\n\n**Cool Visualization**\n\nThe plot shows:\n1. Wavefunctions offset by their energy levels\n2. Probability densities for different states\n3. Classical vs quantum probability at n=10\n4. 
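The split-operator loop described above fits in a page of NumPy. Grid size, barrier height/width, and wavepacket parameters below are illustrative (ħ = m = 1), not the notebook's exact values:

```python
import numpy as np

hbar = m = 1.0
N, L = 2048, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Gaussian wavepacket moving right with momentum hbar*k0
x0, sigma, k0 = -40.0, 5.0, 1.0
psi = np.exp(-(x - x0) ** 2 / (4 * sigma**2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# rectangular barrier higher than the kinetic energy E = hbar^2 k0^2 / 2m = 0.5
V = np.where(np.abs(x) < 2.0, 0.75, 0.0)

dt, n_steps = 0.05, 1600
half_v = np.exp(-1j * V * dt / (2 * hbar))          # exp(-i V dt / 2hbar)
kinetic = np.exp(-1j * hbar * k**2 * dt / (2 * m))  # exp(-i T dt / hbar), diagonal in k-space
for _ in range(n_steps):
    psi = half_v * np.fft.ifft(kinetic * np.fft.fft(half_v * psi))

prob = np.abs(psi) ** 2 * dx
norm = prob.sum()               # unitary evolution: stays 1 to machine precision
T = prob[x > 2.0].sum()         # transmitted fraction (tunneling)
R = prob[x < -2.0].sum()        # reflected fraction
print(norm, T, R)
```

The FFT-based steps are exactly unitary up to roundoff, which is why the norm check is a good sanity test.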
How uncertainties grow with quantum number\n\n**Why This Matters**\n\nThis isn't just textbook physics—it's the foundation for:\n- Quantum field theory (photons ARE harmonic oscillators)\n- Understanding molecular vibrations (spectroscopy)\n- Quantum computing with trapped ions\n- Phonons in solid-state physics\n\n**View the Full Notebook**\n\nYou can run this notebook yourself and explore the code:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/quantum_harmonic_oscillator.ipynb\n\nHappy to answer questions!",3798"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/quantum_harmonic_oscillator.ipynb",3799"category": "physics",3800"date": "2026-06-09",3801"time": "09:00"3802},3803{3804"source": "convolutional_layer_implementation",3805"content_type": "notebook",3806"subreddit": "CoCalc",3807"title": "I implemented a CNN convolutional layer from scratch in NumPy - here's what I learned",3808"body": "I just finished building a complete 2D convolutional layer implementation from first principles. No PyTorch, no TensorFlow - just NumPy and math.\n\n**Why do this?**\n\nFrameworks are great, but I wanted to really understand what happens when you call `nn.Conv2d()`. Turns out there's some elegant math underneath.\n\n**The core operation**\n\nThe 2D convolution (actually cross-correlation in deep learning) is:\n\nY[i, j] = ΣΣ X[i·s + m, j·s + n] · W[m, n] + b\n\nWhere s is stride, and the sums run over the kernel dimensions.\n\n**What surprised me**\n\n1. **It's not actually convolution** - DL frameworks use cross-correlation (no kernel flip)\n2. **Backprop through conv = another convolution** - the gradient w.r.t. input is a \"full convolution\" with the flipped kernel\n3. 
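The orthonormality check mentioned above ("the overlap matrix is the identity!") takes a few lines with NumPy's physicists'-Hermite helpers. Units ħ = m = ω = 1; grid and state count are illustrative:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval

def psi(n, x):
    """n-th harmonic-oscillator eigenstate, units hbar = m = omega = 1."""
    c = np.zeros(n + 1)
    c[n] = 1.0                          # selects the physicists' Hermite H_n
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))
    return norm * hermval(x, c) * np.exp(-x**2 / 2)

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
states = np.array([psi(n, x) for n in range(6)])
S = states @ states.T * dx              # overlap matrix <m|n>
print(np.round(S, 6))                   # 6x6 identity
```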
**Xavier initialization matters** - proper scaling: std = sqrt(2/(fan_in + fan_out))\n\n**What I implemented**\n\n- Forward pass with configurable padding and stride\n- Full backward pass (gradients for weights, bias, AND input)\n- Gradient checking using numerical differentiation\n- Edge detection demo with Sobel and Laplacian kernels\n- A training loop that learns to detect vertical vs horizontal lines\n\n**Key insight**\n\nThe gradient check was crucial. Numerical gradient formula:\n\ngrad ≈ (f(x+eps) - f(x-eps)) / (2·eps)\n\nMy implementation passed with relative error < 10⁻⁵.\n\n**Interactive notebook**\n\nYou can run the full notebook with all the code and visualizations here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/convolutional_layer_implementation.ipynb\n\nHappy to answer questions about the implementation details!",3809"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/convolutional_layer_implementation.ipynb",3810"category": "general",3811"date": "2026-06-09",3812"time": "09:00"3813},3814{3815"source": "graph_neural_networks",3816"content_type": "notebook",3817"subreddit": "CoCalc",3818"title": "[Tutorial] Graph Neural Networks from Scratch - Implementing GCN for Node Classification",3819"body": "I created an educational Jupyter notebook that implements Graph Neural Networks from the ground up using only NumPy. No PyTorch Geometric, no DGL - just pure Python to understand what's really happening.\n\n**What's covered:**\n\n1. **Graph Representation** - How to encode graphs as adjacency matrices and feature matrices\n\n2. **Message Passing** - The fundamental mechanism where nodes aggregate information from neighbors: h_v = UPDATE(h_v, AGGREGATE({h_u : u in N(v)}))\n\n3. **GCN Layer Implementation** - The spectral convolution H' = ReLU(D^{-1/2} A D^{-1/2} H W) with proper symmetric normalization\n\n4. 
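A minimal forward pass matching the formula above, sanity-checked against scipy's cross-correlation (the function name, default stride/padding, and sizes are illustrative, not the notebook's API):

```python
import numpy as np
from scipy.signal import correlate2d

def conv2d_forward(X, W, b=0.0, stride=1, pad=0):
    """2D cross-correlation (the 'convolution' used in deep learning)."""
    Xp = np.pad(X, pad)
    kh, kw = W.shape
    oh = (Xp.shape[0] - kh) // stride + 1
    ow = (Xp.shape[1] - kw) // stride + 1
    Y = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = Xp[i*stride:i*stride+kh, j*stride:j*stride+kw]
            Y[i, j] = np.sum(patch * W) + b  # Y[i,j] = sum X[i*s+m, j*s+n]·W[m,n] + b
    return Y

rng = np.random.default_rng(0)
X = rng.normal(size=(7, 7))
W = rng.normal(size=(3, 3))
ours = conv2d_forward(X, W)
ref = correlate2d(X, W, mode="valid")   # scipy cross-correlation: no kernel flip
print(np.max(np.abs(ours - ref)))       # agreement to machine precision
```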
**Semi-Supervised Learning** - Achieving high accuracy with only 8 labeled nodes out of 34 total\n\n5. **Visualizations** - Training curves, learned embeddings, and the classic Karate Club graph with predictions\n\n**Why this matters:**\nGNNs are everywhere now - social network analysis, drug discovery, recommendation systems, traffic prediction. Understanding the fundamentals helps you choose the right architecture and debug issues.\n\n**Key insight:** The symmetric normalization D^{-1/2} A D^{-1/2} is crucial - it prevents high-degree nodes from dominating and ensures numerical stability.\n\nLink: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/graph_neural_networks/graph_neural_networks.ipynb\n\nHappy to answer questions!",3820"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/graph_neural_networks/graph_neural_networks.ipynb",3821"category": "machine-learning",3822"date": "2026-06-10",3823"time": "09:00"3824},3825{3826"source": "bessel_functions",3827"content_type": "notebook",3828"subreddit": "CoCalc",3829"title": "Visualizing Bessel Functions in Python - From Theory to Vibrating Drum Modes",3830"body": "If you've taken differential equations or physics, you've probably encountered Bessel functions but maybe found them abstract. I built a notebook that makes them tangible.\n\n**What are Bessel functions?**\n\nThey solve this differential equation:\n\nx²y'' + xy' + (x² - n²)y = 0\n\nThink of it as the \"circular coordinate\" version of sines and cosines. Just like sin/cos appear when you solve wave equations in Cartesian coordinates, Bessel functions Jₙ(x) and Yₙ(x) appear for cylindrical symmetry.\n\n**What the notebook covers:**\n\n1. **Four types of Bessel functions** - Jₙ(x), Yₙ(x), Iₙ(x), Kₙ(x) with their properties\n2.
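The normalized propagation rule can be sketched as a single layer. Note one assumption: following the standard GCN recipe I add self-loops (Â = A + I) before normalizing, which the post doesn't state explicitly; the toy graph and layer sizes are mine:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^{-1/2} A_hat D^{-1/2} H W), A_hat = A + I."""
    A_hat = A + np.eye(len(A))                     # self-loops keep each node's own features
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)         # ReLU

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],        # a 4-node path graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))        # 3 input features per node
W = rng.normal(size=(3, 2))        # project to 2 hidden units
out = gcn_layer(A, H, W)
print(out.shape)                   # (4, 2), all entries >= 0 after ReLU
```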
**Numerical verification** - Checked that recurrence relations like Jₙ₋₁(x) + Jₙ₊₁(x) = (2n/x)Jₙ(x) actually work\n3. **Orthogonality** - Verified ∫₀¹ x·Jₙ(αₙₘx)·Jₙ(αₙₖx)dx = 0 for m ≠ k\n4. **Physical application** - Visualized vibrating circular membrane modes (like a drum!)\n\n**Key insight:** The zeros of Jₙ(x) determine natural frequencies. This is why drums have their characteristic sound - each mode vibrates at a frequency proportional to these zeros.\n\n**Tools used:** numpy, scipy.special, matplotlib\n\nExplore the full notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/bessel_functions.ipynb",3831"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/bessel_functions.ipynb",3832"category": "general",3833"date": "2026-06-10",3834"time": "09:00"3835},3836{3837"source": "mandelbrot_set_fractal_visualization",3838"content_type": "notebook",3839"subreddit": "CoCalc",3840"title": "I built a Mandelbrot set visualizer in Python - here's the math behind it",3841"body": "Just finished a project exploring one of math's most beautiful objects: the Mandelbrot set.\n\n**What is it?**\n\nThe Mandelbrot set is all complex numbers c where this sequence stays bounded:\n\nz₀ = 0\nzₙ₊₁ = zₙ² + c\n\nThat's it. Square, add c, repeat. If the sequence never blows up to infinity, c is in the set.\n\n**The escape time algorithm:**\n\nTo visualize it, we check how many iterations it takes before |zₙ| > 2 (once it exceeds 2, it's guaranteed to escape to infinity). Color each point by its \"escape time.\"\n\n**Cool math facts I learned:**\n\n1. The boundary has Hausdorff dimension 2—it's so complex it essentially fills space locally\n2. The main cardioid shape comes from c = e^(iθ)/2 - e^(2iθ)/4\n3. 
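The recurrence check in item 2 takes a few lines with scipy.special (the x-range and orders below are illustrative):

```python
import numpy as np
from scipy.special import jv

x = np.linspace(0.5, 20, 500)      # avoid x = 0, where 2n/x is singular
for n in range(1, 5):
    lhs = jv(n - 1, x) + jv(n + 1, x)
    rhs = (2 * n / x) * jv(n, x)
    assert np.allclose(lhs, rhs, atol=1e-10)
print("J_{n-1}(x) + J_{n+1}(x) = (2n/x) J_n(x) holds for n = 1..4")
```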
Zoom in anywhere on the boundary and you find mini copies of the entire set\n\n**Implementation highlights:**\n\n- Used NumPy's vectorized operations with boolean masking for speed\n- 1000×800 grid with 100 max iterations for the main view\n- Zoomed into \"Seahorse Valley\" at 500 iterations to see the fine structure\n\nThe most mind-blowing part: such a simple rule (z → z² + c) creates literally infinite complexity.\n\n**View the full interactive notebook here:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/mandelbrot_set_fractal_visualization.ipynb\n\n---",3842"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/mandelbrot_set_fractal_visualization.ipynb",3843"category": "general",3844"date": "2026-06-11",3845"time": "09:00"3846},3847{3848"source": "birthday_paradox_simulation",3849"content_type": "notebook",3850"subreddit": "CoCalc",3851"title": "I simulated the Birthday Paradox with Monte Carlo methods - here's why only 23 people gives you 50% odds",3852"body": "Ever heard that you only need 23 people for a 50% chance two share a birthday? I always thought this was a trick until I actually coded it up.\n\n**The intuition trap:**\n\nWe naturally compare 23 people to 365 days and think \"no way.\" But we're asking the wrong question. We're not checking if someone shares YOUR birthday—we're checking if ANY two people match.\n\n**The math:**\n\nWith n people, the number of pairs to compare is n(n-1)/2\n\nFor 23 people: 23 × 22 / 2 = 253 pairs!\n\nThe exact probability: P(match) = 1 - ∏ (365-i)/365 for i from 0 to n-1\n\n**My simulation:**\n\nI ran 10,000 Monte Carlo trials for each group size from 1 to 80. The simulated results match the theoretical curve almost perfectly.\n\nKey findings:\n- n=23: P = 0.5073 (the crossover!)\n- n=50: P ≈ 0.97\n- n=70: P > 0.999\n\n**Why this matters:**\n\nThis isn't just a party trick. 
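A stripped-down version of the vectorized escape-time algorithm described above (grid extents and the iteration cap are illustrative):

```python
import numpy as np

def escape_time(c, max_iter=100):
    """Vectorized escape-time iteration z -> z^2 + c with boolean masking."""
    z = np.zeros_like(c)
    iters = np.full(c.shape, max_iter, dtype=int)   # max_iter = "never escaped"
    alive = np.ones(c.shape, dtype=bool)
    for i in range(max_iter):
        z[alive] = z[alive] ** 2 + c[alive]
        escaped = alive & (np.abs(z) > 2.0)         # |z| > 2 guarantees divergence
        iters[escaped] = i
        alive &= ~escaped
    return iters

xs = np.linspace(-2.0, 0.6, 260)
ys = np.linspace(-1.2, 1.2, 240)
c = xs[None, :] + 1j * ys[:, None]
img = escape_time(c)                # color by iters to get the classic picture
print(img.shape)
```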
The same principle underlies:\n- Birthday attacks in cryptography (breaking hash functions in 2^(n/2) instead of 2^n operations)\n- Hash table collision estimation\n- Duplicate detection in databases\n\nThe code uses NumPy for random birthday generation and Matplotlib for visualization. Clean, readable, and reproducible.\n\n**View the full notebook with interactive code:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/birthday_paradox_simulation.ipynb",3853"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/birthday_paradox_simulation.ipynb",3854"category": "general",3855"date": "2026-06-11",3856"time": "09:00"3857},3858{3859"source": "polarization_states",3860"content_type": "notebook",3861"subreddit": "CoCalc",3862"title": "Visualizing Polarization States with Python - From Linear to Circular Light",3863"body": "I created a Jupyter notebook exploring polarization—one of the most beautiful concepts in physics that's also incredibly practical (3D movies, LCD screens, sunglasses!).\n\n**ELI5 Version:**\nImagine shaking a rope up and down vs. side to side vs. in circles. Light does the same thing with its electric field. The pattern it makes is called polarization.\n\n**What I learned:**\n\n1. **Any polarization can be written as two perpendicular waves:**\n - Ex = E0x·cos(kz - wt + phi_x)\n - Ey = E0y·cos(kz - wt + phi_y)\n\n2. **Phase difference determines everything:**\n - d = 0 → diagonal line (linear 45 degrees)\n - d = pi/2 with equal amplitudes → perfect circle\n - Other combinations → ellipses\n\n3. **The Poincare sphere is mind-blowing:** Every possible polarization state maps to a unique point on a sphere. North pole = right circular, south pole = left circular, equator = all linear states.\n\n4. **Jones matrices are like polarization transformers:** A quarter-wave plate turns linear into circular. A half-wave plate rotates the polarization axis.\n\n5. 
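Both the exact product formula and the Monte Carlo check fit in a few lines (the trial count is an illustrative choice):

```python
import numpy as np

def exact_p(n, days=365):
    """P(at least one shared birthday) = 1 - prod_{i=0}^{n-1} (days - i)/days."""
    i = np.arange(n)
    return 1.0 - np.prod((days - i) / days)

def simulated_p(n, trials=20_000, days=365, seed=0):
    rng = np.random.default_rng(seed)
    b = np.sort(rng.integers(0, days, size=(trials, n)), axis=1)
    # a group has a match iff two sorted birthdays are adjacent and equal
    return np.mean((np.diff(b, axis=1) == 0).any(axis=1))

print(exact_p(23))        # ~0.5073, the crossover
print(simulated_p(23))    # should agree to within Monte Carlo noise
```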
**Stokes parameters (S0, S1, S2, S3):** Complete description of any light, even partially polarized.\n\n**Python libraries used:** NumPy, Matplotlib (including 3D plotting)\n\nThe code generates visualizations of polarization ellipses, 3D wave propagation, and Poincare sphere representations.\n\n**View the full interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/polarization_states.ipynb",3864"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/polarization_states.ipynb",3865"category": "general",3866"date": "2026-06-12",3867"time": "09:00"3868},3869{3870"source": "naive_bayes",3871"content_type": "notebook",3872"subreddit": "CoCalc",3873"title": "Implemented Gaussian Naive Bayes from Scratch - Here's How the Math Works",3874"body": "I built a Naive Bayes classifier from scratch to really understand what's happening under the hood. Here's what I learned:\n\n**The Core Idea:**\nNaive Bayes uses Bayes' theorem to classify data points:\nP(class|features) ∝ P(features|class) × P(class)\n\n**The \"Naive\" Assumption:**\nThe classifier assumes all features are conditionally independent given the class. So instead of calculating P(x₁, x₂, ..., xₙ|class), we just multiply individual probabilities: P(x₁|class) × P(x₂|class) × ... × P(xₙ|class)\n\n**Why It Works:**\nEven though features are rarely truly independent, the classifier only needs to get the *ranking* of class probabilities right, not the exact values. This is why such a simple model often performs surprisingly well!\n\n**For Gaussian (continuous) features:**\nEach feature is modeled as a normal distribution. We just need to estimate the mean μ and variance σ² for each feature in each class from training data.\n\n**Results:**\nOn a synthetic 2D dataset with two Gaussian-distributed classes, the classifier learned clear decision boundaries and achieved strong test accuracy. 
The probability contour plot shows smooth transitions between classes.\n\n**Key Takeaways:**\n- Training is O(n × K) - just compute means and variances\n- Prediction requires only the argmax of posterior probabilities\n- Great for high-dimensional data where more complex models overfit\n\nCheck out the interactive notebook with full implementation and visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/naive_bayes.ipynb\n\n---",3875"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/naive_bayes.ipynb",3876"category": "general",3877"date": "2026-06-12",3878"time": "09:00"3879},3880{3881"source": "relativistic_kinematics",3882"content_type": "notebook",3883"subreddit": "CoCalc",3884"title": "I built a Python notebook visualizing relativistic kinematics - here's what I learned about Einstein's physics",3885"body": "I wanted to understand special relativity beyond just the equations, so I coded up the key relationships in Python and visualized them.\n\n**The Core Idea**\n\nEverything in relativistic kinematics comes from one formula - the Lorentz factor:\n\nγ = 1/√(1 - v²/c²)\n\nwhere β = v/c is your speed as a fraction of light speed.\n\n**What happens at high speeds:**\n\n1. **Time dilation** - A clock moving at 0.9994c (like cosmic ray muons) experiences time 29x slower. That's why muons created in the upper atmosphere survive long enough to reach Earth's surface.\n\n2. **Length contraction** - Objects shrink along their direction of motion by factor 1/γ\n\n3. **Momentum goes wild** - At β = 0.9, relativistic momentum p = γm₀v is 2.29x the classical prediction. As v → c, momentum → ∞, which is why you can't accelerate to light speed.\n\n4. **Velocity addition** - Add 0.6c + 0.7c and you don't get 1.3c. Einstein's formula gives (0.6 + 0.7)/(1 + 0.6×0.7) = 0.915c. 
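A minimal Gaussian NB matching the description above — fitting is just per-class means and variances, prediction an argmax over log-posteriors. The synthetic blobs below are illustrative, not the notebook's dataset:

```python
import numpy as np

class GaussianNB:
    """Gaussian Naive Bayes: per-class feature means/variances plus log-priors."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.log_prior = np.log([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log P(x|c) = sum_j log N(x_j; mu_cj, var_cj) -- the independence assumption
        ll = -0.5 * (np.log(2 * np.pi * self.var[:, None, :])
                     + (X[None] - self.mu[:, None, :]) ** 2
                     / self.var[:, None, :]).sum(-1)
        return self.classes[np.argmax(ll + self.log_prior[:, None], axis=0)]

rng = np.random.default_rng(0)
X0 = rng.normal([-2, -2], 1.0, size=(200, 2))
X1 = rng.normal([+2, +2], 1.0, size=(200, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)
acc = np.mean(GaussianNB().fit(X, y).predict(X) == y)
print(acc)   # well-separated blobs, so accuracy should be near 1.0
```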
The universe enforces its speed limit!\n\n**The energy-momentum relation**\n\nThe most elegant equation: E² = (pc)² + (m₀c²)²\n\nFor massless particles (photons), this gives E = pc. For massive particles at rest, E = m₀c² (that famous equation!).\n\n**Code highlights:**\n\n- Used numpy for vectorized calculations\n- Created 6 publication-quality plots with matplotlib\n- Verified the energy-momentum relation numerically\n\nThe full notebook is available here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/relativistic_kinematics.ipynb\n\nWhat aspects of special relativity would you like to see explored next? I'm thinking about relativistic Doppler effect or maybe the twin paradox with actual numbers.",3886"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/relativistic_kinematics.ipynb",3887"category": "general",3888"date": "2026-06-13",3889"time": "09:00"3890},3891{3892"source": "rayleigh_quotient_iteration",3893"content_type": "notebook",3894"subreddit": "CoCalc",3895"title": "Rayleigh Quotient Iteration: How to Triple Your Eigenvalue Precision Each Iteration",3896"body": "I created a Jupyter notebook exploring Rayleigh Quotient Iteration (RQI), and wanted to share what I learned about this elegant algorithm.\n\n**What is it?**\n\nRQI finds eigenvalues and eigenvectors of matrices with cubic convergence - meaning the number of correct digits approximately triples each iteration. That's incredibly fast!\n\n**How does it work?**\n\nThe algorithm combines two ideas:\n\n1. **Rayleigh quotient**: For a vector v and matrix A, compute ρ(v) = vᵀAv/vᵀv. This gives the \"best\" eigenvalue estimate for that vector.\n\n2. **Inverse iteration**: Solve (A - σI)w = v and normalize. This amplifies the eigenvector component nearest to shift σ.\n\nThe trick: RQI uses the Rayleigh quotient as the shift, updating it each iteration. 
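The numbers quoted above are easy to reproduce (working in units with c = 1; the helper names are mine):

```python
import numpy as np

def gamma(beta):
    """Lorentz factor, beta = v/c."""
    return 1.0 / np.sqrt(1.0 - beta**2)

def add_velocities(b1, b2):
    """Einstein velocity addition: (b1 + b2) / (1 + b1*b2)."""
    return (b1 + b2) / (1.0 + b1 * b2)

# muon example: beta = 0.9994 gives gamma ~ 29
print(gamma(0.9994))

# 0.6c "+" 0.7c = 0.915c, never exceeding c
w = add_velocities(0.6, 0.7)
print(w)

# energy-momentum relation E^2 = p^2 + m^2 (c = 1), with E = gamma*m, p = gamma*m*beta
m, beta = 1.0, 0.9
E, p = gamma(beta) * m, gamma(beta) * m * beta
print(E**2 - (p**2 + m**2))   # zero up to floating-point roundoff
```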
Since ρ(v) approximates the eigenvalue with O(ε²) error when v has O(ε) error, you get:\n\n- Vector error: O(ε)\n- Shift (Rayleigh quotient) error: O(ε²)\n- New vector error after one inverse-iteration step: O(ε · ε²) = O(ε³) — cubic convergence!\n\n**Key results from my implementation:**\n\n- RQI converges in 3-5 iterations to machine precision\n- Regular inverse iteration (fixed shift) needs 20+ iterations\n- The log-log plot of errors shows slope ≈ 3, confirming cubic rate\n\n**When to use RQI:**\n\n- Need a single eigenvalue/eigenvector with high precision\n- Have a good initial guess\n- As refinement after other methods (like QR algorithm)\n\n**Caveats:**\n\n- Must solve a different linear system each iteration (can't reuse LU factorization)\n- Converges to the nearest eigenvalue - may not be the one you want\n- Matrix becomes ill-conditioned near convergence\n\nView the full notebook with interactive code here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/rayleigh_quotient_iteration.ipynb\n\nHappy to answer questions about the implementation!",3897"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/rayleigh_quotient_iteration.ipynb",3898"category": "general",3899"date": "2026-06-13",3900"time": "09:00"3901},3902{3903"source": "givens_rotation",3904"content_type": "notebook",3905"subreddit": "CoCalc",3906"title": "Implemented QR Decomposition Using Givens Rotations - Here's How It Works",3907"body": "I've been exploring numerical linear algebra methods and just implemented QR decomposition using Givens rotations. Here's an ELI5 explanation:\n\n**What's a Givens Rotation?**\n\nImagine you have a vector [3, 4]. A Givens rotation finds the angle θ that rotates this vector onto the x-axis, giving [5, 0]. The rotation matrix is:\n\n```\n[cos(θ) sin(θ) ]\n[-sin(θ) cos(θ)]\n```\n\n**Why use them for QR?**\n\nTo decompose A = QR, we need R to be upper triangular (zeros below diagonal).
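For concreteness, here is a bare-bones RQI in the spirit described above (the symmetric test matrix and tolerances are illustrative):

```python
import numpy as np

def rqi(A, v0, tol=1e-12, max_iter=50):
    """Rayleigh quotient iteration for a symmetric matrix A."""
    v = v0 / np.linalg.norm(v0)
    rho = v @ A @ v                      # Rayleigh quotient: best eigenvalue estimate
    for _ in range(max_iter):
        try:
            w = np.linalg.solve(A - rho * np.eye(len(A)), v)  # inverse-iteration step
        except np.linalg.LinAlgError:
            break                        # exactly singular shift: we've converged
        v = w / np.linalg.norm(w)
        rho_new = v @ A @ v
        if abs(rho_new - rho) < tol:
            rho = rho_new
            break
        rho = rho_new
    return rho, v

rng = np.random.default_rng(0)
B = rng.normal(size=(6, 6))
A = (B + B.T) / 2                        # random symmetric test matrix
rho, v = rqi(A, rng.normal(size=6))
residual = np.linalg.norm(A @ v - rho * v)
print(rho, residual)                     # residual near machine precision
```

Note the near-singular solve close to convergence is actually fine in practice: the huge solution vector points along the eigenvector, and normalization rescales it.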
Givens rotations let us zero out one element at a time by rotating pairs of rows.\n\n**The numerically stable trick:**\n\nInstead of computing √(a² + b²) directly (which can overflow), we use:\n- If |b| > |a|: τ = -a/b, then s = 1/√(1+τ²), c = s·τ\n\nThis keeps τ ≤ 1, avoiding numerical issues.\n\n**Key findings:**\n- Applied 6 rotations to decompose a 4×3 matrix\n- ||A - QR|| ≈ 10⁻¹⁵ (machine precision!)\n- ||QᵀQ - I|| ≈ 10⁻¹⁵ (Q stays orthogonal)\n\n**When to use Givens over Householder:**\n- Sparse matrices (Givens preserves sparsity)\n- Parallel computing (rotations on different rows are independent)\n- Updating existing factorizations\n\nCheck out the full notebook with visualizations: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/givens_rotation.ipynb",3908"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/givens_rotation.ipynb",3909"category": "general",3910"date": "2026-06-14",3911"time": "09:00"3912},3913{3914"source": "feynman_diagrams_basics",3915"content_type": "notebook",3916"subreddit": "CoCalc",3917"title": "Built a Python visualization system for Feynman diagrams - the pictorial language of particle physics",3918"body": "**What are Feynman diagrams?**\n\nImagine you want to calculate the probability that two electrons bounce off each other. In quantum field theory, this involves some seriously complex math with integrals over all possible particle paths.\n\nIn 1948, Richard Feynman invented a brilliant shortcut: draw pictures! Each line and vertex in the diagram corresponds to a specific mathematical term. Draw the picture → translate to math → get your answer.\n\n**What I built:**\n\nA Python visualization system that draws the 6 fundamental diagrams in Quantum Electrodynamics (QED):\n\n1. **Basic QED vertex** - where electrons emit/absorb photons\n2. **Møller scattering** - two electrons bouncing off each other\n3. 
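The stable rotation plus the elimination sweep can be sketched as follows, written in the [c s; -s c] convention used above (matrix sizes and the RNG seed are illustrative):

```python
import numpy as np

def givens(a, b):
    """Stable c, s with [[c, s], [-s, c]] @ [a, b] = [r, 0], no explicit sqrt(a^2+b^2)."""
    if b == 0.0:
        return 1.0, 0.0
    if abs(b) > abs(a):
        tau = a / b                              # |tau| <= 1, avoiding overflow
        s = np.sign(b) / np.sqrt(1.0 + tau**2)
        return s * tau, s
    tau = b / a
    c = np.sign(a) / np.sqrt(1.0 + tau**2)
    return c, c * tau

def qr_givens(A):
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):            # zero R[i, j] against R[i-1, j]
            c, s = givens(R[i - 1, j], R[i, j])
            G = np.array([[c, s], [-s, c]])
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T   # accumulate Q = G1^T ... Gk^T
    return Q, R

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))                      # 6 rotations, as in the post
Q, R = qr_givens(A)
print(np.linalg.norm(A - Q @ R), np.linalg.norm(Q.T @ Q - np.eye(4)))
```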
**Pair annihilation** - electron + positron → two photons\n4. **Compton scattering** - photon bouncing off an electron\n5. **Bhabha scattering** - electron + positron → electron + positron\n6. **Vacuum polarization** - a quantum loop that modifies the photon propagator\n\n**Cool physics insight:**\n\nAlso calculated how the fine structure constant α ≈ 1/137 changes with energy! At low energies (everyday physics): α⁻¹ ≈ 137. At high energies (like the Z boson mass ~91 GeV): α⁻¹ ≈ 129.\n\nThis \"running\" happens because virtual particle loops (like the vacuum polarization diagram) screen the electric charge differently at different scales.\n\n**Tech stack:** Python, NumPy, Matplotlib\n\nThe wavy photon lines are generated parametrically with sine wave oscillations perpendicular to the line direction. Fermion lines use matplotlib's arrow annotations.\n\n**View the full interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/feynman_diagrams_basics.ipynb",3919"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/feynman_diagrams_basics.ipynb",3920"category": "general",3921"date": "2026-06-14",3922"time": "09:00"3923},3924{3925"source": "euler_characteristic",3926"content_type": "notebook",3927"subreddit": "CoCalc",3928"title": "I built a Python notebook exploring the Euler Characteristic - one of topology's most elegant invariants",3929"body": "**What is the Euler Characteristic?**\n\nEver notice something strange about 3D shapes? Take any convex polyhedron, count its vertices (V), edges (E), and faces (F), and calculate V - E + F.
You always get 2!\n\n- Cube: 8 - 12 + 6 = 2\n- Tetrahedron: 4 - 6 + 4 = 2\n- Icosahedron: 12 - 30 + 20 = 2\n\nThis number is called the Euler characteristic (chi), discovered by Leonhard Euler in 1758.\n\n**Why does this matter?**\n\nThe Euler characteristic is a *topological invariant* - it doesn't change when you stretch, bend, or deform a shape (without tearing or gluing). This makes it incredibly powerful for classifying surfaces:\n\n- Sphere: chi = 2\n- Torus (donut): chi = 0\n- Double torus: chi = -2\n\nFor orientable surfaces, you can even calculate the genus (number of holes): g = (2 - chi) / 2\n\n**What's in the notebook?**\n\n- Verification of Euler's formula for all 5 Platonic solids\n- Computing chi from Betti numbers (homology approach)\n- 3D visualizations of triangulated polyhedra\n- Demonstration of the Gauss-Bonnet theorem\n- Product formula verification: chi(X x Y) = chi(X) * chi(Y)\n\n**Cool connection:** The Gauss-Bonnet theorem links the Euler characteristic to geometry. For any closed surface, the integral of Gaussian curvature equals 2*pi*chi. This means topology constrains geometry!\n\n**View the full interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/euler_characteristic.ipynb\n\nBuilt with Python, NumPy, and Matplotlib. 
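The verification in the first bullet is a one-liner per solid; the (V, E, F) counts below are the standard ones:

```python
def euler_characteristic(V, E, F):
    return V - E + F

# (V, E, F) for the five Platonic solids
platonic = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron": (12, 30, 20),
}
for name, (V, E, F) in platonic.items():
    chi = euler_characteristic(V, E, F)
    print(name, chi)          # every convex polyhedron gives 2
    assert chi == 2

# genus from chi for orientable surfaces: g = (2 - chi) / 2
assert (2 - 2) / 2 == 0       # sphere: no holes
assert (2 - 0) / 2 == 1       # torus: one hole
```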
Happy to answer questions about the math or implementation!",3930"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/euler_characteristic.ipynb",3931"category": "general",3932"date": "2026-06-15",3933"time": "09:00"3934},3935{3936"source": "digital_filter_design",3937"content_type": "notebook",3938"subreddit": "CoCalc",3939"title": "Implemented Butterworth, Chebyshev, and FIR filters in Python - here's what I learned about digital signal processing",3940"body": "I've been learning digital signal processing and wanted to share a notebook I put together on filter design.\n\n**The Setup:**\nCreated a test signal with three sine waves (10Hz, 50Hz, 200Hz) plus noise, then designed filters to extract each component separately.\n\n**What are digital filters?**\nThink of them as frequency-selective systems. The basic equation is:\n\ny[n] = Σ bₖ·x[n-k] - Σ aₖ·y[n-k]\n\nThe b coefficients weight the input, and the a coefficients create feedback. FIR filters have no feedback (all a=0), while IIR filters use feedback to achieve sharper cutoffs with fewer coefficients.\n\n**Three filter types I implemented:**\n\n1. **Butterworth Lowpass (4th order, fc=30Hz)** - \"Maximally flat\" means no ripples in the passband. It grabbed the 10Hz component cleanly.\n\n2. **Chebyshev Type I Bandpass (40-60Hz)** - Allows some ripple in the passband (I used 1dB) but gives a much steeper roll-off. Perfect for isolating that 50Hz signal.\n\n3. **FIR Highpass (101 taps, Hamming window)** - No feedback = always stable. Used the window method where you truncate the ideal impulse response. Extracted the 200Hz component with excellent stopband rejection.\n\n**Key insight:** Used `scipy.signal.filtfilt` for zero-phase filtering. This applies the filter forward then backward, eliminating phase distortion. Essential for preserving signal timing.\n\n**Stability check:** All IIR poles were inside the unit circle (max magnitude ~0.78), confirming stability. 
FIR filters have all poles at z=0, so they're always stable.\n\nYou can run this notebook yourself and experiment with different cutoff frequencies and filter orders:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/digital_filter_design.ipynb\n\nHappy to answer questions about filter design!",3941"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/digital_filter_design.ipynb",3942"category": "general",3943"date": "2026-06-15",3944"time": "09:00"3945},3946{3947"source": "differential_evolution",3948"content_type": "notebook",3949"subreddit": "CoCalc",3950"title": "[OC] Implemented Differential Evolution from scratch - found global optimum in landscape with 100+ local minima",3951"body": "I built a complete implementation of the Differential Evolution (DE) algorithm in Python and wanted to share what I learned.\n\n**What is DE?**\n\nDE is a population-based optimization algorithm that doesn't need gradients. It works by evolving a population of candidate solutions through:\n\n1. **Mutation**: Create a \"mutant\" vector by adding scaled differences between random population members\n2. **Crossover**: Mix the mutant with the current solution\n3. **Selection**: Keep whichever is better\n\n**The Test**\n\nI used the Rastrigin function - a notorious benchmark with ~100 local minima that looks like an egg carton. 
The global minimum is at the origin with f(0,0) = 0.\n\n**Results**\n\nWith population size 50, mutation factor F=0.8, and crossover rate CR=0.9:\n- Found solution within 10⁻⁶ of the global optimum\n- Converged in ~150 generations\n- Population naturally clusters around the optimum over time\n\n**Key Takeaways**\n\n- F controls exploration: too low = premature convergence, too high = instability\n- CR controls information exchange: higher values speed up convergence but may reduce robustness\n- DE is remarkably simple to implement (~50 lines of core logic)\n\nThe notebook includes 3D surface plots, convergence curves, and a parameter sensitivity analysis.\n\nFull interactive notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/differential_evolution.ipynb\n\n---",3952"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/differential_evolution.ipynb",3953"category": "general",3954"date": "2026-06-16",3955"time": "09:00"3956},3957{3958"source": "genetic_algorithm_optimization",3959"content_type": "notebook",3960"subreddit": "CoCalc",3961"title": "Implemented a Genetic Algorithm to Optimize the Rastrigin Function - Here's How Evolution Solves Hard Math Problems",3962"body": "I built a Genetic Algorithm (GA) in Python to tackle the Rastrigin function, a classic optimization benchmark that trips up gradient-based methods.\n\n**ELI5: What's a Genetic Algorithm?**\n\nImagine you're trying to find the lowest point in a mountain range while blindfolded. Instead of walking downhill (gradient descent), you:\n\n1. Drop 100 random hikers (population)\n2. The ones at lower elevations get to \"reproduce\" more\n3. Their children inherit mixed traits from both parents (crossover)\n4. Some kids randomly wander a bit (mutation)\n5. 
Repeat for many generations\n\nOver time, the population clusters around the lowest valley - even if there are tons of fake valleys (local minima).\n\n**Why the Rastrigin Function?**\n\nf(x) = 10n + Σᵢ[xᵢ² - 10cos(2πxᵢ)]\n\nThis function has approximately 10ⁿ local minima! For just 2 dimensions, that's ~100 places where gradient descent would get stuck. The global minimum is at x = [0, 0] with f(x) = 0.\n\n**Key Implementation Details:**\n\n- Tournament selection (pick best from k random individuals)\n- Arithmetic crossover: child = α·parent1 + (1-α)·parent2\n- Gaussian mutation: x' = x + N(0, σ²)\n- Elitism: keep the 2 best solutions each generation\n\n**Results:**\n\nAfter 150 generations with 100 individuals, the algorithm found x ≈ [0, 0] with fitness ≈ 0. The convergence plot shows exponential improvement early on, then gradual refinement.\n\n**What I Learned:**\n\n- Mutation rate is crucial: too high = random search, too low = premature convergence\n- Tournament size controls selection pressure\n- Elitism prevents losing good solutions but can reduce diversity\n\nCheck out the full notebook with visualizations and code here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/genetic_algorithm_optimization.ipynb\n\nHappy to answer questions about the implementation!\n\n---",3963"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/genetic_algorithm_optimization.ipynb",3964"category": "general",3965"date": "2026-06-16",3966"time": "09:00"3967},3968{3969"source": "heston_stochastic_volatility",3970"content_type": "notebook",3971"subreddit": "CoCalc",3972"title": "Implementing the Heston Stochastic Volatility Model in Python with Monte Carlo Simulation",3973"body": "I built a Python implementation of the Heston model for simulating asset prices with stochastic volatility. 
This is one of the most important models in quantitative finance because it addresses a major flaw in Black-Scholes: the assumption of constant volatility.\n\n**What makes Heston special?**\n\nThe model treats volatility as its own random process that:\n1. Mean-reverts to a long-term level θ\n2. Has its own \"volatility of volatility\" σ\n3. Correlates with asset returns (the leverage effect)\n\n**The math (simplified):**\n\nPrice evolves as: dS = μS dt + √v S dW₁\nVariance evolves as: dv = κ(θ - v) dt + σ√v dW₂\n\nThe key is that the two Brownian motions are correlated: dW₁·dW₂ = ρ dt. In equity markets, ρ is typically negative (-0.5 to -0.9), meaning when prices drop, volatility spikes.\n\n**What I learned:**\n\n- The Feller condition (2κθ > σ²) keeps variance positive\n- Euler-Maruyama with full truncation handles negative variance samples\n- The model naturally produces volatility smiles and fat-tailed distributions\n\n**Results from 10,000 paths:**\n- Terminal returns show negative skew (left tail)\n- Excess kurtosis of ~0.8 (fatter tails than normal)\n- 99% VaR came out around -30%\n\nThe visualization shows price paths with volatility clustering, the variance mean-reverting to θ, and an implied volatility smile.\n\n**Interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/heston_stochastic_volatility.ipynb\n\nCode uses only NumPy, SciPy, and Matplotlib - fully self-contained and reproducible.",3974"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/heston_stochastic_volatility.ipynb",3975"category": "general",3976"date": "2026-06-17",3977"time": "09:00"3978},3979{3980"source": "lattice_boltzmann_fluid_flow",3981"content_type": "notebook",3982"subreddit": "CoCalc",3983"title": "I built a fluid dynamics simulator using the Lattice Boltzmann Method in pure Python/NumPy - here's how it works",3984"body": "Hey everyone!
I've been learning about computational fluid dynamics and wanted to share a project that finally made the concepts click for me.\n\n**What is the Lattice Boltzmann Method?**\n\nInstead of directly solving the Navier-Stokes equations (which describe fluid motion), LBM works at a \"mesoscopic\" scale. Imagine tracking probabilities of finding particles moving in certain directions on a grid. Two simple steps repeat:\n\n1. **Collision** - particles at each point redistribute according to local equilibrium\n2. **Streaming** - particles move to neighboring grid points\n\nThat's it! From these simple rules, complex fluid behavior emerges.\n\n**The simulation**\n\nI simulated flow around a cylinder (think: wind past a bridge pillar). At Reynolds number 100, something beautiful happens - the wake becomes unstable and starts shedding vortices alternately from top and bottom. This is called the \"von Karman vortex street.\"\n\n**Key equations (in plain English)**\n\n- Equilibrium distribution: basically a Maxwell-Boltzmann distribution discretized onto the lattice\n- Viscosity: controlled by the \"relaxation time\" tau - how quickly particles return to equilibrium\n- Strouhal number: relates shedding frequency to flow speed and obstacle size (got St around 0.18, matching experiments!)\n\n**What I learned**\n\n- Bounce-back boundaries naturally create no-slip walls\n- The D2Q9 lattice (9 velocities in 2D) is surprisingly powerful\n- LBM is embarrassingly parallel - great for GPU acceleration\n\nThe whole thing runs in pure NumPy. 
No fancy CFD libraries needed!\n\n**View the full notebook with code and visualizations:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/lattice_boltzmann_fluid_flow.ipynb\n\nHappy to answer questions about the implementation!",3985"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/lattice_boltzmann_fluid_flow.ipynb",3986"category": "fluid-dynamics",3987"date": "2026-06-17",3988"time": "09:00"3989},3990{3991"source": "modular_arithmetic",3992"content_type": "notebook",3993"subreddit": "SageMath",3994"title": "I built a complete modular arithmetic toolkit in Python - here's what I learned about the math behind RSA encryption",3995"body": "Hey everyone! Just finished a notebook exploring modular arithmetic from first principles, and I wanted to share what I learned.\n\n**What is modular arithmetic?**\n\nThink of it like clock math. On a 12-hour clock, 10 + 5 = 3 (not 15). We say 15 ≡ 3 (mod 12). Numbers \"wrap around\" when they reach the modulus.\n\n**Key concepts I implemented:**\n\n1. **Euler's Totient Function φ(n)** - counts integers from 1 to n that are coprime to n. For example, φ(12) = 4 because only {1, 5, 7, 11} share no factors with 12.\n\n2. **Modular Exponentiation** - computing a^k mod n efficiently using binary exponentiation. This runs in O(log k) time instead of O(k).\n\n3. **Chinese Remainder Theorem** - if you know x mod 3 = 2, x mod 5 = 3, x mod 7 = 2, you can find the unique x mod 105.\n\n4. 
**Modular Inverse** - finding a^(-1) mod n using the Extended Euclidean Algorithm.\n\n**The RSA Application:**\n\nThe notebook culminates in a working RSA demo:\n- Choose primes p=61, q=53\n- Compute n=3233 and φ(n)=3120\n- Public key e=17, private key d=2753\n- Encrypt: message^17 mod 3233\n- Decrypt: ciphertext^2753 mod 3233\n\nEuler's theorem guarantees this works: since ed ≡ 1 (mod φ(n)), we have m^(ed) ≡ m (mod n).\n\n**Visualizations include:**\n- Euler's totient function for n=1 to 100\n- Multiplication table mod 12\n- Powers of primitive roots in Z/13Z*\n- Clock arithmetic diagram\n\nThe notebook is fully interactive - you can run it yourself:\n\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/modular_arithmetic.ipynb\n\nWould love to hear if anyone has questions or suggestions for improvements!",3996"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/modular_arithmetic.ipynb",3997"category": "general",3998"date": "2026-06-18",3999"time": "09:00"4000},4001{4002"source": "linear_regression_from_scratch",4003"content_type": "notebook",4004"subreddit": "CoCalc",4005"title": "I built Linear Regression from scratch to understand what sklearn is actually doing - here's what I learned",4006"body": "I wanted to truly understand linear regression, so I implemented it from scratch using only numpy. No sklearn, no magic - just math.\n\n**The Problem:**\nGiven data points (X, y), find weights β that minimize the error between predictions and actual values.\n\n**Two Solutions I Implemented:**\n\n**1. Closed-Form (Normal Equation)**\nThe analytical solution: β = (XᵀX)⁻¹Xᵀy\n\nThis solves the optimization in one matrix operation. Fast for small datasets but O(n³) complexity due to matrix inversion.\n\n**2. 
Gradient Descent**\nIteratively update: β ← β - α∇J(β)\n\nWhere the gradient is: ∇J = -(2/n)Xᵀ(y - Xβ)\n\nMore memory efficient and scales better to large datasets.\n\n**Results:**\n- Generated synthetic data: y = 3 + 2x₁ + 1.5x₂ + noise\n- Both methods recovered the true parameters almost exactly\n- R² score: ~0.91\n- The gradient descent cost curve shows nice exponential convergence\n\n**Key Takeaways:**\n1. The closed-form solution is elegant but doesn't scale\n2. Gradient descent needs careful learning rate tuning\n3. Residual plots help verify model assumptions\n4. Understanding the math makes debugging much easier\n\nThe visualization shows the regression fit, gradient descent convergence, residuals, and parameter comparison.\n\nView the full interactive notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/linear_regression_from_scratch.ipynb\n\nHappy to answer questions about the implementation!",4007"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/linear_regression_from_scratch.ipynb",4008"category": "general",4009"date": "2026-06-18",4010"time": "09:00"4011},4012{4013"source": "riemann_curvature_tensor",4014"content_type": "notebook",4015"subreddit": "CoCalc",4016"title": "I built a Python calculator for the Riemann curvature tensor - the math behind Einstein's General Relativity",4017"body": "The Riemann curvature tensor is arguably the most important object in differential geometry. It measures the intrinsic curvature of a space - properties that can be detected without leaving the surface.\n\n**The key intuition:** If you parallel transport a vector around a closed loop in flat space, it returns unchanged. In curved space, it rotates! The Riemann tensor quantifies exactly how much.\n\n**What my code does:**\n\n1. Takes any metric tensor g_mn as input\n2. Computes Christoffel symbols using numerical differentiation\n3. 
Calculates the full Riemann tensor R^p_smn\n4. Derives contractions: Ricci tensor, Ricci scalar, Kretschmann scalar\n\n**Cool results:**\n\n- **Sphere:** Gaussian curvature K = 1/R² everywhere (constant positive curvature)\n- **Torus:** K > 0 on outer edge, K < 0 on inner edge, with integral K dA = 0 (Gauss-Bonnet theorem!)\n- **Schwarzschild black hole:** The Kretschmann scalar K = 48M²/r⁶ diverges at r=0, proving it's a true singularity (not a coordinate artifact like the event horizon)\n\n**Symmetry verification:**\n\nThe code numerically confirms all Riemann tensor symmetries:\n- R_psmn = -R_psnm (antisymmetric in last two indices)\n- R_psmn = -R_spmn (antisymmetric in first two indices)\n- R_psmn = R_mnps (pair symmetry)\n- R_psmn + R_pmns + R_pnsm = 0 (first Bianchi identity)\n\nThe visualization shows parallel transport on a sphere - a vector carried around a triangular path (pole to equator to pole) rotates 90 degrees, directly demonstrating curvature.\n\nExplore the full notebook with code and derivations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/riemann_curvature_tensor.ipynb",4018"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/riemann_curvature_tensor.ipynb",4019"category": "general",4020"date": "2026-06-19",4021"time": "09:00"4022},4023{4024"source": "rossler_attractor",4025"content_type": "notebook",4026"subreddit": "CoCalc",4027"title": "Simulating the Rössler Attractor in Python: A Visual Introduction to Chaos Theory",4028"body": "I created a notebook exploring the Rössler attractor—a beautiful example of deterministic chaos that's simpler than the famous Lorenz system.\n\n**What is it?**\n\nThe Rössler system is three coupled differential equations:\n- dx/dt = -y - z\n- dy/dt = x + ay\n- dz/dt = b + z(x - c)\n\nThink of it as a mathematical recipe where each variable's rate of change depends on the others.\n\n**ELI5 Version:**\n\nImagine spinning 
a ball on a string, but occasionally giving it a \"kick\" (the z equation). The ball traces a spiral pattern but never quite repeats its path—that's chaos!\n\n**The \"Butterfly Effect\" Demo:**\n\nI ran two simulations with initial conditions differing by just 0.000001. At first they're identical, but after ~30 time units, they've completely diverged. This exponential separation is quantified by the Lyapunov exponent (λ₁ ≈ 0.07).\n\n**Code Stack:**\n- numpy for arrays\n- scipy.integrate.odeint for solving ODEs\n- matplotlib for 3D visualization\n\nThe notebook includes:\n- 3D attractor visualization\n- xy, xz projections\n- Time series plots\n- Lyapunov exponent estimation\n\n**View the full interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/rossler_attractor.ipynb\n\nQuestions welcome! Chaos theory is endlessly fascinating.",4029"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/rossler_attractor.ipynb",4030"category": "general",4031"date": "2026-06-19",4032"time": "09:00"4033},4034{4035"source": "kalman_filter_finance",4036"content_type": "notebook",4037"subreddit": "CoCalc",4038"title": "Built a Kalman Filter for Dynamic Hedge Ratios in Pairs Trading — 48% Better Than Rolling OLS",4039"body": "**The Problem**\n\nIn pairs trading, you need to know the hedge ratio β between two cointegrated assets: y = βx + error. But here's the catch — β isn't constant! Market conditions change, correlations shift, and what worked last month might be wrong today.\n\n**The Traditional Approach**\n\nMost people use rolling OLS with a fixed window (say, 60 days). 
Problem: it lags behind regime changes because you're averaging stale data.\n\n**The Kalman Filter Solution**\n\nInstead of treating β as fixed, model it as a hidden state that evolves over time:\n\n- State equation: βₜ = βₜ₋₁ + noise (β follows a random walk)\n- Observation: yₜ = βₜxₜ + noise (we observe prices with noise)\n\nThe Kalman Filter recursively updates your β estimate as each new price arrives:\n\n1. **Predict**: Use yesterday's β as today's prior estimate\n2. **Update**: Combine prior with new observation, weighted by \"Kalman gain\"\n\nThe Kalman gain automatically balances how much you trust the new data vs your prior — more uncertainty in your prior = more weight on new data.\n\n**Results**\n\nOn synthetic data with regime changes:\n- Static OLS: MSE = 0.02+\n- Rolling OLS (60-day): MSE = 0.0037\n- Kalman Filter: MSE = 0.0019\n\nThat's a **48% improvement** over rolling OLS.\n\n**The Code**\n\nThe core is surprisingly simple (~50 lines). Key parameters:\n- δ (delta): Process noise — how quickly can β change?\n- Vₑ: Measurement noise — how noisy are observations?\n\n**What I Learned**\n\n1. The Kalman Filter is the optimal linear estimator (minimum variance)\n2. It naturally provides confidence intervals via the covariance matrix\n3. 
It adapts to regime changes almost instantly — no lag from averaging\n\n**Interactive Notebook**\n\nCheck out the full implementation with visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/kalman_filter_finance.ipynb\n\nHappy to answer questions!",4040"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/kalman_filter_finance.ipynb",4041"category": "general",4042"date": "2026-06-20",4043"time": "09:00"4044},4045{4046"source": "bayesian_inference_mcmc",4047"content_type": "notebook",4048"subreddit": "CoCalc",4049"title": "I built an MCMC sampler from scratch to learn Bayesian inference - here's what clicked",4050"body": "Hey everyone! I've been learning Bayesian statistics and finally built something that made it all click: a Metropolis-Hastings MCMC sampler from scratch in Python (numpy/scipy only, no PyMC or Stan).\n\n**The Problem:**\nYou have 100 noisy measurements. You know they came from a normal distribution, but you don't know the mean (μ) or standard deviation (σ). How do you figure them out AND quantify your uncertainty?\n\n**The Bayesian Approach:**\nInstead of point estimates, Bayes gives you probability distributions:\n\nP(θ|D) ∝ P(D|θ) · P(θ)\n\n- P(θ|D) = posterior (what we want - our updated beliefs)\n- P(D|θ) = likelihood (how probable is our data given parameters)\n- P(θ) = prior (what we believed before seeing data)\n\n**The Challenge:**\nComputing the posterior exactly requires an integral that's often impossible to solve. That's where MCMC comes in - it samples from the posterior without computing the nasty integral.\n\n**How Metropolis-Hastings Works (ELI5):**\n1. Start somewhere in parameter space\n2. Propose a random step\n3. If the new spot has higher probability, accept it\n4. If lower probability, accept it sometimes (proportional to the ratio)\n5. Repeat 50,000 times\n6.
The places you visit most often ARE the posterior distribution\n\n**Results:**\n- True values: μ=5.0, σ=2.0\n- Posterior recovered both with tight 95% credible intervals\n- Acceptance rate hit 23% (textbook says optimal is ~23.4% for 2D)\n- Trace plots show good mixing after burn-in\n\n**What I Learned:**\n- Burn-in matters - first 10,000 samples are garbage while the chain finds its footing\n- Proposal step size is crucial - too small = slow exploration, too big = lots of rejections\n- Effective Sample Size (ESS) tells you how many independent samples you really have\n- Posterior predictive checks are essential - sample from your posterior, generate fake data, compare to real data\n\nCheck out the full interactive notebook with code and visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/bayesian_inference_mcmc.ipynb\n\nHappy to answer questions about the implementation!",4051"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/bayesian_inference_mcmc.ipynb",4052"category": "general",4053"date": "2026-06-20",4054"time": "09:00"4055},4056{4057"source": "antenna_radiation_pattern",4058"content_type": "notebook",4059"subreddit": "CoCalc",4060"title": "I built an antenna radiation pattern simulator in Python - here's what I learned about how antennas \"aim\" their signals",4061"body": "Hey everyone! I just finished a computational physics project analyzing antenna radiation patterns and wanted to share what I learned.\n\n**ELI5 Version:**\nImagine you're holding a flashlight. A regular bulb sends light everywhere (like an isotropic antenna), but a flashlight focuses it in one direction. Antennas work the same way - they can send radio waves in specific patterns depending on their design.\n\n**What the code does:**\n\n1. Calculates radiation patterns for dipole antennas\n2. Models array factors for multiple-element systems\n3. 
Computes directivity and half-power beamwidth (HPBW)\n\n**Key findings:**\n\n- A short dipole has pattern F(θ) = |sin θ| - it's like a donut shape\n- Half-wave dipole is slightly more directional: D = 1.64 vs 1.5\n- Arrays are where it gets interesting: combine 4 elements at λ/2 spacing and you get a focused beam\n- More elements = narrower beam but watch out for grating lobes when spacing > λ\n\n**The math (in plain terms):**\n\nDirectivity tells you how well an antenna focuses power:\nD = 4π / ∫|Fn(θ,φ)|² sin θ dθ dφ\n\nFor a 4-element broadside array, you can get much higher directivity than a single dipole.\n\n**Practical applications:**\n- WiFi routers use multiple antennas for better coverage\n- 5G uses massive MIMO arrays\n- Radar systems use phased arrays to steer beams electronically\n\nThe visualization shows polar plots of different patterns plus how changing element count and spacing affects the beam.\n\nCheck out the full notebook with code and interactive plots:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/antenna_radiation_pattern.ipynb\n\nHappy to answer questions about the implementation!",4062"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/antenna_radiation_pattern.ipynb",4063"category": "general",4064"date": "2026-06-21",4065"time": "09:00"4066},4067{4068"source": "lennard_jones_potential",4069"content_type": "notebook",4070"subreddit": "CoCalc",4071"title": "Visualizing the Lennard-Jones Potential - The Foundation of Molecular Simulations",4072"body": "I created a Jupyter notebook exploring the Lennard-Jones potential, which is one of the most important equations in computational physics and chemistry.\n\n**What is it?**\n\nThe LJ potential describes how two neutral atoms interact:\n\nV(r) = 4ε[(σ/r)¹² - (σ/r)⁶]\n\nWhere:\n- r = distance between atoms\n- ε = depth of the potential well (interaction strength)\n- σ = distance where potential equals 
zero\n\n**ELI5 Version:**\n\nImagine two magnets, but weird. When far apart, they gently attract (van der Waals forces - think gecko feet sticking to walls). But push them too close and they STRONGLY repel (their electron clouds don't want to overlap - Pauli exclusion principle).\n\nThe \"sweet spot\" where forces balance is at r ≈ 1.122σ.\n\n**Why r⁻¹² and r⁻⁶?**\n\n- The r⁻⁶ term has real physics behind it (quantum mechanical perturbation theory for induced dipoles)\n- The r⁻¹² term is chosen because 12 = 2×6, making computation faster!\n\n**What I learned:**\n\n1. How to decompose the potential into repulsive/attractive components\n2. Noble gases have very different parameters - Xenon's well depth (230K) is 22× larger than Helium (10.2K)\n3. The force is simply -dV/dr, and you can visualize where it's repulsive vs attractive\n\n**View the interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/lennard_jones_potential.ipynb\n\nCode uses NumPy and Matplotlib. Happy to answer questions about the implementation!",4073"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/lennard_jones_potential.ipynb",4074"category": "general",4075"date": "2026-06-21",4076"time": "09:00"4077},4078{4079"source": "numerical_differentiation",4080"content_type": "notebook",4081"subreddit": "CoCalc",4082"title": "Understanding Numerical Differentiation: From Theory to Python Implementation",4083"body": "I created an interactive notebook exploring how computers approximate derivatives using finite differences.\n\n**ELI5 version:** Imagine you want to know how steep a hill is at a specific point, but you can't use calculus formulas. Instead, you walk a tiny distance forward, measure the height change, and divide by the distance. That's numerical differentiation!\n\n**The three main methods:**\n\n1. **Forward difference:** f'(x) ≈ [f(x+h) - f(x)]/h\n2. 
**Backward difference:** f'(x) ≈ [f(x) - f(x-h)]/h\n3. **Central difference:** f'(x) ≈ [f(x+h) - f(x-h)]/(2h)\n\n**Key insight:** Central difference is more accurate (O(h²) error vs O(h)) because the odd-powered error terms cancel out when you subtract.\n\n**The plot twist:** You'd think smaller h = better approximation, but there's a trade-off:\n- Truncation error decreases as h → 0\n- Round-off error increases as h → 0 (dividing tiny differences)\n\nOptimal step size is around h ≈ 10⁻⁵ for double precision floats.\n\nThe notebook includes Python implementations and error convergence plots showing these theoretical predictions in action.\n\n**View the full interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/numerical_differentiation.ipynb",4084"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/numerical_differentiation.ipynb",4085"category": "general",4086"date": "2026-06-22",4087"time": "09:00"4088},4089{4090"source": "random_graph_erdos_renyi",4091"content_type": "notebook",4092"subreddit": "CoCalc",4093"title": "Implementing Erdős-Rényi Random Graphs: Watching Phase Transitions Emerge from Code",4094"body": "I just built a Python implementation of the Erdős-Rényi random graph model, and I wanted to share what I learned because the results are genuinely fascinating.\n\n**What's the Erdős-Rényi Model?**\n\nImagine you have n nodes and you connect each pair with probability p. That's it. Pure randomness. But here's where it gets interesting...\n\n**The Phase Transition**\n\nThere's a critical threshold at c = np = 1 (average degree = 1).
Below this:\n- All components are tiny (O(log n) size)\n- The network is fragmented\n\nAbove this:\n- A \"giant component\" suddenly appears\n- It contains a macroscopic fraction of all nodes\n\nThe fraction S in the giant component satisfies: S = 1 - e^(-cS)\n\nThis is a transcendental equation—it only has a non-trivial solution when c > 1.\n\n**The Code**\n\nI used scipy.sparse for the adjacency matrix (essential for large graphs) and scipy.sparse.csgraph for connected components. For degree distribution analysis, the empirical results matched the theoretical Poisson distribution almost perfectly.\n\n**What I Learned**\n\n1. The phase transition is SHARP—it really does happen right at c=1\n2. Degree distribution → Poisson(λ) is a consequence of edge independence\n3. Force-directed layouts make these graphs beautiful to visualize\n\n**View the Full Notebook**\n\nYou can explore the complete implementation with interactive plots here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/random_graph_erdos_renyi.ipynb\n\nHappy to answer questions about the implementation!\n\n---",4095"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/random_graph_erdos_renyi.ipynb",4096"category": "general",4097"date": "2026-06-22",4098"time": "09:00"4099},4100{4101"source": "residue_theorem",4102"content_type": "notebook",4103"subreddit": "CoCalc",4104"title": "I implemented the Residue Theorem in Python - here's how it transforms impossible integrals into simple calculations",4105"body": "Hey everyone!\n\nI just finished a notebook exploring the Residue Theorem from complex analysis, and I wanted to share what I learned because it genuinely blew my mind.\n\n**What is the Residue Theorem?**\n\nIn simple terms: instead of computing a contour integral around a closed loop in the complex plane directly (which can be hard), you just find the \"residues\" at the singularities inside that loop and 
add them up.\n\nThe formula: ∮f(z)dz = 2πi × (sum of all residues inside the contour)\n\n**ELI5 version:** Imagine you're walking around a lake and want to measure something about the water. Instead of measuring the whole perimeter, you only need to check certain \"special points\" (poles) inside the lake. The global information (the integral) is determined entirely by local information (the residues).\n\n**What I implemented:**\n\n1. Computed residues at simple poles for f(z) = 1/(z²+1)\n - Pole at z = i has residue 1/(2i)\n - Pole at z = -i has residue -1/(2i)\n\n2. Verified contour integrals numerically match the theorem prediction\n\n3. The coolest part: evaluated the real integral ∫₀^∞ 1/(x²+1)dx = π/2 by using a semicircular contour that only encloses z = i\n\n4. Also handled double poles using the derivative formula\n\n**Key takeaway:** The Residue Theorem connects local behavior (what happens at singularities) to global properties (the value of the integral). This is why it's so powerful for solving real-world integrals in physics and engineering.\n\nCheck out the full interactive notebook here: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/residue_theorem.ipynb\n\nHappy to answer questions!\n\n---",4106"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/residue_theorem.ipynb",4107"category": "general",4108"date": "2026-06-23",4109"time": "09:00"4110},4111{4112"source": "n_body_gravitational_simulation",4113"content_type": "notebook",4114"subreddit": "CoCalc",4115"title": "I built an N-body gravitational simulation in Python - here's how it works",4116"body": "Hey everyone! I just finished implementing an N-body gravitational simulation and wanted to share what I learned.\n\n**The Problem**\n\nThe N-body problem asks: given N objects with mass, how do they move under mutual gravitational attraction? 
It's one of the classic problems in physics - there's no general analytical solution for N>2, so we need numerical methods.\n\n**The Physics (ELI5)**\n\nEvery object pulls on every other object with a force proportional to their masses and inversely proportional to distance squared: F = Gm₁m₂/r²\n\nThe tricky part is that all objects move simultaneously, so each one's position affects all the others. It's like a cosmic dance where everyone reacts to everyone else at once!\n\n**The Implementation**\n\nI used Python with NumPy and implemented:\n\n1. **Velocity Verlet integration** - A symplectic integrator that conserves energy well over long times. The basic idea:\n - Update positions: r(t+dt) = r(t) + v(t)dt + ½a(t)dt²\n - Compute new accelerations from updated positions\n - Update velocities: v(t+dt) = v(t) + ½[a(t) + a(t+dt)]dt\n\n2. **Softening parameter** - To prevent numerical blow-ups when particles get close, we add ε² to the distance: a ∝ 1/(r²+ε²)^(3/2)\n\n**Results**\n\n- 8 bodies: 1 central \"star\" (mass=100) + 7 smaller orbiting bodies\n- 5000 time steps with dt=0.001\n- Energy drift: less than 0.01%!\n\nThe smaller bodies follow roughly Keplerian orbits but with perturbations from each other - you can see the chaotic nature of the 3+ body problem.\n\n**What I Learned**\n\n- Why symplectic integrators matter for long-term stability\n- How softening prevents singularities\n- Why direct N-body is O(N²) - every particle interacts with every other particle\n\nFor larger simulations, you'd want Barnes-Hut (O(N log N)) or Fast Multipole Method (O(N)).\n\n**Try it yourself!**\n\nInteractive notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/n_body_gravitational_simulation.ipynb\n\nHappy to answer questions about the implementation!",4117"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/n_body_gravitational_simulation.ipynb",4118"category": 
"general",4119"date": "2026-06-23",4120"time": "09:00"4121},4122{4123"source": "rsa_encryption_basics",4124"content_type": "notebook",4125"subreddit": "CoCalc",4126"title": "I built RSA encryption from scratch in Python - here's what I learned about the math behind your internet security",4127"body": "I've always been curious about how public-key cryptography actually works, so I decided to implement RSA from the ground up. Here's a breakdown:\n\n**The Core Idea (ELI5)**\n\nImagine a mailbox where anyone can drop in letters (encrypt), but only you have the key to open it (decrypt). That's public-key cryptography.\n\n**The Math**\n\n1. Pick two prime numbers p and q\n2. Multiply them: n = p × q (this is public)\n3. Calculate φ(n) = (p-1)(q-1) (Euler's totient)\n4. Find e where gcd(e, φ(n)) = 1 (public exponent, usually 65537)\n5. Find d where e × d ≡ 1 (mod φ(n)) (private exponent)\n\n**Encryption/Decryption**\n\n- Encrypt: c = m^e mod n\n- Decrypt: m = c^d mod n\n\n**Why It's Secure**\n\nMultiplying p × q is easy. Factoring n back into p and q is HARD (sub-exponential time). 
That's the \"trapdoor\" function.\n\n**What I Learned**\n\n- Decryption is slower than encryption because d is much larger than e\n- Key generation time grows rapidly with key size due to primality testing\n- 2048-bit keys are the current minimum recommendation (~112 bits of security)\n- RSA will be vulnerable to quantum computers running Shor's algorithm\n\n**Try It Yourself**\n\nView and run the interactive notebook here: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/rsa_encryption_basics.ipynb\n\nHappy to answer questions about the implementation!\n\n---",4128"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/rsa_encryption_basics.ipynb",4129"category": "general",4130"date": "2026-06-24",4131"time": "09:00"4132},4133{4134"source": "prisoners_dilemma_simulation",4135"content_type": "notebook",4136"subreddit": "CoCalc",4137"title": "I simulated the Prisoner's Dilemma in Python - here's why Tit-for-Tat wins",4138"body": "**What is the Prisoner's Dilemma?**\n\nELI5: Imagine you and a friend are caught by police and separated. You can either stay quiet (cooperate) or snitch (defect). If you both stay quiet, you get light sentences. If you both snitch, you both get medium sentences. But if you snitch while your friend stays quiet, you go free and they get the maximum sentence.\n\nThe math says you should always defect - but if you play this game repeatedly, something interesting happens.\n\n**What I built:**\n\nA Python simulation with 6 strategies competing in a round-robin tournament:\n- Always Cooperate\n- Always Defect\n- Tit-for-Tat (copy opponent's last move)\n- Random\n- Grim Trigger (cooperate until betrayed, then always defect)\n- Pavlov (win-stay, lose-shift)\n\n**Key findings:**\n\n1. Tit-for-Tat consistently performs well by being \"nice\" (starts cooperating), \"retaliatory\" (punishes defection), and \"forgiving\" (returns to cooperation)\n\n2. 
The payoff ratio matters: R/P = 3 means mutual cooperation yields 3× the long-term benefit of mutual defection\n\n3. Unconditional strategies fail - \"Always Cooperate\" gets exploited, \"Always Defect\" misses out on reciprocity\n\n**The code uses:**\n- NumPy for simulation\n- matplotlib for visualization\n- Object-oriented strategy classes\n\nThis models real scenarios like climate agreements, arms races, and business competition.\n\n**View the full interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/prisoners_dilemma_simulation.ipynb",4139"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/prisoners_dilemma_simulation.ipynb",4140"category": "general",4141"date": "2026-06-24",4142"time": "09:00"4143},4144{4145"source": "krylov_subspace_methods",4146"content_type": "notebook",4147"subreddit": "CoCalc",4148"title": "Implemented Conjugate Gradient and GMRES from scratch - here's how Krylov subspace methods work",4149"body": "I just built two fundamental iterative solvers from scratch and wanted to share what I learned about Krylov subspace methods.\n\n**The Problem:** Solve Ax = b where A is a huge sparse matrix (think millions of entries, but mostly zeros).\n\n**The Insight:** You don't need to invert A! The solution lies in a special subspace called the Krylov subspace:\n\nK_k(A, v) = span{v, Av, A²v, ..., Aᵏ⁻¹v}\n\nThis comes from the Cayley-Hamilton theorem - every matrix satisfies its own characteristic polynomial, so A⁻¹ can be written as a polynomial in A.\n\n**Two Main Methods:**\n\n1. **Conjugate Gradient (CG)** - For symmetric positive definite matrices\n - Generates A-orthogonal search directions\n - Minimizes the A-norm of error\n - Only needs short recurrences (memory efficient!)\n\n2. 
**GMRES** - For general matrices\n - Uses Arnoldi iteration to build orthonormal basis\n - Minimizes 2-norm of residual\n - Memory grows with iterations\n\n**My Results:**\n\nTested on the 2D Poisson equation (50×50 grid = 2500 unknowns):\n- Custom CG: 143 iterations\n- Custom GMRES: 142 iterations\n- Both achieved relative error of ~1e-7\n\nThe coolest part? Convergence rate depends on the condition number κ = λₘₐₓ/λₘᵢₙ. For CG:\n\n‖error_k‖ ≤ 2 × ((√κ - 1)/(√κ + 1))ᵏ × ‖error_0‖\n\nAs the grid gets finer, κ increases (O(n²) for Poisson), so you need more iterations. This is why preconditioning is crucial for real applications!\n\n**View the full notebook with code and visualizations:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/krylov_subspace_methods.ipynb\n\nHappy to answer questions about the implementation!",4150"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/krylov_subspace_methods.ipynb",4151"category": "general",4152"date": "2026-06-25",4153"time": "09:00"4154},4155{4156"source": "k_means_clustering",4157"content_type": "notebook",4158"subreddit": "CoCalc",4159"title": "I implemented K-Means clustering from scratch - here's what I learned about how it actually works",4160"body": "Hey everyone! I just finished implementing K-Means clustering from scratch in Python (no sklearn), and wanted to share the key insights.\n\n**What is K-Means?**\n\nIt's an unsupervised algorithm that partitions data into K clusters by minimizing the \"within-cluster sum of squares\" (WCSS):\n\nJ = ∑ ||xᵢ - μₖ||²\n\nBasically, it tries to make each cluster as tight as possible around its center (centroid).\n\n**How it works (Lloyd's Algorithm):**\n\n1. Initialize K centroids (I used K-Means++ for better starting points)\n2. Assign each point to its nearest centroid\n3. Update each centroid as the mean of its assigned points\n4. 
Repeat until convergence\n\n**What I learned:**\n\n- **Convergence is guaranteed** because J is bounded below by 0 and each step can only decrease it\n- **But it's only a local minimum** - different initializations give different results\n- **K-Means++ matters** - random initialization can lead to poor clustering\n- **The elbow method works** - plotting inertia vs K clearly showed K=4 was optimal for my data\n\n**Results:**\n\n- Tested on synthetic Gaussian clusters (4 groups)\n- Converged in ~10 iterations\n- Silhouette score: 0.58 (solid separation)\n- Learned centroids were within 0.1 units of true centers\n\n**Limitations to keep in mind:**\n\n- Assumes spherical clusters\n- Sensitive to outliers\n- You need to specify K beforehand\n\nThe full notebook with visualizations showing centroid movement during optimization is available here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/k_means_clustering.ipynb\n\nHappy to answer questions about the implementation!\n\n---",4161"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/k_means_clustering.ipynb",4162"category": "general",4163"date": "2026-06-25",4164"time": "09:00"4165},4166{4167"source": "greeks_calculation_options",4168"content_type": "notebook",4169"subreddit": "CoCalc",4170"title": "Built a complete Black-Scholes Greeks calculator - here's what I learned about options risk",4171"body": "I just finished implementing all 5 primary Greeks for options pricing and wanted to share some insights.\n\n**ELI5 version:** The Greeks tell you how sensitive your option price is to different factors:\n\n- **Delta (Δ)** - If the stock goes up $1, how much does your option go up? ATM calls have delta ≈ 0.5\n- **Gamma (Γ)** - How fast does delta change? Think of it as acceleration. Peaks at-the-money\n- **Theta (Θ)** - How much value do you lose each day? 
Time decay is brutal for ATM options\n- **Vega (ν)** - If volatility increases 1%, what happens to your option? Always positive\n- **Rho (ρ)** - Interest rate sensitivity. Usually the least important Greek\n\n**Key formulas (using Black-Scholes):**\n\nDelta_call = N(d₁)\n\nGamma = N'(d₁) / (S × σ × √T)\n\nwhere d₁ = [ln(S/K) + (r + σ²/2)T] / (σ√T)\n\n**Biggest insight:** Gamma explodes as expiration approaches for ATM options. A 1-week ATM option has way more gamma than a 1-year option. This is why market makers constantly rehedge near expiration.\n\nThe code uses NumPy and SciPy with a clean class-based structure. All analytical formulas - no numerical differentiation needed.\n\n**View and run the full notebook here:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/greeks_calculation_options.ipynb\n\nHappy to answer questions about the implementation or the math!",4172"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/greeks_calculation_options.ipynb",4173"category": "general",4174"date": "2026-06-26",4175"time": "09:00"4176},4177{4178"source": "diffie_hellman_key_exchange",4179"content_type": "notebook",4180"subreddit": "CoCalc",4181"title": "[OC] Python implementation of Diffie-Hellman Key Exchange with security analysis and MITM demo",4182"body": "I built a complete implementation of the Diffie-Hellman key exchange protocol to understand how two parties can establish a shared secret over an insecure channel.\n\n**ELI5 Version:**\nImagine Alice and Bob want to agree on a secret color, but they can only communicate by shouting across a room full of people. They each pick a private color, mix it with a shared base color, and exchange the results. Then each person mixes their private color with what they received. 
Due to how color mixing works, they both end up with the same final color—but eavesdroppers who only saw the intermediate colors can't figure out the final shade.\n\n**The Math:**\n- Public parameters: prime p and generator g\n- Alice picks private a, sends A = g^a mod p\n- Bob picks private b, sends B = g^b mod p\n- Both compute shared secret s = g^(ab) mod p\n\n**What I learned:**\n\n1. The security relies on the discrete logarithm problem: given g^x mod p, finding x is computationally infeasible for large primes\n\n2. Attack time grows exponentially—my brute-force analysis showed clear exponential growth even for small key sizes\n\n3. Basic DH is vulnerable to man-in-the-middle attacks! An attacker can intercept both public keys and establish separate secrets with each party. That's why real implementations use authentication (certificates, signatures)\n\n4. Modern systems use 2048+ bit primes for adequate security (112-bit security level per NIST)\n\nThe notebook includes visualizations of attack complexity growth and a working MITM demonstration.\n\n**View the full interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/diffie_hellman_key_exchange.ipynb\n\nHappy to answer questions about the implementation!",4183"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/diffie_hellman_key_exchange.ipynb",4184"category": "general",4185"date": "2026-06-26",4186"time": "09:00"4187},4188{4189"source": "dragon_curve_fractal",4190"content_type": "notebook",4191"subreddit": "CoCalc",4192"title": "Visualizing the Dragon Curve Fractal in Python - Space-Filling Curves with Simple Rules",4193"body": "**ELI5:** Imagine folding a strip of paper in half over and over, then unfolding it so every crease makes a 90° angle. 
The shape you get is a Dragon Curve!\n\n**What I learned building this:**\n\nThe Dragon Curve is a fractal discovered by NASA physicists in the 1960s (and featured in Jurassic Park!). The mind-blowing part: it has a fractal dimension of D=2, meaning it's *space-filling* in the limit - yet it never crosses itself.\n\n**The math (simplified):**\n\nThe turn sequence at iteration n follows:\nSₙ = Sₙ₋₁ + R + reverse(flip(Sₙ₋₁))\n\nSo you take the previous sequence, add a right turn, then append the reversed and flipped version of the previous sequence.\n\n**Cool properties:**\n- At iteration n, you get 2ⁿ segments\n- Self-similar: two copies make the next iteration (rotated 90°)\n- Four copies tile the entire plane perfectly\n\n**Implementation highlights:**\n- Used numpy for coordinate generation\n- matplotlib LineCollection for efficient segment rendering\n- Color gradient shows progression through the curve\n\nAt iteration 15, the curve has 32,768 segments!\n\n**View and run the notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/dragon_curve_fractal.ipynb",4194"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/dragon_curve_fractal.ipynb",4195"category": "general",4196"date": "2026-06-27",4197"time": "09:00"4198},4199{4200"source": "conjugate_gradient_method",4201"content_type": "notebook",4202"subreddit": "CoCalc",4203"title": "Understanding the Conjugate Gradient Method: Solving Linear Systems Without Matrix Inversion",4204"body": "I created a notebook exploring the Conjugate Gradient (CG) method, one of the most elegant algorithms in numerical linear algebra.\n\n**The Problem:** Solve Ax = b where A is a symmetric positive definite matrix.\n\n**ELI5 Version:**\nImagine you're trying to find the lowest point in a valley. You could walk directly downhill (steepest descent), but you'd zigzag inefficiently. 
The CG method is smarter—each step takes you in a direction that's \"independent\" from all previous ones, so you never waste effort retreading old ground. For an n-dimensional valley, you reach the bottom in at most n steps.\n\n**The Math (simplified):**\n- Solving Ax = b is equivalent to minimizing f(x) = ½xᵀAx - bᵀx\n- Gradient: ∇f(x) = Ax - b (the residual r = b - Ax)\n- Search directions p₀, p₁, ... are A-conjugate: pᵢᵀApⱼ = 0 for i ≠ j\n- Step size: α = (rᵀr)/(pᵀAp)\n- Update: x_{k+1} = x_k + αp_k\n\n**Key Finding:**\nConvergence speed depends on the condition number κ = λₘₐₓ/λₘᵢₙ. The error bound is:\n\n‖x_k - x*‖_A ≤ 2((√κ - 1)/(√κ + 1))^k ‖x₀ - x*‖_A\n\nSo iterations scale as O(√κ), not O(κ) like steepest descent.\n\n**Experiments:**\n- Tested 100×100 matrices with κ ∈ {10, 100, 1000}\n- Higher condition numbers need more iterations (as expected)\n- Theoretical bounds match observed convergence closely\n\n**Why It Matters:**\nFor large sparse systems (think: finite element methods, optimization), CG beats direct solvers. No need to form or invert matrices—just matrix-vector products.\n\nView the full interactive notebook with code and visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/conjugate_gradient_method.ipynb",4205"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/conjugate_gradient_method.ipynb",4206"category": "general",4207"date": "2026-06-27",4208"time": "09:00"4209},4210{4211"source": "density_matrix",4212"content_type": "notebook",4213"subreddit": "CoCalc",4214"title": "Built a density matrix toolkit in Python - visualizing pure vs mixed quantum states",4215"body": "Hey everyone! I created a Jupyter notebook exploring density matrices in quantum mechanics, and wanted to share what I learned.\n\n**ELI5 Version:**\nImagine you have a coin.
If you KNOW it's heads-up, that's a \"pure state.\" But if someone flipped it and you don't know the result - you just know there's a 70% chance it's heads - that's a \"mixed state.\" The density matrix is a mathematical object that can describe BOTH situations in one framework.\n\n**What the notebook covers:**\n\n1. **Pure state density matrices** - For a state |ψ⟩, we compute ρ = |ψ⟩⟨ψ|\n\n2. **Mixed states** - Classical mixtures like ρ = p|0⟩⟨0| + (1-p)|1⟩⟨1|\n\n3. **Purity and entropy** - Pure states have Tr(ρ²) = 1 and zero entropy. The maximally mixed state (p=0.5) has minimum purity = 0.5 and maximum entropy = ln(2)\n\n4. **Bloch sphere** - Single qubit states map to a 3D ball. Pure states live on the surface (|r⃗| = 1), mixed states are inside (|r⃗| < 1)\n\n5. **Time evolution** - Simulated Rabi oscillations under H = (ω/2)σₓ using the von Neumann equation\n\n**Libraries used:** NumPy, SciPy (for matrix exponential), Matplotlib\n\nThe code is pretty clean - about 150 lines total. The trickiest part was getting the von Neumann entropy calculation right (need to filter out zero eigenvalues to avoid log(0)).\n\nView and run the notebook directly in your browser:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/density_matrix.ipynb\n\nHappy to answer questions about the implementation or the physics!\n\n---",4216"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/density_matrix.ipynb",4217"category": "general",4218"date": "2026-06-28",4219"time": "09:00"4220},4221{4222"source": "robot_kinematics",4223"content_type": "notebook",4224"subreddit": "CoCalc",4225"title": "[Educational] Robot Arm Kinematics from Scratch - Complete Python Notebook with Visualizations",4226"body": "I created an educational notebook that implements robot arm kinematics from first principles for a 2-link planar robot (2R arm).\n\n**Topics covered:**\n\n1. 
**Forward Kinematics**\n - Compute end-effector (x, y) from joint angles (theta1, theta2)\n - Trigonometric formulation\n\n2. **Inverse Kinematics**\n - Find joint angles to reach a target position\n - Law of cosines approach\n - Multiple solutions (elbow-up vs elbow-down configurations)\n\n3. **Workspace Analysis**\n - Reachable region visualization\n - Inner and outer boundaries\n\n4. **Jacobian Matrix**\n - Relates joint velocities to end-effector velocities\n - Manipulability measure (determinant of J)\n - Singularity analysis\n\n5. **Trajectory Planning**\n - 5th-order polynomial for smooth motion\n - Zero velocity and acceleration at start/end\n - Path visualization in workspace\n\nThe notebook includes 4 detailed visualizations showing arm configurations, workspace, trajectories, and manipulability ellipsoids.\n\nGreat for students learning robotics fundamentals or engineers wanting to understand the math behind robot motion.\n\nInteractive notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/robot_kinematics/robot_kinematics.ipynb\n\nSuggested subreddits: r/robotics, r/ControlTheory, r/MechanicalEngineering, r/learnpython",4227"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/robot_kinematics/robot_kinematics.ipynb",4228"category": "general",4229"date": "2026-06-28",4230"time": "09:00"4231},4232{4233"source": "laguerre_polynomials",4234"content_type": "notebook",4235"subreddit": "SageMath",4236"title": "Implemented Laguerre Polynomials in Python - From Math Theory to Hydrogen Atom Wave Functions",4237"body": "I built a computational notebook exploring Laguerre polynomials and wanted to share what I learned!\n\n**What are Laguerre polynomials?**\n\nThey're a family of orthogonal polynomials that are solutions to a specific differential equation:\n\nx·(d²L_n/dx²) + (1-x)·(dL_n/dx) + n·L_n = 0\n\nThe first few are simple:\n- L₀(x) = 1\n- L₁(x) = 1 - x\n- L₂(x) = 
1 - 2x + x²/2\n\n**Why should you care?**\n\nThe radial wave functions of the hydrogen atom (yes, actual quantum mechanics!) are expressed using associated Laguerre polynomials. When you see those familiar orbital shapes (1s, 2p, 3d), Laguerre polynomials are doing the heavy lifting.\n\n**Implementation highlights:**\n\n1. Built a recurrence-based calculator that matches SciPy to machine precision\n2. Numerically verified orthogonality - the integral ∫₀^∞ e^(-x)·L_m(x)·L_n(x) dx really does equal δ_mn\n3. Computed hydrogen atom radial probability densities for various quantum states\n4. Demonstrated Gauss-Laguerre quadrature - with just 5 nodes, you get exact integrals for polynomials up to degree 9!\n\n**What I learned:**\n\nThe orthogonality property with weight e^(-x) on [0,∞) is what makes these polynomials so useful. Unlike Legendre (bounded interval) or Hermite (Gaussian weight), Laguerre polynomials naturally handle semi-infinite domains with exponential decay - exactly what you need for atomic physics.\n\nInteractive notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/laguerre_polynomials.ipynb\n\nHappy to answer questions about the implementation or the math!",4238"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/laguerre_polynomials.ipynb",4239"category": "general",4240"date": "2026-06-29",4241"time": "09:00"4242},4243{4244"source": "diffraction_patterns",4245"content_type": "notebook",4246"subreddit": "CoCalc",4247"title": "Simulating Diffraction Patterns in Python: Single-Slit, Double-Slit, and Airy Disk",4248"body": "I created a computational notebook exploring three fundamental diffraction phenomena that demonstrate the wave nature of light.\n\n**What I simulated:**\n\n1. **Single-slit diffraction** - When light passes through a narrow slit, it spreads out. The intensity follows I = I₀(sin β/β)² where β = πa·sin θ/λ.
Narrower slits → wider patterns.\n\n2. **Double-slit diffraction** - Young's famous experiment! The pattern combines interference (cos² term from two sources) with the single-slit envelope. The ratio d/a tells you how many bright fringes fit in the central maximum.\n\n3. **Circular aperture (Airy pattern)** - This one uses Bessel functions! The resulting concentric rings define resolution limits. The first dark ring appears at θ ≈ 1.22λ/D (Rayleigh criterion).\n\n**Why it matters:**\n- Microscope resolution limits\n- Telescope mirror sizing\n- Spectroscopy with diffraction gratings\n- Holography fundamentals\n\n**Parameters used:**\n- λ = 632.8 nm (He-Ne laser)\n- Slit width: 50 μm\n- Slit separation: 250 μm\n- Aperture diameter: 100 μm\n\nThe code is self-contained using NumPy, SciPy, and Matplotlib.\n\n**View the interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/diffraction_patterns.ipynb\n\n---",4249"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/diffraction_patterns.ipynb",4250"category": "general",4251"date": "2026-06-29",4252"time": "09:00"4253},4254{4255"source": "variational_autoencoder",4256"content_type": "notebook",4257"subreddit": "CoCalc",4258"title": "I built a Variational Autoencoder from scratch in NumPy - here's what I learned about generative models",4259"body": "I wanted to deeply understand VAEs beyond just using PyTorch's built-in layers, so I implemented one from scratch with only NumPy. Here's what clicked for me:\n\n**The Problem VAEs Solve**\n\nRegular autoencoders compress data to a single point in latent space. But this creates \"holes\" - points between encodings that decode to garbage. VAEs fix this by learning a *distribution* instead.\n\n**The Key Insight**\n\nVAEs encode each input to parameters μ (mean) and σ (standard deviation) of a Gaussian. Then they sample z from this distribution.
But wait - you can't backprop through random sampling!\n\n**The Reparameterization Trick**\n\nInstead of sampling z ~ N(μ, σ²), we compute:\nz = μ + σ · ε, where ε ~ N(0, 1)\n\nNow the randomness is in ε (which doesn't need gradients), and μ and σ are deterministic functions we can differentiate!\n\n**The Loss Function**\n\nVAE loss has two parts:\n1. Reconstruction loss - how well does the decoder recreate the input?\n2. KL divergence - how close is q(z|x) to the prior N(0, I)?\n\nThe KL term acts as regularization, keeping the latent space smooth and preventing the model from just memorizing.\n\n**What I Learned**\n\n- Implementing backprop for the KL term manually really clarified how the gradients flow\n- The balance between reconstruction and KL is crucial - too much KL and outputs are blurry\n- Watching samples improve from random noise to structured data is incredibly satisfying\n\nThe full notebook with derivations and visualizations is here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/variational_autoencoder.ipynb\n\nHappy to answer questions about the implementation!",4260"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/variational_autoencoder.ipynb",4261"category": "general",4262"date": "2026-06-30",4263"time": "09:00"4264},4265{4266"source": "recurrent_neural_network",4267"content_type": "notebook",4268"subreddit": "CoCalc",4269"title": "I built an RNN from scratch in NumPy to understand how neural networks learn sequences",4270"body": "I wanted to really understand how Recurrent Neural Networks work, so I implemented one from scratch without any deep learning frameworks.\n\n**What I learned:**\n\nRNNs process sequences by maintaining a hidden state that updates at each time step:\n\nhₜ = tanh(Wₕₕ·hₜ₋₁ + Wₓₕ·xₜ + b)\n\nThis is the key equation! The hidden state hₜ depends on both the current input xₜ AND the previous hidden state hₜ₋₁. 
That's where the \"memory\" comes from.\n\n**The implementation includes:**\n\n1. Forward pass through entire sequences\n2. Backpropagation through time (BPTT) - computing gradients that flow backward through all time steps\n3. Gradient clipping to prevent exploding gradients\n4. Xavier initialization for stable training\n\n**Why vanilla RNNs struggle:**\n\nThe gradient ∂L/∂Wₕₕ involves products like ∏(∂hⱼ/∂hⱼ₋₁). Since tanh derivatives are ≤1 and we multiply many of them together, gradients vanish exponentially for long sequences. That's why LSTMs add gates to control information flow.\n\n**Results:**\n\nTrained on 500 sine wave sequences (length 25). The network learned to predict the next value with low MSE. Watching the hidden units oscillate in sync with the input signal was really satisfying!\n\n**View the full notebook with code and visualizations:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/recurrent_neural_network.ipynb\n\nHappy to answer questions about the implementation!",4271"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/recurrent_neural_network.ipynb",4272"category": "machine-learning",4273"date": "2026-06-30",4274"time": "09:00"4275},4276{4277"source": "hydrogen_atom_wavefunctions",4278"content_type": "notebook",4279"subreddit": "CoCalc",4280"title": "I built a Python simulation of hydrogen atom wavefunctions - visualizing where electrons actually are",4281"body": "I've always found quantum mechanics fascinating but abstract. So I decided to actually compute and visualize the hydrogen atom wavefunctions in Python.\n\n**ELI5 version:** In quantum mechanics, we can't say exactly where an electron is. Instead, we have a wavefunction ψ that tells us the probability of finding it somewhere. 
For hydrogen, |ψ|² gives the probability density.\n\n**What I built:**\n\nThe wavefunction separates into radial and angular parts:\n\nψ(r,θ,φ) = R(r) × Y(θ,φ)\n\nI implemented the radial part R_nl(r) using:\n- Associated Laguerre polynomials (via scipy.special.genlaguerre)\n- Proper normalization factors\n- Calculated probability densities r²|R|²\n\n**Key insights from the visualization:**\n\n1. **Nodes matter**: The 1s orbital has no radial nodes, 2s has 1, 3s has 2. These are places where the probability of finding the electron is exactly zero.\n\n2. **Most probable radius**: For 1s, the electron is most likely at r = a₀ (Bohr radius ≈ 0.529 Å). This matches the old Bohr model prediction!\n\n3. **Higher orbitals spread out**: 3s electrons can be found much further from the nucleus than 1s electrons.\n\n4. **Energy levels**: E_n = -13.6 eV/n² - only depends on n, not l or m (this degeneracy is special to hydrogen).\n\n**Code highlights:**\n- Used numpy for numerical computation\n- scipy.special for Laguerre polynomials and spherical harmonics\n- matplotlib for visualization\n\nThis is one of the few quantum systems we can solve exactly, which makes it perfect for learning.\n\n**View the full interactive notebook:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/hydrogen_atom_wavefunctions.ipynb\n\nHappy to answer questions about the physics or implementation!",4282"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/hydrogen_atom_wavefunctions.ipynb",4283"category": "general",4284"date": "2026-07-01",4285"time": "09:00"4286},4287{4288"source": "hodgkin_huxley_neuron_model",4289"content_type": "notebook",4290"subreddit": "CoCalc",4291"title": "I implemented the Hodgkin-Huxley neuron model in Python - here's how neurons actually fire",4292"body": "I've been learning computational neuroscience and just finished implementing the famous Hodgkin-Huxley model, published in 1952, which later earned Hodgkin and Huxley the 1963
Nobel Prize. Thought I'd share what I learned!\n\n**ELI5: How do neurons fire?**\n\nYour neuron membrane is like a tiny battery. It maintains a voltage difference (-65mV at rest) using ion pumps. When stimulated:\n\n1. Sodium (Na⁺) channels open → positive ions rush in → voltage spikes up\n2. Sodium channels close, Potassium (K⁺) channels open → positive ions rush out → voltage drops back down\n3. Brief \"overshoot\" below resting potential, then recovery\n\nThis whole cycle takes about 1-2 milliseconds!\n\n**The Math (simplified)**\n\nThe membrane voltage V follows:\n\nCₘ dV/dt = I_external - I_Na - I_K - I_leak\n\nWhere each ionic current depends on:\n- Maximum conductance (how \"open\" the channels can get)\n- Gating variables (probability channels are open)\n- Driving force (difference from equilibrium potential)\n\nThe gating variables (m, h, n) are where it gets interesting - they follow their own differential equations with voltage-dependent rate constants.\n\n**What I learned:**\n\n- Action potentials are \"all-or-none\" - they either happen fully or not at all\n- The refractory period exists because channels need time to reset\n- Multiple spikes occur during sustained stimulation\n- scipy.integrate.odeint handles the coupled ODEs beautifully\n\n**View the full notebook with code and visualizations:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/hodgkin_huxley_neuron_model.ipynb\n\nHappy to answer questions about the implementation!",4293"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/hodgkin_huxley_neuron_model.ipynb",4294"category": "general",4295"date": "2026-07-01",4296"time": "09:00"4297},4298{4299"source": "gravitational_waves",4300"content_type": "notebook",4301"subreddit": "CoCalc",4302"title": "I built a gravitational wave simulator in Python - here's what I learned about GW150914",4303"body": "Hey everyone! 
I just finished a Python project simulating gravitational waves from binary black hole mergers, specifically modeling something like the famous GW150914 detection.\n\n**What are gravitational waves?**\n\nThey're ripples in spacetime predicted by Einstein in 1916. When massive objects accelerate (like two black holes spiraling together), they create these waves that travel at light speed. LIGO first detected them in 2015!\n\n**The cool physics:**\n\nIn the weak-field limit, spacetime can be written as flat space plus a small perturbation: g_μν = η_μν + h_μν\n\nThe key quantity is the \"chirp mass\": M = (m₁m₂)^(3/5)/(m₁+m₂)^(1/5)\n\nThis determines how the frequency evolves: f ∝ (t_c - t)^(-3/8)\n\nThat's the \"chirp\" - frequency increasing as the black holes get closer!\n\n**What I simulated:**\n\n- Two black holes: 36 and 29 solar masses\n- Distance: 410 Megaparsecs\n- Generated both polarization states (h₊ and h×)\n- Created time-frequency spectrograms showing the chirp\n\n**Mind-blowing result:**\n\nAt merger, the peak gravitational wave luminosity was ~3.6×10⁴⁹ watts. That's MORE than all stars in the observable universe combined, just for a brief moment!\n\nThe peak strain at Earth was only ~10⁻²¹. That's like measuring a change smaller than the width of a proton across LIGO's 4km arms. Incredible engineering!\n\n**What I learned:**\n\n1. The quadrupole formula connects mass distribution to wave strain\n2. Two polarizations carry complementary info about source orientation\n3. 
These signals enable \"standard siren\" cosmology for measuring the universe's expansion\n\nCheck out the notebook with full code and visualizations here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/gravitational_waves.ipynb\n\nHappy to answer questions about the implementation!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/gravitational_waves.ipynb",
"category": "general",
"date": "2026-07-02",
"time": "09:00"
},
{
"source": "autoencoder_dimensionality_reduction",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "Built an autoencoder from scratch in NumPy - here's how it compresses 10D data to 2D while preserving cluster structure",
"body": "I implemented a neural network autoencoder using only NumPy (no PyTorch/TensorFlow) to understand how dimensionality reduction works at a fundamental level.\n\n**What's an autoencoder?**\n\nThink of it as a \"bottleneck\" network:\n- Encoder: Takes your high-dimensional data and squeezes it through a narrow layer\n- Decoder: Tries to reconstruct the original from that compressed representation\n- Training: Minimize the difference between input and output\n\nThe math is straightforward:\n- Encoder: z = ReLU(W₁x + b₁)\n- Decoder: x' = W₂z + b₂\n- Loss: ||x - x'||² (mean squared error)\n\n**My experiment:**\n\nGenerated synthetic data with 3 clusters living in 10 dimensions (but really lying on a 2D manifold). The autoencoder learned to compress this to 2D while keeping the clusters perfectly separated - and it never saw the cluster labels during training!\n\n**Key learnings:**\n\n1. Xavier initialization matters - prevents vanishing/exploding gradients\n2. Adam optimizer converges much faster than vanilla SGD\n3. 
Compared to PCA: both work, but autoencoders can capture nonlinear relationships\n\n**Variance explained:** ~87% with just 2 latent dimensions\n\nThe full notebook walks through the math (encoder/decoder equations, backpropagation, Adam updates) and includes visualizations of the latent space vs. the true underlying coordinates.\n\nView and run the notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/autoencoder_dimensionality_reduction.ipynb",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/autoencoder_dimensionality_reduction.ipynb",
"category": "general",
"date": "2026-07-02",
"time": "09:00"
},
{
"source": "portfolio_optimization_markowitz",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "Built Markowitz Portfolio Optimization from scratch in Python - Efficient Frontier visualization with scipy",
"body": "I implemented Harry Markowitz's 1952 Mean-Variance Optimization model in Python and wanted to share the results and code approach.\n\n**ELI5 the concept:**\nImagine you have $1000 to invest across 5 stocks. How do you split it up? Markowitz showed that the answer depends on what return you want—and crucially, that you can get LESS risk by combining assets than by holding any single one. This is diversification, proven mathematically.\n\n**The math (in Unicode since Reddit doesn't render LaTeX):**\n- Portfolio return: μₚ = Σ wᵢμᵢ (weighted sum of returns)\n- Portfolio risk: σₚ = √(wᵀΣw) where Σ is the covariance matrix\n- Optimization: minimize σₚ² subject to target return\n\n**What I built:**\n1. Portfolio return/variance calculators\n2. Constrained optimization using scipy.optimize.minimize (SLSQP)\n3. Efficient frontier computation (100 optimal portfolios)\n4. Special portfolio finders (min variance, max Sharpe ratio)\n5. 
Visualization with Capital Market Line\n\n**Key findings from 5-asset simulation:**\n- Min Variance Portfolio: 11.8% return, 10.2% volatility\n- Max Sharpe Portfolio: 15.4% return, 18.1% volatility\n- Sharpe Ratio: ~0.68 (excess return per unit risk)\n\nThe coolest insight: the efficient frontier lies LEFT of all individual assets, proving diversification reduces risk.\n\n**View the full notebook with code and interactive plots:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/portfolio_optimization_markowitz.ipynb\n\nLibraries used: numpy, scipy, matplotlib\n\nHappy to answer questions about the implementation!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/portfolio_optimization_markowitz.ipynb",
"category": "general",
"date": "2026-07-03",
"time": "09:00"
},
{
"source": "bisection_method",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "Visualizing the Bisection Method: A Simple but Powerful Root-Finding Algorithm",
"body": "I created a Jupyter notebook exploring the bisection method, one of the most fundamental algorithms in numerical analysis. Here's what I learned:\n\n**The Basic Idea (ELI5)**\n\nImagine you're playing a number guessing game. Someone picks a number between 1 and 100, and tells you \"higher\" or \"lower\" after each guess. The optimal strategy? Always guess the middle.\n\nThat's exactly what the bisection method does for finding roots (where a function equals zero).\n\n**How It Works**\n\n1. Start with an interval [a, b] where f(a) and f(b) have opposite signs\n2. The Intermediate Value Theorem guarantees a root exists between them\n3. Compute the midpoint c = (a + b)/2\n4. Check which half contains the root (based on sign of f(c))\n5. 
Repeat with the new, smaller interval\n\n**Key Results**\n\nFor f(x) = x³ - x - 2 on [1, 2]:\n- Found root x ≈ 1.521379706804568\n- Required 37 iterations for tolerance 10⁻¹⁰\n- Error bound after n iterations: (b₀ - a₀)/2ⁿ⁺¹\n\n**Why It Matters**\n\nThe bisection method has LINEAR convergence (error halves each step), which is slower than Newton-Raphson's quadratic convergence. But here's the tradeoff:\n\n✓ Guaranteed to converge for continuous functions\n✓ No derivatives needed\n✓ Numerically stable\n✗ Needs initial bracketing with sign change\n✗ Can't find tangent roots\n\n**Bonus:** Also found the Dottie number (≈0.739) by solving cos(x) - x = 0. It's the unique fixed point of cosine!\n\nCheck out the full notebook with visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/bisection_method.ipynb",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/bisection_method.ipynb",
"category": "general",
"date": "2026-07-03",
"time": "09:00"
},
{
"source": "support_vector_machine_basics",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I implemented Support Vector Machines from scratch - here's how the math actually works",
"body": "I've been trying to really understand SVMs beyond just calling sklearn, so I implemented one from the ground up. Here's what I learned:\n\n**The Core Idea (ELI5)**\n\nImagine you have red and blue dots on paper. You want to draw a line separating them. But not just any line - the BEST line that's as far as possible from both groups. That distance is called the \"margin.\"\n\n**The Math**\n\nFor a hyperplane wᵀx + b = 0:\n- Distance from point xᵢ to hyperplane = |wᵀxᵢ + b| / ||w||\n- We want: yᵢ(wᵀxᵢ + b) ≥ 1 for all points\n- Margin width = 2/||w||\n\nSo maximizing margin = minimizing ||w||². 
That's the optimization problem!\n\n**Soft Margins (Real Data is Messy)**\n\nReal data isn't perfectly separable, so we add slack variables ξᵢ:\n\nmin (1/2)||w||² + C∑ξᵢ\n\nThe C parameter is crucial:\n- Low C (like 0.1) = wide margin, allows misclassifications\n- High C (like 100) = narrow margin, strict classification\n\n**The Kernel Trick (Mind-Blowing Part)**\n\nWhat if data isn't linearly separable? Like points arranged in circles?\n\nInstead of finding a line in 2D, map to higher dimensions where it IS separable. But computing in infinite dimensions is impossible... unless you use the kernel trick!\n\nK(xᵢ, xⱼ) = φ(xᵢ)ᵀφ(xⱼ)\n\nThe RBF kernel K(xᵢ, xⱼ) = exp(-γ||xᵢ - xⱼ||²) implicitly maps to infinite dimensions. You never actually compute φ(x)!\n\n**Results**\n\n- Linear SVM: 100% accuracy on separable data, only a handful of support vectors needed\n- RBF SVM: Perfectly classifies circular data that's impossible for linear methods\n\n**Code**\n\nUsed NumPy + SciPy's SLSQP optimizer to solve the dual Lagrangian. The elegance of Lagrange multipliers is that αᵢ > 0 only for support vectors - most training points don't matter!\n\nFull notebook with visualizations: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/support_vector_machine_basics.ipynb",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/support_vector_machine_basics.ipynb",
"category": "general",
"date": "2026-07-04",
"time": "09:00"
},
{
"source": "quantum_tunneling_simulation",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I built a quantum tunneling simulation in Python - here's what I learned about particles passing through impossible barriers",
"body": "Ever wonder how particles can pass through barriers they shouldn't have enough energy to cross? 
I simulated this quantum mechanical phenomenon and wanted to share the key insights.\n\n**What is quantum tunneling?**\n\nIn classical physics, if you throw a ball at a wall with insufficient energy to go over it, the ball bounces back. Always. But in quantum mechanics, particles behave as waves, and there's a probability they'll appear on the other side of the barrier.\n\n**The math (simplified):**\n\nThe transmission coefficient T tells us the probability of tunneling. For a barrier of height V₀ and width a:\n\nT ≈ e^(-2κa)\n\nwhere κ = √(2m(V₀-E))/ℏ\n\nThis exponential dependence is crucial - double the barrier width and transmission can drop by many orders of magnitude!\n\n**What I learned:**\n\n1. **Exponential sensitivity**: A 0.1 nm barrier gives ~66% transmission for a 5 eV electron through a 10 eV barrier. Increase to 0.5 nm? Transmission drops to 0.002%.\n\n2. **Energy matters**: As particle energy approaches barrier height, tunneling probability increases dramatically.\n\n3. **Real applications**: This explains how scanning tunneling microscopes achieve atomic resolution, how tunnel diodes work, and even nuclear alpha decay.\n\n**The code:**\n\nUsed NumPy/SciPy to implement the analytical solution to the time-independent Schrödinger equation. 
The transfer matrix method calculates wave function coefficients across barrier boundaries.\n\nCheck out the full notebook with visualizations here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/quantum_tunneling_simulation.ipynb\n\nHappy to answer questions about the implementation or physics!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/quantum_tunneling_simulation.ipynb",
"category": "physics",
"date": "2026-07-04",
"time": "09:00"
},
{
"source": "greens_function",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "Green's Functions Explained: How Physicists Solve Differential Equations with \"Impulse Responses\"",
"body": "**What are Green's functions?**\n\nImagine you have a complex system - heat flowing through a rod, electric potential from charges, quantum particles moving through space. These are all described by differential equations that can be hard to solve directly.\n\nGreen's functions provide a clever shortcut: instead of solving for arbitrary sources, first solve for a single point source (a delta function δ). This gives you G(x, x'), which tells you the response at position x when you poke the system at position x'.\n\n**Why is this useful?**\n\nOnce you know G, you can solve for ANY source distribution f(x) using superposition:\n\nu(x) = ∫G(x, x')f(x')dx'\n\nIt's like knowing how one drop creates ripples in a pond - then you can predict the pattern from any arrangement of drops!\n\n**What I implemented:**\n\nI built a Python notebook solving the 1D Poisson equation:\n\nd²u/dx² = f(x), with u(0) = u(L) = 0\n\nThe Green's function turns out to be:\n- G(x, x') = x(x' - L)/L for x ≤ x'\n- G(x, x') = x'(x - L)/L for x > x'\n\n**Cool findings:**\n\n1. The Green's function is symmetric: G(x, x') = G(x', x). 
This reflects physical reciprocity - the response at A from a source at B equals the response at B from a source at A.\n\n2. Numerical integration matched the analytical solution with maximum error ~10⁻¹⁵!\n\n3. Also visualized the heat kernel - the Green's function for diffusion equations. It's a Gaussian that spreads out over time while maintaining total \"mass\" (integral = 1).\n\n**Applications:**\n\nGreen's functions appear everywhere:\n- Electrostatics (potential from charges)\n- Heat conduction\n- Quantum mechanics (propagators)\n- Signal processing (impulse response)\n\n**View the full interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/greens_function.ipynb\n\nHappy to answer questions about the math or implementation!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/greens_function.ipynb",
"category": "general",
"date": "2026-07-05",
"time": "09:00"
},
{
"source": "ant_colony_optimization",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I implemented Ant Colony Optimization in Python - here's how swarm intelligence solves the Traveling Salesman Problem",
"body": "**ELI5 Version:** Imagine you're an ant looking for food. You wander randomly, but when you find something good, you leave a scent trail on your way home. Other ants smell this trail and follow it. The more ants that use a path, the stronger it smells. Over time, the shortest paths end up smelling the strongest because ants can walk them faster and more often.\n\n**The Algorithm:**\n\nAnt Colony Optimization (ACO) turns this biological behavior into math:\n\n1. **Transition Probability**: Each ant at city i chooses next city j with probability proportional to [τ_ij^α · η_ij^β], where τ is pheromone and η = 1/distance\n\n2. 
**Pheromone Update**: After all ants complete tours, pheromones evaporate by factor (1-ρ), then each ant deposits Q/L_k on its path (L_k = tour length)\n\n**Key Parameters:**\n- α (alpha): How much ants trust pheromones\n- β (beta): How much ants prefer shorter edges\n- ρ (rho): Evaporation rate - prevents premature convergence\n\n**Results:**\nRunning 20 ants for 100 iterations on 25 random cities, the algorithm found a near-optimal tour. The convergence plot shows rapid improvement early on, then fine-tuning.\n\n**What I Learned:**\n- The balance between α and β controls exploration vs exploitation\n- Too high evaporation → lose good solutions; too low → get stuck\n- No single ant finds the answer - it emerges from collective behavior\n\nView and run the full interactive notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/ant_colony_optimization.ipynb\n\nHappy to answer questions about the implementation!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/ant_colony_optimization.ipynb",
"category": "general",
"date": "2026-07-05",
"time": "09:00"
},
{
"source": "newton_raphson_root_finding",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "Implemented Newton-Raphson Root Finding with Visualizations - Quadratic Convergence is Satisfying",
"body": "Just built a complete implementation of the Newton-Raphson method and wanted to share some insights!\n\n**What is Newton-Raphson?**\n\nIt's an iterative algorithm to find roots of equations (where f(x) = 0). The update formula is:\n\nxₙ₊₁ = xₙ - f(xₙ) / f'(xₙ)\n\nGeometrically, you're finding where the tangent line at your current point crosses the x-axis.\n\n**Why it's cool: Quadratic Convergence**\n\nThe number of correct digits roughly DOUBLES each iteration. 
Finding √2 starting from x=1:\n\n| Iteration | Value | Error |\n|-----------|-------|-------|\n| 0 | 1.0 | 4e-1 |\n| 1 | 1.5 | 9e-2 |\n| 2 | 1.41666... | 2e-3 |\n| 3 | 1.414215... | 2e-6 |\n| 4 | 1.41421356... | 6e-12 |\n\nFive iterations to machine precision!\n\n**Interesting discoveries:**\n\n1. The formula for √a simplifies to xₙ₊₁ = (xₙ + a/xₙ)/2 - this is the ancient Babylonian method!\n\n2. Basin of attraction analysis shows which initial guesses lead to which roots. For f(x) = x³ - x (roots at -1, 0, 1), the boundaries between basins can be complex.\n\n3. Failure modes: The method struggles when f'(x) ≈ 0 or when iterates cycle.\n\n**View the full notebook with interactive code:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/newton_raphson_root_finding.ipynb\n\nThe notebook includes:\n- Core algorithm implementation\n- Convergence rate plots\n- Geometric visualization of iterations\n- Basin of attraction analysis\n- Discussion of failure modes and extensions\n\nWould love to hear about interesting applications you've used Newton-Raphson for!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/newton_raphson_root_finding.ipynb",
"category": "general",
"date": "2026-07-06",
"time": "09:00"
},
{
"source": "hash_function_visualization",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I visualized 5 hash functions and compared their distribution - here's what I learned about collisions and uniformity",
"body": "I've been studying hash functions and wanted to really understand the differences between common algorithms, so I built a comparison tool in Python.\n\n**What I tested:**\n- Division method: h(k) = k mod m\n- Multiplication method (Knuth): uses the golden ratio conjugate ≈ 0.618\n- Polynomial rolling hash: great for strings\n- DJB2: Dan Bernstein's classic algorithm\n- SHA-256: cryptographic hash 
(truncated)\n\n**Key findings:**\n\n1. **Distribution matters a lot** - The chi-squared (χ²) test revealed SHA-256 and DJB2 have the most uniform distribution. Division method can cluster badly with patterned input.\n\n2. **Collision rate follows the Birthday Paradox** - For load factor α = n/m < 1, collisions grow approximately as n²/2m. Once you exceed 50% capacity, collisions increase rapidly.\n\n3. **Avalanche effect is critical for security** - SHA-256 shows dramatic output changes from tiny input modifications. Simple hash functions barely change.\n\n4. **Practical recommendations:**\n - Hash tables: DJB2 or multiplication method\n - Security: SHA-256 or similar cryptographic hash\n - String hashing: Polynomial or DJB2\n - Always use prime table sizes!\n\nThe visualization includes distribution histograms, collision curves vs load factor, and a heatmap showing how SHA-256 spreads integers 0-999 across buckets.\n\n**View the full interactive notebook:** https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/hash_function_visualization.ipynb\n\nHappy to answer questions about hash function theory or the implementation!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/hash_function_visualization.ipynb",
"category": "general",
"date": "2026-07-06",
"time": "09:00"
},
{
"source": "poisson_process_simulation",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I simulated 10,000 Poisson processes to prove the textbook math actually works",
"body": "The Poisson process is everywhere - customer arrivals at a store, radioactive decay, network packet arrivals. But does the theory actually hold up in practice?\n\n**What is a Poisson process?**\n\nImagine events happening randomly over time, like customers walking into a coffee shop. 
If arrivals are:\n- Independent (one customer arriving doesn't affect when the next shows up)\n- Occurring at a constant average rate λ (say, 2 per hour)\n\nThen you have a Poisson process! The cool part: the waiting time between events follows an exponential distribution with mean 1/λ.\n\n**What I did:**\n\nI implemented two simulation methods in Python:\n1. **Inter-arrival method:** Generate random exponential waiting times and add them up\n2. **Order statistics method:** First decide how many events occur (Poisson distributed), then place them uniformly in the time interval\n\n**The results:**\n\nAfter 10,000 simulations with λ=2 over T=10 time units:\n- Expected events: 20, Sample mean: 20.0\n- Inter-arrival mean: 0.5 (theory: 1/λ = 0.5)\n- Kolmogorov-Smirnov test: p > 0.05 (inter-arrivals ARE exponential)\n- Chi-squared test: p > 0.05 (event counts ARE Poisson)\n\n**Real-world applications:**\n\n- Queueing theory (bank lines, call centers)\n- Reliability engineering (equipment failures)\n- Telecommunications (data packets)\n- Physics (Geiger counters, photon detection)\n- Finance (trade arrivals)\n\n**Try it yourself:**\n\nView the full interactive notebook with code and visualizations:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/poisson_process_simulation.ipynb",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/poisson_process_simulation.ipynb",
"category": "general",
"date": "2026-07-07",
"time": "09:00"
},
{
"source": "van_der_pol_oscillator",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "Visualizing the Van der Pol Oscillator: From Harmonic Motion to Relaxation Oscillations",
"body": "I created a Jupyter notebook exploring the Van der Pol oscillator - one of the most important systems in nonlinear dynamics.\n\n**What is it?**\n\nThe Van der Pol oscillator is governed by:\n\nd²x/dt² - μ(1-x²)dx/dt + x = 
0\n\nUnlike a simple spring, this system has *variable damping* that depends on position:\n\n- When |x| < 1: Negative damping → energy is added\n- When |x| > 1: Positive damping → energy is removed\n\nThis self-regulation creates stable \"limit cycles\" - the system always settles into the same periodic orbit.\n\n**What I learned:**\n\n1. The parameter μ controls everything:\n - μ = 0.1: Almost sinusoidal\n - μ = 6.0: Sharp \"relaxation oscillations\" with slow buildup and rapid discharge\n\n2. Period increases with μ. For large μ: T ≈ 1.614μ\n\n3. ALL initial conditions (except x=0, ẋ=0) converge to the same limit cycle\n\n**Real applications:** Heartbeat pacemakers, neuron firing, tunnel diode circuits\n\nThe notebook includes phase portraits, time series, and convergence analysis using NumPy and SciPy.\n\nView and interact with the full notebook here:\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/van_der_pol_oscillator.ipynb",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/van_der_pol_oscillator.ipynb",
"category": "general",
"date": "2026-07-07",
"time": "09:00"
},
{
"source": "strange_attractor_lorenz",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "Visualizing the Lorenz Strange Attractor - Why Weather Prediction Has Fundamental Limits",
"body": "I just finished implementing Edward Lorenz's famous chaotic system in Python and wanted to share what I learned.\n\n**The Setup**\n\nThe Lorenz system is three coupled differential equations:\n\ndx/dt = σ(y - x)\ndy/dt = x(ρ - z) - y\ndz/dt = xy - βz\n\nWith the classic parameters σ=10, ρ=28, β=8/3, this system exhibits deterministic chaos.\n\n**What Makes It Chaotic?**\n\n1. **Bounded but non-periodic** - The trajectory never repeats exactly but stays confined to a butterfly-shaped region\n\n2. 
**Sensitive dependence on initial conditions** - I tested this by running two simulations where x₀ differed by just 10⁻¹⁰. They started identical but completely diverged after ~25 time units\n\n3. **Strange attractor** - The attractor has fractal dimension ≈2.06 (not a simple surface)\n\n**The Code**\n\nUsed scipy.integrate.solve_ivp with the RK45 method. The system has three fixed points: the origin (unstable for ρ>1) and a symmetric pair at (±8.485, ±8.485, 27).\n\n**Why It Matters**\n\nLorenz was studying atmospheric convection when he discovered this. The implication: even with perfect knowledge of atmospheric physics, tiny measurement errors make long-term weather prediction fundamentally impossible. This isn't a computing limitation - it's mathematical reality.\n\n**View the full notebook with code and visualizations:**\nhttps://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/strange_attractor_lorenz.ipynb",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/strange_attractor_lorenz.ipynb",
"category": "general",
"date": "2026-07-08",
"time": "09:00"
},
{
"source": "quantum_entanglement",
"content_type": "notebook",
"subreddit": "CoCalc",
"title": "I simulated quantum entanglement in pure NumPy and proved Bell's inequality violation",
"body": "I built a quantum entanglement simulator from scratch using only NumPy and it really helped me understand why quantum mechanics is so weird.\n\n**ELI5 version:** Imagine two coins that are magically linked. When you flip one in New York and it lands heads, the other one in Tokyo instantly lands heads too—no matter the distance. That's entanglement.\n\nBut here's the spooky part: before you look, neither coin has a definite state. They're in a \"superposition\" of both heads AND tails. 
The act of measuring forces them to \"decide.\"\n\n**What I built:**\n- Created all four Bell states (maximally entangled two-qubit states)\n- Simulated measurements at different angles\n- Tested the CHSH inequality\n\n**The key result:** Classical physics says correlations between measurements can't exceed 2 (the CHSH bound). My quantum simulation consistently hits ~2.83, which equals 2√2—the maximum allowed by quantum mechanics.\n\nThis reproduces the violation that rules out local hidden variable theories (Einstein's preferred explanation). Quantum correlations are genuinely non-local.\n\n**What this means:**\nThis same principle powers quantum cryptography (unhackable communication), quantum teleportation, and quantum computing.\n\nView and run the full notebook: https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/quantum_entanglement.ipynb\n\nHappy to answer questions about the implementation!",
"link": "https://cocalc.com/github/Ok-landscape/computational-pipeline/blob/main/notebooks/published/quantum_entanglement.ipynb",
"category": "physics",
"date": "2026-07-08",
"time": "09:00"
}
]
}